public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-02 22:32 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-02 22:32 UTC (permalink / raw)
  To: gentoo-commits

commit:     85190f4cb73b057b99259b598c315326ac7ad218
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  2 22:27:34 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug  2 22:31:38 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=85190f4c

Select SECCOMP options only if supported

Thanks to Matt Turner for this patch

Some architectures (e.g., alpha, sparc) do not support SECCOMP.
Without this, kernel builds will show:

WARNING: unmet direct dependencies detected for SECCOMP
  Depends on [n]: HAVE_ARCH_SECCOMP [=n]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

WARNING: unmet direct dependencies detected for SECCOMP_FILTER
  Depends on [n]: HAVE_ARCH_SECCOMP_FILTER [=n] && SECCOMP [=y] && NET [=y]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

Signed-off-by: Matt Turner <mattst88 <AT> gentoo.org>
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
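
The fix uses Kconfig's conditional select: "select FOO if BAR" only
forces FOO on when BAR is already met, so architectures that never
define HAVE_ARCH_SECCOMP are left alone and the unmet-dependency
warnings above go away. A minimal sketch of the intended pattern
(prompt text illustrative; note that the diff below is still missing
the "if" on its SECCOMP_FILTER lines, which the "Fix SECCOMP Patch"
commit later in this thread supplies):

	config GENTOO_LINUX_INIT_SYSTEMD
		bool "systemd"
		select SECCOMP if HAVE_ARCH_SECCOMP
		select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER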

 4567_distro-Gentoo-Kconfig.patch | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index c063c6d..f875dba 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -138,8 +138,8 @@
 +	select NET
 +	select NET_NS
 +	select PROC_FS
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -188,8 +188,8 @@
 +	select DEBUG_SG
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-18 15:03 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-18 15:03 UTC (permalink / raw)
  To: gentoo-commits

commit:     66e67eeda8a45252fde1ab2583983b68d79ba5ad
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 22:49:56 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 18 15:02:42 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=66e67eed

Add CONFIG_RELOCATABLE when selecting RANDOMIZE_BASE

Redo menus to make them more user-friendly

Bug: https://bugs.gentoo.org/806300

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
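
Structurally, the rework turns the previous menu/"visible if" block
into a menuconfig symbol guarded by an if/endif range, so the whole
Kernel Self Protection group hangs off a single toggle; it also
selects RELOCATABLE alongside RANDOMIZE_BASE, since a KASLR kernel
must be relocatable. Reduced to a sketch (option lists omitted; see
the diff below):

	menuconfig GENTOO_KERNEL_SELF_PROTECTION
		bool "Kernel Self Protection Project"
		depends on GENTOO_LINUX

	if GENTOO_KERNEL_SELF_PROTECTION

	config GENTOO_KERNEL_SELF_PROTECTION_COMMON
		bool "Enable Kernel Self Protection Project Recommendations"

	endif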

 4567_distro-Gentoo-Kconfig.patch | 51 ++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 23 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index f875dba..a254a6b 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-07-04 10:53:51.006624416 -0400
-+++ b/distro/Kconfig	2021-07-04 11:07:33.534248860 -0400
-@@ -0,0 +1,263 @@
+--- /dev/null	2021-08-03 06:44:27.767516067 -0400
++++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
+@@ -0,0 +1,268 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -166,11 +166,22 @@
 +
 +endmenu
 +
-+menu "Enable Kernel Self Protection Project Recommendations"
-+	visible if GENTOO_LINUX
++menuconfig GENTOO_KERNEL_SELF_PROTECTION
++	bool "Kernel Self Protection Project"
++	depends on GENTOO_LINUX
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
++		GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency information on your 
++		specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		for X86_64
 +
-+config GENTOO_KERNEL_SELF_PROTECTION
-+	bool "Architecture Independant Kernel Self Protection Project Recommendations"
++if GENTOO_KERNEL_SELF_PROTECTION
++config GENTOO_KERNEL_SELF_PROTECTION_COMMON
++	bool "Enable Kernel Self Protection Project Recommendations"
 +
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
@@ -214,26 +225,21 @@
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
-+		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
-+		dependency information on your specific architecture.
-+		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
-+		for X86_64
-+
-+menu "Architecture Specific Self Protection Project Recommendations"
++		Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency 
++		information on your specific architecture.  Note 2: Please see the URL above for 
++		numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 for X86_64
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_64
-+	bool "X86_64 KSPP Settings"
++	bool "X86_64 KSPP Settings" if GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +
-+	depends on !X86_MSR && X86_64
++	depends on !X86_MSR && X86_64 && GENTOO_KERNEL_SELF_PROTECTION
 +	default n
 +	
 +	select RANDOMIZE_BASE
 +	select RANDOMIZE_MEMORY
++	select RELOCATABLE
 +	select LEGACY_VSYSCALL_NONE
-+ select PAGE_TABLE_ISOLATION
++ 	select PAGE_TABLE_ISOLATION
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64
@@ -243,6 +249,7 @@
 +	default n
 +
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select ARM64_SW_TTBR0_PAN
 +	select CONFIG_UNMAP_KERNEL_AT_EL0
 +
@@ -255,6 +262,7 @@
 +	select HIGHMEM64G
 +	select X86_PAE
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select PAGE_TABLE_ISOLATION
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM
@@ -267,10 +275,7 @@
 +	select STRICT_MEMORY_RWX
 +	select CPU_SW_DOMAIN_PAN
 +
-+endmenu
-+
-+endmenu
-+
++endif
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-18 15:03 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-18 15:03 UTC (permalink / raw)
  To: gentoo-commits

commit:     86826c396abefce49c4f34ff4406d2430ad4dea7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  9 23:18:23 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 18 15:02:42 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=86826c39

Fix GCC_PLUGINS depends

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
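
The distinction matters because "select" forces a symbol on while
ignoring that symbol's own dependencies, whereas "depends on" hides
the option until the requirement is met. Moving GCC_PLUGINS from a
select to the depends line means the KSPP recommendations are only
offered when the toolchain can actually build GCC plugins. A trimmed
sketch (the real depends line carries many more terms; see the diff
below):

	config GENTOO_KERNEL_SELF_PROTECTION_COMMON
		bool "Enable Kernel Self Protection Project Recommendations"
		depends on GENTOO_LINUX && GCC_PLUGINS
		select GCC_PLUGIN_LATENT_ENTROPY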

 4567_distro-Gentoo-Kconfig.patch | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index a254a6b..f0dbf2d 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-03 06:44:27.767516067 -0400
-+++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
-@@ -0,0 +1,268 @@
+--- /dev/null	2021-08-09 07:18:54.945580285 -0400
++++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
+@@ -0,0 +1,267 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -170,7 +170,7 @@
 +	bool "Kernel Self Protection Project"
 +	depends on GENTOO_LINUX
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
 +		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
 +		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
 +		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
@@ -183,7 +183,7 @@
 +config GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +	bool "Enable Kernel Self Protection Project Recommendations"
 +
-+	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS
 +
 +	select BUG
 +	select STRICT_KERNEL_RWX
@@ -216,7 +216,6 @@
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
 +	select PANIC_ON_OOPS
-+	select CONFIG_GCC_PLUGINS
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-25 12:25 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-25 12:25 UTC (permalink / raw)
  To: gentoo-commits

commit:     de032601c6983688125d06ffd873eff0f9dd6e8b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 24 19:53:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 12:25:44 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=de032601

Add CONFIG option to print firmware info

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index f0dbf2d..7152b76 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-09 07:18:54.945580285 -0400
-+++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
-@@ -0,0 +1,267 @@
+--- /dev/null	2021-08-24 15:34:24.700702871 -0400
++++ b/distro/Kconfig	2021-08-24 15:49:16.965525424 -0400
+@@ -0,0 +1,281 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -275,6 +275,20 @@
 +	select CPU_SW_DOMAIN_PAN
 +
 +endif
++
++config GENTOO_PRINT_FIRMWARE_INFO
++	bool "Print firmware information that the kernel attempts to load"
++
++	depends on GENTOO_LINUX
++	default n 
++
++	help
++		Enable this option to print information about firmware that the kernel
++		is attempting to load.  This information can be accessible via the
++		dmesg command-line utility
++
++		See the settings that become available for more details and fine-tuning.
++
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-25 16:24 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-25 16:24 UTC (permalink / raw)
  To: gentoo-commits

commit:     32ca27ce2fb4e7e6528dfd1912299340e0313a40
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 11:00:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 12:51:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=32ca27ce

Fix SECCOMP Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 7152b76..fd8f955 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -139,7 +139,7 @@
 +	select NET_NS
 +	select PROC_FS
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -200,7 +200,7 @@
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-25 16:24 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-25 16:24 UTC (permalink / raw)
  To: gentoo-commits

commit:     1cf70803591aa2fe8a79c85d7d5cbc91b79dcd6c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 25 16:20:53 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 16:24:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1cf70803

Change CONFIG_GENTOO_PRINT_FIRMWARE_INFO to y

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index fd8f955..d2175f0 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -280,7 +280,7 @@
 +	bool "Print firmware information that the kernel attempts to load"
 +
 +	depends on GENTOO_LINUX
-+	default n 
++	default y
 +
 +	help
 +		Enable this option to print information about firmware that the kernel



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-25 16:30 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-25 16:30 UTC (permalink / raw)
  To: gentoo-commits

commit:     cdbe54e1de55e1964a5dc2df7feefdf4ade7cdae
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 25 16:29:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 16:29:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cdbe54e1

Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO)

Thanks to Georgy Yakovlev

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
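
The patch itself is a single printk compiled in only when the new
Kconfig symbol is set; in essence (mirroring the hunk below):

	#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
		printk(KERN_NOTICE "Loading firmware: %s\n", name);
	#endif

With the option enabled, every firmware file the kernel requests is
logged to the ring buffer, where it can be read with dmesg.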

 0000_README                               |  4 ++++
 3000_Support-printing-firmware-info.patch | 14 ++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/0000_README b/0000_README
index d9af695..2619131 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  3000_Support-printing-firmware-info.patch
+From:   https://bugs.gentoo.org/732852
+Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 0000000..a630cfb
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c	2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c	2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+ 
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++        printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-08-30 17:23 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-08-30 17:23 UTC (permalink / raw)
  To: gentoo-commits

commit:     ce9ce2f6facb4a93def81aaf8f2dec5ef8ec5b2f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 30 17:23:05 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug 30 17:23:05 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ce9ce2f6

Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 ++
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ++++++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/0000_README b/0000_README
index 2619131..99766a3 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
+Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
new file mode 100644
index 0000000..e0a67ea
--- /dev/null
+++ b/2700_Bluetooth-usb-alt-3-for-WBS.patch
@@ -0,0 +1,84 @@
+From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
+From: Pauli Virtanen <pav@iki.fi>
+Date: Mon, 26 Jul 2021 21:02:06 +0300
+Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Some USB BT adapters don't satisfy the MTU requirement mentioned in
+commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+and have ALT 3 setting that produces no/garbled audio. Some adapters
+with larger MTU were also reported to have problems with ALT 3.
+
+Add a flag and check it and MTU before selecting ALT 3, falling back to
+ALT 1. Enable the flag for Realtek, restoring the previous behavior for
+non-Realtek devices.
+
+Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
+works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
+works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
+ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
+audio) Intel 8087:0a2b.
+
+Signed-off-by: Pauli Virtanen <pav@iki.fi>
+Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+Tested-by: Michał Kępień <kernel@kempniu.pl>
+Tested-by: Jonathan Lampérth <jon@h4n.dev>
+Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
+---
+ drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
+ 1 file changed, 14 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 488f110e17e27..2336f731dbc7e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+-- 
+cgit 1.2.3-1.el7
+



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-03  9:15 Alice Ferrazzi
  0 siblings, 0 replies; 40+ messages in thread
From: Alice Ferrazzi @ 2021-09-03  9:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     4d7ce32682a38594c52904f5943b117907f197a5
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 09:12:07 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 09:15:04 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4d7ce326

Bump patch to 20210818, commit f1d0af2c9d807b137909e98c11caf7504f4e2066

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
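
Besides the whitespace cleanup, the bump fixes three LANG_VERSION
typos to CLANG_VERSION in the Generic-x86-64-v2/v3/v4 entries; the
old expressions tested a symbol that does not exist. The corrected
pattern gates each option on a minimum compiler version, e.g. (one
entry shown; see the diff below):

	config GENERIC_CPU3
		bool "Generic-x86-64-v3"
		depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
		depends on X86_64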

 5010_enable-cpu-optimizations-universal.patch | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index c45d13b..e37528f 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -219,7 +219,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MZEN3
 +	bool "AMD Zen 3"
-+	depends on ( CC_IS_GCC && GCC_VERSION >= 100300 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	help
 +	  Select this for AMD Family 19h Zen 3 processors.
 +
@@ -378,7 +378,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -388,7 +388,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
-+	depends on  ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -398,7 +398,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MSAPPHIRERAPIDS
 +	bool "Intel Sapphire Rapids"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -408,7 +408,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MROCKETLAKE
 +	bool "Intel Rocket Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -418,7 +418,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MALDERLAKE
 +	bool "Intel Alder Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -435,7 +435,7 @@ index 814fe0d349b0..8acf6519d279 100644
  
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU.
@@ -443,7 +443,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU3
 +	bool "Generic-x86-64-v3"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64-v3 CPU with v3 instructions.
@@ -451,7 +451,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU4
 +	bool "Generic-x86-64-v4"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU with v4 instructions.



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-03 11:17 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-03 11:17 UTC (permalink / raw)
  To: gentoo-commits

commit:     3593568fbe32630d6f1df2668b031bb0c8c441bf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:17:29 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:17:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3593568f

Linux patch 5.14.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1000_linux-5.14.1.patch | 406 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 410 insertions(+)

diff --git a/0000_README b/0000_README
index 99766a3..b8850b4 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.14.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-5.14.1.patch b/1000_linux-5.14.1.patch
new file mode 100644
index 0000000..469dd5b
--- /dev/null
+++ b/1000_linux-5.14.1.patch
@@ -0,0 +1,406 @@
+diff --git a/Makefile b/Makefile
+index 61741e9d9c6e6..83d1f7c1fd304 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 87460e0e5c72f..fef79ea52e3ed 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4029,23 +4029,23 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
+ 	if (fdc_state[FDC(drive)].rawcmd == 1)
+ 		fdc_state[FDC(drive)].rawcmd = 2;
+ 
+-	if (mode & (FMODE_READ|FMODE_WRITE)) {
+-		drive_state[drive].last_checked = 0;
+-		clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags);
+-		if (bdev_check_media_change(bdev))
+-			floppy_revalidate(bdev->bd_disk);
+-		if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
+-			goto out;
+-		if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++	if (!(mode & FMODE_NDELAY)) {
++		if (mode & (FMODE_READ|FMODE_WRITE)) {
++			drive_state[drive].last_checked = 0;
++			clear_bit(FD_OPEN_SHOULD_FAIL_BIT,
++				  &drive_state[drive].flags);
++			if (bdev_check_media_change(bdev))
++				floppy_revalidate(bdev->bd_disk);
++			if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
++				goto out;
++			if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++				goto out;
++		}
++		res = -EROFS;
++		if ((mode & FMODE_WRITE) &&
++		    !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+ 			goto out;
+ 	}
+-
+-	res = -EROFS;
+-
+-	if ((mode & FMODE_WRITE) &&
+-			!test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+-		goto out;
+-
+ 	mutex_unlock(&open_lock);
+ 	mutex_unlock(&floppy_mutex);
+ 	return 0;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a9855a2dd5616..a552e7b483603 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -525,6 +525,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1757,16 +1758,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -4742,6 +4747,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 632f0fcc5aa73..05bc46634b369 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1308,11 +1308,8 @@ mt7530_port_bridge_leave(struct dsa_switch *ds, int port,
+ 		/* Remove this port from the port matrix of the other ports
+ 		 * in the same bridge. If the port is disabled, port matrix
+ 		 * is kept and not being setup until the port becomes enabled.
+-		 * And the other port's port matrix cannot be broken when the
+-		 * other port is still a VLAN-aware port.
+ 		 */
+-		if (dsa_is_user_port(ds, i) && i != port &&
+-		   !dsa_port_is_vlan_filtering(dsa_to_port(ds, i))) {
++		if (dsa_is_user_port(ds, i) && i != port) {
+ 			if (dsa_to_port(ds, i)->bridge_dev != bridge)
+ 				continue;
+ 			if (priv->ports[i].enable)
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 0e0cd9e9e589e..3639bb6dc372e 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -246,6 +246,8 @@ int vt_waitactive(int n)
+  *
+  * XXX It should at least call into the driver, fbdev's definitely need to
+  * restore their engine state. --BenH
++ *
++ * Called with the console lock held.
+  */
+ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ {
+@@ -262,7 +264,6 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return -EINVAL;
+ 	}
+ 
+-	/* FIXME: this needs the console lock extending */
+ 	if (vc->vc_mode == mode)
+ 		return 0;
+ 
+@@ -271,12 +272,10 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return 0;
+ 
+ 	/* explicitly blank/unblank the screen if switching modes */
+-	console_lock();
+ 	if (mode == KD_TEXT)
+ 		do_unblank_screen(1);
+ 	else
+ 		do_blank_screen(1);
+-	console_unlock();
+ 
+ 	return 0;
+ }
+@@ -378,7 +377,10 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd,
+ 		if (!perm)
+ 			return -EPERM;
+ 
+-		return vt_kdsetmode(vc, arg);
++		console_lock();
++		ret = vt_kdsetmode(vc, arg);
++		console_unlock();
++		return ret;
+ 
+ 	case KDGETMODE:
+ 		return put_user(vc->vc_mode, (int __user *)arg);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 70f94b75f25a6..354ffd8f81af9 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2137,7 +2137,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	if (IS_ERR(device)) {
+ 		if (PTR_ERR(device) == -ENOENT &&
+-		    strcmp(device_path, "missing") == 0)
++		    device_path && strcmp(device_path, "missing") == 0)
+ 			ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
+ 		else
+ 			ret = PTR_ERR(device);
+diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
+index a73b0376e6f37..af74599ae1cf0 100644
+--- a/fs/crypto/hooks.c
++++ b/fs/crypto/hooks.c
+@@ -384,3 +384,47 @@ err_kfree:
+ 	return ERR_PTR(err);
+ }
+ EXPORT_SYMBOL_GPL(fscrypt_get_symlink);
++
++/**
++ * fscrypt_symlink_getattr() - set the correct st_size for encrypted symlinks
++ * @path: the path for the encrypted symlink being queried
++ * @stat: the struct being filled with the symlink's attributes
++ *
++ * Override st_size of encrypted symlinks to be the length of the decrypted
++ * symlink target (or the no-key encoded symlink target, if the key is
++ * unavailable) rather than the length of the encrypted symlink target.  This is
++ * necessary for st_size to match the symlink target that userspace actually
++ * sees.  POSIX requires this, and some userspace programs depend on it.
++ *
++ * This requires reading the symlink target from disk if needed, setting up the
++ * inode's encryption key if possible, and then decrypting or encoding the
++ * symlink target.  This makes lstat() more heavyweight than is normally the
++ * case.  However, decrypted symlink targets will be cached in ->i_link, so
++ * usually the symlink won't have to be read and decrypted again later if/when
++ * it is actually followed, readlink() is called, or lstat() is called again.
++ *
++ * Return: 0 on success, -errno on failure
++ */
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat)
++{
++	struct dentry *dentry = path->dentry;
++	struct inode *inode = d_inode(dentry);
++	const char *link;
++	DEFINE_DELAYED_CALL(done);
++
++	/*
++	 * To get the symlink target that userspace will see (whether it's the
++	 * decrypted target or the no-key encoded target), we can just get it in
++	 * the same way the VFS does during path resolution and readlink().
++	 */
++	link = READ_ONCE(inode->i_link);
++	if (!link) {
++		link = inode->i_op->get_link(dentry, inode, &done);
++		if (IS_ERR(link))
++			return PTR_ERR(link);
++	}
++	stat->size = strlen(link);
++	do_delayed_call(&done);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(fscrypt_symlink_getattr);
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index dd05af983092d..69109746e6e21 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -52,10 +52,20 @@ static const char *ext4_encrypted_get_link(struct dentry *dentry,
+ 	return paddr;
+ }
+ 
++static int ext4_encrypted_symlink_getattr(struct user_namespace *mnt_userns,
++					  const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	ext4_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations ext4_encrypted_symlink_inode_operations = {
+ 	.get_link	= ext4_encrypted_get_link,
+ 	.setattr	= ext4_setattr,
+-	.getattr	= ext4_getattr,
++	.getattr	= ext4_encrypted_symlink_getattr,
+ 	.listxattr	= ext4_listxattr,
+ };
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index e149c8c66a71b..9c528e583c9d5 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -1323,9 +1323,19 @@ static const char *f2fs_encrypted_get_link(struct dentry *dentry,
+ 	return target;
+ }
+ 
++static int f2fs_encrypted_symlink_getattr(struct user_namespace *mnt_userns,
++					  const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	f2fs_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations f2fs_encrypted_symlink_inode_operations = {
+ 	.get_link	= f2fs_encrypted_get_link,
+-	.getattr	= f2fs_getattr,
++	.getattr	= f2fs_encrypted_symlink_getattr,
+ 	.setattr	= f2fs_setattr,
+ 	.listxattr	= f2fs_listxattr,
+ };
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 2e4e1d1599693..5cfa28cd00cdc 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -1630,6 +1630,17 @@ static const char *ubifs_get_link(struct dentry *dentry,
+ 	return fscrypt_get_symlink(inode, ui->data, ui->data_len, done);
+ }
+ 
++static int ubifs_symlink_getattr(struct user_namespace *mnt_userns,
++				 const struct path *path, struct kstat *stat,
++				 u32 request_mask, unsigned int query_flags)
++{
++	ubifs_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	if (IS_ENCRYPTED(d_inode(path->dentry)))
++		return fscrypt_symlink_getattr(path, stat);
++	return 0;
++}
++
+ const struct address_space_operations ubifs_file_address_operations = {
+ 	.readpage       = ubifs_readpage,
+ 	.writepage      = ubifs_writepage,
+@@ -1655,7 +1666,7 @@ const struct inode_operations ubifs_file_inode_operations = {
+ const struct inode_operations ubifs_symlink_inode_operations = {
+ 	.get_link    = ubifs_get_link,
+ 	.setattr     = ubifs_setattr,
+-	.getattr     = ubifs_getattr,
++	.getattr     = ubifs_symlink_getattr,
+ 	.listxattr   = ubifs_listxattr,
+ 	.update_time = ubifs_update_time,
+ };
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index 2ea1387bb497b..b7bfd0cd4f3ef 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -253,6 +253,7 @@ int __fscrypt_encrypt_symlink(struct inode *inode, const char *target,
+ const char *fscrypt_get_symlink(struct inode *inode, const void *caddr,
+ 				unsigned int max_size,
+ 				struct delayed_call *done);
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat);
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+@@ -583,6 +584,12 @@ static inline const char *fscrypt_get_symlink(struct inode *inode,
+ 	return ERR_PTR(-EOPNOTSUPP);
+ }
+ 
++static inline int fscrypt_symlink_getattr(const struct path *path,
++					  struct kstat *stat)
++{
++	return -EOPNOTSUPP;
++}
++
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index eaf5bb008aa99..d65ce093e5a7c 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4012,6 +4012,10 @@ int netdev_rx_handler_register(struct net_device *dev,
+ void netdev_rx_handler_unregister(struct net_device *dev);
+ 
+ bool dev_valid_name(const char *name);
++static inline bool is_socket_ioctl_cmd(unsigned int cmd)
++{
++	return _IOC_TYPE(cmd) == SOCK_IOC_TYPE;
++}
+ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr,
+ 		bool *need_copyout);
+ int dev_ifconf(struct net *net, struct ifconf *, int);
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index b2be4e978ba3e..2cd7b5694422d 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -593,7 +593,6 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ 		spin_lock(&hash_lock);
+ 	}
+ 	spin_unlock(&hash_lock);
+-	put_tree(victim);
+ }
+ 
+ /*
+@@ -602,6 +601,7 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ static void prune_one(struct audit_tree *victim)
+ {
+ 	prune_tree_chunks(victim, false);
++	put_tree(victim);
+ }
+ 
+ /* trim the uncommitted chunks from tree */
+diff --git a/net/socket.c b/net/socket.c
+index 0b2dad3bdf7fe..8808b3617dac9 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1109,7 +1109,7 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		rtnl_unlock();
+ 		if (!err && copy_to_user(argp, &ifc, sizeof(struct ifconf)))
+ 			err = -EFAULT;
+-	} else {
++	} else if (is_socket_ioctl_cmd(cmd)) {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+ 		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
+@@ -1118,6 +1118,8 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		if (!err && need_copyout)
+ 			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
+ 				return -EFAULT;
++	} else {
++		err = -ENOTTY;
+ 	}
+ 	return err;
+ }
+@@ -3306,6 +3308,8 @@ static int compat_ifr_data_ioctl(struct net *net, unsigned int cmd,
+ 	struct ifreq ifreq;
+ 	u32 data32;
+ 
++	if (!is_socket_ioctl_cmd(cmd))
++		return -ENOTTY;
+ 	if (copy_from_user(ifreq.ifr_name, u_ifreq32->ifr_name, IFNAMSIZ))
+ 		return -EFAULT;
+ 	if (get_user(data32, &u_ifreq32->ifr_data))



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-03 11:52 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-03 11:52 UTC (permalink / raw)
  To: gentoo-commits

commit:     617b247c0eb4261fed5c7d51924f4a7f75b7c1e8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:52:35 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:52:35 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=617b247c

Remove redundant patch

Removed: 2700_Bluetooth-usb-alt-3-for-WBS.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 --
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ----------------------------------
 2 files changed, 88 deletions(-)

diff --git a/0000_README b/0000_README
index b8850b4..dc9ab2d 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
-Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
deleted file mode 100644
index e0a67ea..0000000
--- a/2700_Bluetooth-usb-alt-3-for-WBS.patch
+++ /dev/null
@@ -1,84 +0,0 @@
-From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
-From: Pauli Virtanen <pav@iki.fi>
-Date: Mon, 26 Jul 2021 21:02:06 +0300
-Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Some USB BT adapters don't satisfy the MTU requirement mentioned in
-commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-and have ALT 3 setting that produces no/garbled audio. Some adapters
-with larger MTU were also reported to have problems with ALT 3.
-
-Add a flag and check it and MTU before selecting ALT 3, falling back to
-ALT 1. Enable the flag for Realtek, restoring the previous behavior for
-non-Realtek devices.
-
-Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
-works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
-works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
-ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
-audio) Intel 8087:0a2b.
-
-Signed-off-by: Pauli Virtanen <pav@iki.fi>
-Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-Tested-by: Michał Kępień <kernel@kempniu.pl>
-Tested-by: Jonathan Lampérth <jon@h4n.dev>
-Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
----
- drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
- 1 file changed, 14 insertions(+), 8 deletions(-)
-
-diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
-index 488f110e17e27..2336f731dbc7e 100644
---- a/drivers/bluetooth/btusb.c
-+++ b/drivers/bluetooth/btusb.c
-@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
- #define BTUSB_HW_RESET_ACTIVE	12
- #define BTUSB_TX_WAIT_VND_EVT	13
- #define BTUSB_WAKEUP_DISABLE	14
-+#define BTUSB_USE_ALT3_FOR_WBS	15
- 
- struct btusb_data {
- 	struct hci_dev       *hdev;
-@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
- 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
- 			 * many adapters do not support it.  Alt 1 appears to
- 			 * work for all adapters that do not have alt 6, and
--			 * which work with WBS at all.
-+			 * which work with WBS at all.  Some devices prefer
-+			 * alt 3 (HCI payload >= 60 Bytes let air packet
-+			 * data satisfy 60 bytes), requiring
-+			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
-+			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
- 			 */
--			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
--			/* Because mSBC frames do not need to be aligned to the
--			 * SCO packet boundary. If support the Alt 3, use the
--			 * Alt 3 for HCI payload >= 60 Bytes let air packet
--			 * data satisfy 60 bytes.
--			 */
--			if (new_alts == 1 && btusb_find_altsetting(data, 3))
-+			if (btusb_find_altsetting(data, 6))
-+				new_alts = 6;
-+			else if (btusb_find_altsetting(data, 3) &&
-+				 hdev->sco_mtu >= 72 &&
-+				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
- 				new_alts = 3;
-+			else
-+				new_alts = 1;
- 		}
- 
- 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
-@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
- 		 * (DEVICE_REMOTE_WAKEUP)
- 		 */
- 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
-+		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
- 	}
- 
- 	if (!reset)
--- 
-cgit 1.2.3-1.el7
-



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-08 12:39 Alice Ferrazzi
  0 siblings, 0 replies; 40+ messages in thread
From: Alice Ferrazzi @ 2021-09-08 12:39 UTC (permalink / raw)
  To: gentoo-commits

commit:     7b32dce1b780090fed085118f338b12b31e4ea8f
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  8 12:37:04 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep  8 12:39:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7b32dce1

Linux patch 5.14.2

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README             |   4 +
 1001_linux-5.14.2.patch | 336 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 340 insertions(+)

diff --git a/0000_README b/0000_README
index dc9ab2d..d1db2c0 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.14.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.1
 
+Patch:  1001_linux-5.14.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.14.2.patch b/1001_linux-5.14.2.patch
new file mode 100644
index 0000000..be5d674
--- /dev/null
+++ b/1001_linux-5.14.2.patch
@@ -0,0 +1,336 @@
+diff --git a/Makefile b/Makefile
+index 83d1f7c1fd304..9a2b00ecc6af4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index 3878880469d10..b843902ad9fd7 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -30,7 +30,7 @@ config XTENSA
+ 	select HAVE_DMA_CONTIGUOUS
+ 	select HAVE_EXIT_THREAD
+ 	select HAVE_FUNCTION_TRACER
+-	select HAVE_FUTEX_CMPXCHG if !MMU
++	select HAVE_FUTEX_CMPXCHG if !MMU && FUTEX
+ 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
+ 	select HAVE_IRQ_TIME_ACCOUNTING
+ 	select HAVE_PCI
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 06130dc431a04..b234958f883a4 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -377,27 +377,27 @@ static int hid_submit_ctrl(struct hid_device *hid)
+ 	len = hid_report_len(report);
+ 	if (dir == USB_DIR_OUT) {
+ 		usbhid->urbctrl->pipe = usb_sndctrlpipe(hid_to_usb_dev(hid), 0);
+-		usbhid->urbctrl->transfer_buffer_length = len;
+ 		if (raw_report) {
+ 			memcpy(usbhid->ctrlbuf, raw_report, len);
+ 			kfree(raw_report);
+ 			usbhid->ctrl[usbhid->ctrltail].raw_report = NULL;
+ 		}
+ 	} else {
+-		int maxpacket, padlen;
++		int maxpacket;
+ 
+ 		usbhid->urbctrl->pipe = usb_rcvctrlpipe(hid_to_usb_dev(hid), 0);
+ 		maxpacket = usb_maxpacket(hid_to_usb_dev(hid),
+ 					  usbhid->urbctrl->pipe, 0);
+ 		if (maxpacket > 0) {
+-			padlen = DIV_ROUND_UP(len, maxpacket);
+-			padlen *= maxpacket;
+-			if (padlen > usbhid->bufsize)
+-				padlen = usbhid->bufsize;
++			len += (len == 0);    /* Don't allow 0-length reports */
++			len = DIV_ROUND_UP(len, maxpacket);
++			len *= maxpacket;
++			if (len > usbhid->bufsize)
++				len = usbhid->bufsize;
+ 		} else
+-			padlen = 0;
+-		usbhid->urbctrl->transfer_buffer_length = padlen;
++			len = 0;
+ 	}
++	usbhid->urbctrl->transfer_buffer_length = len;
+ 	usbhid->urbctrl->dev = hid_to_usb_dev(hid);
+ 
+ 	usbhid->cr->bRequestType = USB_TYPE_CLASS | USB_RECIP_INTERFACE | dir;
+diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c
+index a45d464427c4c..0e231e576dc3d 100644
+--- a/drivers/media/usb/stkwebcam/stk-webcam.c
++++ b/drivers/media/usb/stkwebcam/stk-webcam.c
+@@ -1346,7 +1346,7 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 	if (!dev->isoc_ep) {
+ 		pr_err("Could not find isoc-in endpoint\n");
+ 		err = -ENODEV;
+-		goto error;
++		goto error_put;
+ 	}
+ 	dev->vsettings.palette = V4L2_PIX_FMT_RGB565;
+ 	dev->vsettings.mode = MODE_VGA;
+@@ -1359,10 +1359,12 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 
+ 	err = stk_register_video_device(dev);
+ 	if (err)
+-		goto error;
++		goto error_put;
+ 
+ 	return 0;
+ 
++error_put:
++	usb_put_intf(interface);
+ error:
+ 	v4l2_ctrl_handler_free(hdl);
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 3c80bfbf3bec9..d48bed5782a5c 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -1164,10 +1164,8 @@ static int cp210x_set_chars(struct usb_serial_port *port,
+ 
+ 	kfree(dmabuf);
+ 
+-	if (result < 0) {
+-		dev_err(&port->dev, "failed to set special chars: %d\n", result);
++	if (result < 0)
+ 		return result;
+-	}
+ 
+ 	return 0;
+ }
+@@ -1192,6 +1190,7 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 	struct cp210x_flow_ctl flow_ctl;
+ 	u32 flow_repl;
+ 	u32 ctl_hs;
++	bool crtscts;
+ 	int ret;
+ 
+ 	/*
+@@ -1219,8 +1218,10 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 		chars.bXoffChar = STOP_CHAR(tty);
+ 
+ 		ret = cp210x_set_chars(port, &chars);
+-		if (ret)
+-			return;
++		if (ret) {
++			dev_err(&port->dev, "failed to set special chars: %d\n",
++					ret);
++		}
+ 	}
+ 
+ 	mutex_lock(&port_priv->mutex);
+@@ -1249,14 +1250,14 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 			flow_repl |= CP210X_SERIAL_RTS_FLOW_CTL;
+ 		else
+ 			flow_repl |= CP210X_SERIAL_RTS_INACTIVE;
+-		port_priv->crtscts = true;
++		crtscts = true;
+ 	} else {
+ 		ctl_hs &= ~CP210X_SERIAL_CTS_HANDSHAKE;
+ 		if (port_priv->rts)
+ 			flow_repl |= CP210X_SERIAL_RTS_ACTIVE;
+ 		else
+ 			flow_repl |= CP210X_SERIAL_RTS_INACTIVE;
+-		port_priv->crtscts = false;
++		crtscts = false;
+ 	}
+ 
+ 	if (I_IXOFF(tty)) {
+@@ -1279,8 +1280,12 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 	flow_ctl.ulControlHandshake = cpu_to_le32(ctl_hs);
+ 	flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
+ 
+-	cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
++	ret = cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
+ 			sizeof(flow_ctl));
++	if (ret)
++		goto out_unlock;
++
++	port_priv->crtscts = crtscts;
+ out_unlock:
+ 	mutex_unlock(&port_priv->mutex);
+ }
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 930b3d50a3308..f45ca7ddf78ea 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -433,6 +433,7 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 		switch (bcdDevice) {
+ 		case 0x100:
+ 		case 0x305:
++		case 0x405:
+ 			/*
+ 			 * Assume it's an HXN-type if the device doesn't
+ 			 * support the old read request value.
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 70cb64db33f73..24e994e75f5ca 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -750,6 +750,12 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	ext4_write_lock_xattr(inode, &no_expand);
+ 	BUG_ON(!ext4_has_inline_data(inode));
+ 
++	/*
++	 * ei->i_inline_off may have changed since ext4_write_begin()
++	 * called ext4_try_to_write_inline_data()
++	 */
++	(void) ext4_find_inline_data_nolock(inode);
++
+ 	kaddr = kmap_atomic(page);
+ 	ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
+ 	kunmap_atomic(kaddr);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index dfa09a277b56f..970013c93d3ea 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5032,6 +5032,14 @@ no_journal:
+ 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ 					  GFP_KERNEL);
+ 	}
++	/*
++	 * Update the checksum after updating free space/inode
++	 * counters.  Otherwise the superblock can have an incorrect
++	 * checksum in the buffer cache until it is written out and
++	 * e2fsprogs programs trying to open a file system immediately
++	 * after it is mounted can fail.
++	 */
++	ext4_superblock_csum_set(sb);
+ 	if (!err)
+ 		err = percpu_counter_init(&sbi->s_dirs_counter,
+ 					  ext4_count_dirs(sb), GFP_KERNEL);
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 7d5883432085a..a144a3f68e9eb 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1746,7 +1746,7 @@ static int snd_pcm_lib_ioctl_fifo_size(struct snd_pcm_substream *substream,
+ 		channels = params_channels(params);
+ 		frame_size = snd_pcm_format_size(format, channels);
+ 		if (frame_size > 0)
+-			params->fifo_size /= (unsigned)frame_size;
++			params->fifo_size /= frame_size;
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 7ad689f991e7e..70516527ebce3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8438,6 +8438,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -9521,6 +9522,16 @@ static int patch_alc269(struct hda_codec *codec)
+ 
+ 	snd_hda_pick_fixup(codec, alc269_fixup_models,
+ 		       alc269_fixup_tbl, alc269_fixups);
++	/* FIXME: both TX300 and ROG Strix G17 have the same SSID, and
++	 * the quirk breaks the latter (bko#214101).
++	 * Clear the wrong entry.
++	 */
++	if (codec->fixup_id == ALC282_FIXUP_ASUS_TX300 &&
++	    codec->core.vendor_id == 0x10ec0294) {
++		codec_dbg(codec, "Clear wrong fixup for ASUS ROG Strix G17\n");
++		codec->fixup_id = HDA_FIXUP_ID_NOT_SET;
++	}
++
+ 	snd_hda_pick_pin_fixup(codec, alc269_pin_fixup_tbl, alc269_fixups, true);
+ 	snd_hda_pick_pin_fixup(codec, alc269_fallback_pin_fixup_tbl, alc269_fixups, false);
+ 	snd_hda_pick_fixup(codec, NULL,	alc269_fixup_vendor_tbl,
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index 6c0a052a28f99..5b19901f305a3 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -94,6 +94,7 @@ struct snd_usb_endpoint {
+ 	struct list_head ready_playback_urbs; /* playback URB FIFO for implicit fb */
+ 
+ 	unsigned int nurbs;		/* # urbs */
++	unsigned int nominal_queue_size; /* total buffer sizes in URBs */
+ 	unsigned long active_mask;	/* bitmask of active urbs */
+ 	unsigned long unlink_mask;	/* bitmask of unlinked urbs */
+ 	char *syncbuf;			/* sync buffer for all sync URBs */
+@@ -187,6 +188,7 @@ struct snd_usb_substream {
+ 	} dsd_dop;
+ 
+ 	bool trigger_tstamp_pending_update; /* trigger timestamp being updated from initial estimate */
++	bool early_playback_start;	/* early start needed for playback? */
+ 	struct media_ctl *media_ctl;
+ };
+ 
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 4f856771216b4..bf26c04cf4716 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -1126,6 +1126,10 @@ static int data_ep_set_params(struct snd_usb_endpoint *ep)
+ 		INIT_LIST_HEAD(&u->ready_list);
+ 	}
+ 
++	/* total buffer bytes of all URBs plus the next queue;
++	 * referred to in pcm.c
++	 */
++	ep->nominal_queue_size = maxsize * urb_packs * (ep->nurbs + 1);
+ 	return 0;
+ 
+ out_of_memory:
+@@ -1287,6 +1291,11 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+ 	 * to be set up before parameter setups
+ 	 */
+ 	iface_first = ep->cur_audiofmt->protocol == UAC_VERSION_1;
++	/* Workaround for Sony WALKMAN NW-A45 DAC;
++	 * it requires the interface setup at first like UAC1
++	 */
++	if (chip->usb_id == USB_ID(0x054c, 0x0b8c))
++		iface_first = true;
+ 	if (iface_first) {
+ 		err = endpoint_set_interface(chip, ep, true);
+ 		if (err < 0)
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 4e5031a680647..f5cbf61ac366e 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -614,6 +614,14 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
+ 	subs->period_elapsed_pending = 0;
+ 	runtime->delay = 0;
+ 
++	/* check whether early start is needed for playback stream */
++	subs->early_playback_start =
++		subs->direction == SNDRV_PCM_STREAM_PLAYBACK &&
++		subs->data_endpoint->nominal_queue_size >= subs->buffer_bytes;
++
++	if (subs->early_playback_start)
++		ret = start_endpoints(subs);
++
+  unlock:
+ 	snd_usb_unlock_shutdown(chip);
+ 	return ret;
+@@ -1394,7 +1402,7 @@ static void prepare_playback_urb(struct snd_usb_substream *subs,
+ 		subs->trigger_tstamp_pending_update = false;
+ 	}
+ 
+-	if (period_elapsed && !subs->running) {
++	if (period_elapsed && !subs->running && !subs->early_playback_start) {
+ 		subs->period_elapsed_pending = 1;
+ 		period_elapsed = 0;
+ 	}
+@@ -1448,7 +1456,8 @@ static int snd_usb_substream_playback_trigger(struct snd_pcm_substream *substrea
+ 					      prepare_playback_urb,
+ 					      retire_playback_urb,
+ 					      subs);
+-		if (cmd == SNDRV_PCM_TRIGGER_START) {
++		if (!subs->early_playback_start &&
++		    cmd == SNDRV_PCM_TRIGGER_START) {
+ 			err = start_endpoints(subs);
+ 			if (err < 0) {
+ 				snd_usb_endpoint_set_callback(subs->data_endpoint,

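The early-start logic above hinges on one comparison: data_ep_set_params()
records a nominal queue size (the payload of all data URBs plus one extra
queue's worth), and snd_usb_pcm_prepare() starts the endpoints already at
prepare time whenever that queue could swallow the entire ALSA buffer,
since waiting for the trigger would let the buffer drain into the URBs
without a period ever elapsing. A back-of-the-envelope sketch in C, with
made-up numbers (real values of maxsize, urb_packs and nurbs vary per
device and format):

	/* illustrative values only */
	unsigned int maxsize   = 192;   /* max bytes per isoc packet */
	unsigned int urb_packs = 8;     /* packets queued per URB */
	unsigned int nurbs     = 12;    /* number of data URBs */
	unsigned int nominal_queue_size =
		maxsize * urb_packs * (nurbs + 1);      /* 19968 bytes */

	unsigned int buffer_bytes = 16384;              /* ALSA buffer size */
	bool early_playback_start =
		nominal_queue_size >= buffer_bytes;     /* true: start early */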


* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-12 14:36 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-12 14:36 UTC (permalink / raw
  To: gentoo-commits

commit:     5515c5fcb54e0deb1c8681a2ee87c724495401b9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 12 14:35:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 12 14:35:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5515c5fc

Linux patch 5.14.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1002_linux-5.14.3.patch | 1007 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1011 insertions(+)

diff --git a/0000_README b/0000_README
index d1db2c0..4ad6164 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.14.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.2
 
+Patch:  1002_linux-5.14.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.14.3.patch b/1002_linux-5.14.3.patch
new file mode 100644
index 0000000..5fdb325
--- /dev/null
+++ b/1002_linux-5.14.3.patch
@@ -0,0 +1,1007 @@
+diff --git a/Makefile b/Makefile
+index 9a2b00ecc6af4..8715942fccb4a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index ebfb911082326..0a40df66a40de 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -388,10 +388,11 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 	},
+ 	{	/* Handle problems with rebooting on the OptiPlex 990. */
+ 		.callback = set_pci_reboot,
+-		.ident = "Dell OptiPlex 990",
++		.ident = "Dell OptiPlex 990 BIOS A0x",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"),
++			DMI_MATCH(DMI_BIOS_VERSION, "A0"),
+ 		},
+ 	},
+ 	{	/* Handle problems with rebooting on Dell 300's */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a552e7b483603..0255bf243ce55 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -452,6 +452,10 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Additional Realtek 8822CE Bluetooth devices */
+ 	{ USB_DEVICE(0x04ca, 0x4005), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	/* Bluetooth component of Realtek 8852AE device */
++	{ USB_DEVICE(0x04ca, 0x4006), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	{ USB_DEVICE(0x04c5, 0x161f), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0b05, 0x18ef), .driver_info = BTUSB_REALTEK |
+@@ -1895,7 +1899,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		is_fake = true;
+ 
+ 	if (is_fake) {
+-		bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds...");
++		bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds and force-suspending once...");
+ 
+ 		/* Generally these clones have big discrepancies between
+ 		 * advertised features and what's actually supported.
+@@ -1912,41 +1916,46 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ 
+ 		/*
+-		 * Special workaround for clones with a Barrot 8041a02 chip,
+-		 * these clones are really messed-up:
+-		 * 1. Their bulk rx endpoint will never report any data unless
+-		 * the device was suspended at least once (yes really).
++		 * Special workaround for these BT 4.0 chip clones, and potentially more:
++		 *
++		 * - 0x0134: a Barrot 8041a02                 (HCI rev: 0x1012 sub: 0x0810)
++		 * - 0x7558: IC markings FR3191AHAL 749H15143 (HCI rev/sub-version: 0x0709)
++		 *
++		 * These controllers are really messed-up.
++		 *
++		 * 1. Their bulk RX endpoint will never report any data unless
++		 * the device was suspended at least once (yes, really).
+ 		 * 2. They will not wakeup when autosuspended and receiving data
+-		 * on their bulk rx endpoint from e.g. a keyboard or mouse
++		 * on their bulk RX endpoint from e.g. a keyboard or mouse
+ 		 * (IOW remote-wakeup support is broken for the bulk endpoint).
+ 		 *
+ 		 * To fix 1. enable runtime-suspend, force-suspend the
+-		 * hci and then wake-it up by disabling runtime-suspend.
++		 * HCI and then wake-it up by disabling runtime-suspend.
+ 		 *
+-		 * To fix 2. clear the hci's can_wake flag, this way the hci
++		 * To fix 2. clear the HCI's can_wake flag, this way the HCI
+ 		 * will still be autosuspended when it is not open.
++		 *
++		 * --
++		 *
++		 * Because these are widespread problems we prefer generic solutions; so
++		 * apply this initialization quirk to every controller that gets here,
++		 * it should be harmless. The alternative is to not work at all.
+ 		 */
+-		if (bcdDevice == 0x8891 &&
+-		    le16_to_cpu(rp->lmp_subver) == 0x1012 &&
+-		    le16_to_cpu(rp->hci_rev) == 0x0810 &&
+-		    le16_to_cpu(rp->hci_ver) == BLUETOOTH_VER_4_0) {
+-			bt_dev_warn(hdev, "CSR: detected a fake CSR dongle using a Barrot 8041a02 chip, this chip is very buggy and may have issues");
++		pm_runtime_allow(&data->udev->dev);
+ 
+-			pm_runtime_allow(&data->udev->dev);
++		ret = pm_runtime_suspend(&data->udev->dev);
++		if (ret >= 0)
++			msleep(200);
++		else
++			bt_dev_err(hdev, "CSR: Failed to suspend the device for our Barrot 8041a02 receive-issue workaround");
+ 
+-			ret = pm_runtime_suspend(&data->udev->dev);
+-			if (ret >= 0)
+-				msleep(200);
+-			else
+-				bt_dev_err(hdev, "Failed to suspend the device for Barrot 8041a02 receive-issue workaround");
++		pm_runtime_forbid(&data->udev->dev);
+ 
+-			pm_runtime_forbid(&data->udev->dev);
++		device_set_wakeup_capable(&data->udev->dev, false);
+ 
+-			device_set_wakeup_capable(&data->udev->dev, false);
+-			/* Re-enable autosuspend if this was requested */
+-			if (enable_autosuspend)
+-				usb_enable_autosuspend(data->udev);
+-		}
++		/* Re-enable autosuspend if this was requested */
++		if (enable_autosuspend)
++			usb_enable_autosuspend(data->udev);
+ 	}
+ 
+ 	kfree_skb(skb);
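
The rewritten quirk above drops the Barrot-only gate and applies the same
recovery to every fake CSR dongle: force one runtime suspend/resume cycle
so the bulk RX endpoint starts delivering data, then drop the (broken)
remote-wakeup capability so autosuspend stays safe. Stripped to its
skeleton, and only as a sketch of the calls involved (all of them stock
driver-model/USB APIs; error handling trimmed):

	pm_runtime_allow(&udev->dev);           /* let runtime PM touch the device */
	if (pm_runtime_suspend(&udev->dev) >= 0)
		msleep(200);                    /* give the suspend time to land */
	pm_runtime_forbid(&udev->dev);          /* resumes it and blocks further RPM */
	device_set_wakeup_capable(&udev->dev, false); /* remote wakeup is broken */
	if (enable_autosuspend)
		usb_enable_autosuspend(udev);   /* restore autosuspend if requested */
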
+diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
+index 8ae89273f58e9..54e9d4d2cf5f5 100644
+--- a/drivers/cxl/acpi.c
++++ b/drivers/cxl/acpi.c
+@@ -243,6 +243,9 @@ static struct acpi_device *to_cxl_host_bridge(struct device *dev)
+ {
+ 	struct acpi_device *adev = to_acpi_device(dev);
+ 
++	if (!acpi_pci_find_root(adev->handle))
++		return NULL;
++
+ 	if (strcmp(acpi_device_hid(adev), "ACPI0016") == 0)
+ 		return adev;
+ 	return NULL;
+@@ -266,10 +269,6 @@ static int add_host_bridge_uport(struct device *match, void *arg)
+ 	if (!bridge)
+ 		return 0;
+ 
+-	pci_root = acpi_pci_find_root(bridge->handle);
+-	if (!pci_root)
+-		return -ENXIO;
+-
+ 	dport = find_dport_by_dev(root_port, match);
+ 	if (!dport) {
+ 		dev_dbg(host, "host bridge expected and not found\n");
+@@ -282,6 +281,11 @@ static int add_host_bridge_uport(struct device *match, void *arg)
+ 		return PTR_ERR(port);
+ 	dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev));
+ 
++	/*
++	 * Note that this lookup already succeeded in
++	 * to_cxl_host_bridge(), so no need to check for failure here
++	 */
++	pci_root = acpi_pci_find_root(bridge->handle);
+ 	ctx = (struct cxl_walk_context){
+ 		.dev = host,
+ 		.root = pci_root->bus,
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 4cf351a3cf992..145ad4bc305fc 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -568,7 +568,7 @@ static bool cxl_mem_raw_command_allowed(u16 opcode)
+ 	if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
+ 		return false;
+ 
+-	if (security_locked_down(LOCKDOWN_NONE))
++	if (security_locked_down(LOCKDOWN_PCI_ACCESS))
+ 		return false;
+ 
+ 	if (cxl_raw_allow_all)
+@@ -1022,8 +1022,8 @@ static int cxl_probe_regs(struct cxl_mem *cxlm, void __iomem *base,
+ 		    !dev_map->memdev.valid) {
+ 			dev_err(dev, "registers not found: %s%s%s\n",
+ 				!dev_map->status.valid ? "status " : "",
+-				!dev_map->mbox.valid ? "status " : "",
+-				!dev_map->memdev.valid ? "status " : "");
++				!dev_map->mbox.valid ? "mbox " : "",
++				!dev_map->memdev.valid ? "memdev " : "");
+ 			return -ENXIO;
+ 		}
+ 
+diff --git a/drivers/firmware/dmi-id.c b/drivers/firmware/dmi-id.c
+index 4d5421d14a410..940ddf916202a 100644
+--- a/drivers/firmware/dmi-id.c
++++ b/drivers/firmware/dmi-id.c
+@@ -73,6 +73,10 @@ static void ascii_filter(char *d, const char *s)
+ 
+ static ssize_t get_modalias(char *buffer, size_t buffer_size)
+ {
++	/*
++	 * Note new fields need to be added at the end to keep compatibility
++	 * with udev's hwdb, which matches on "`cat dmi/id/modalias`*".
++	 */
+ 	static const struct mafield {
+ 		const char *prefix;
+ 		int field;
+@@ -85,13 +89,13 @@ static ssize_t get_modalias(char *buffer, size_t buffer_size)
+ 		{ "svn", DMI_SYS_VENDOR },
+ 		{ "pn",  DMI_PRODUCT_NAME },
+ 		{ "pvr", DMI_PRODUCT_VERSION },
+-		{ "sku", DMI_PRODUCT_SKU },
+ 		{ "rvn", DMI_BOARD_VENDOR },
+ 		{ "rn",  DMI_BOARD_NAME },
+ 		{ "rvr", DMI_BOARD_VERSION },
+ 		{ "cvn", DMI_CHASSIS_VENDOR },
+ 		{ "ct",  DMI_CHASSIS_TYPE },
+ 		{ "cvr", DMI_CHASSIS_VERSION },
++		{ "sku", DMI_PRODUCT_SKU },
+ 		{ NULL,  DMI_NONE }
+ 	};
+ 
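The field reshuffle above matters because udev's hwdb matches the modalias
as a prefix pattern ("`cat dmi/id/modalias`*"), so new fields may only ever
be appended. With a made-up example string (all values illustrative), sku
now sits at the tail:

	dmi:bvnAcme:bvr1.0:bd01/01/2021:svnAcme:pnWidget:pvr1:rvnAcme:rnBoard:rvr1:cvnAcme:ct10:cvr1:skuX1:

A hwdb rule written before the sku field existed, such as
"dmi:bvnAcme:bvr*:svnAcme:pnWidget*", keeps matching because everything it
names still appears, in the same order, before the appended sku field.
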
+diff --git a/drivers/net/can/c_can/c_can_ethtool.c b/drivers/net/can/c_can/c_can_ethtool.c
+index cd5f07fca2a56..377c7d2e76120 100644
+--- a/drivers/net/can/c_can/c_can_ethtool.c
++++ b/drivers/net/can/c_can/c_can_ethtool.c
+@@ -15,10 +15,8 @@ static void c_can_get_drvinfo(struct net_device *netdev,
+ 			      struct ethtool_drvinfo *info)
+ {
+ 	struct c_can_priv *priv = netdev_priv(netdev);
+-	struct platform_device *pdev = to_platform_device(priv->device);
+-
+ 	strscpy(info->driver, "c_can", sizeof(info->driver));
+-	strscpy(info->bus_info, pdev->name, sizeof(info->bus_info));
++	strscpy(info->bus_info, dev_name(priv->device), sizeof(info->bus_info));
+ }
+ 
+ static void c_can_get_ringparam(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 4d8e337f5085a..55411c100a0e5 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -3489,6 +3489,7 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp)
+ 	rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
+ 
+ 	rtl_pcie_state_l2l3_disable(tp);
++	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+ static void rtl_hw_start_8106(struct rtl8169_private *tp)
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index ab3de1551b503..7b1c81b899cdf 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3235,12 +3235,12 @@ static void fixup_mpss_256(struct pci_dev *dev)
+ {
+ 	dev->pcie_mpss = 1; /* 256 bytes */
+ }
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index f9bdf4e331341..6acfc94a16e73 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -56,6 +56,7 @@
+ #define PCIE_BAR_ENABLE			BIT(0)
+ #define PCIE_PORT_INT_EN(x)		BIT(20 + (x))
+ #define PCIE_PORT_LINKUP		BIT(0)
++#define PCIE_PORT_CNT			3
+ 
+ #define PERST_DELAY_MS			100
+ 
+@@ -388,10 +389,11 @@ static void mt7621_pcie_reset_ep_deassert(struct mt7621_pcie *pcie)
+ 	msleep(PERST_DELAY_MS);
+ }
+ 
+-static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
++static int mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ {
+ 	struct device *dev = pcie->dev;
+ 	struct mt7621_pcie_port *port, *tmp;
++	u8 num_disabled = 0;
+ 	int err;
+ 
+ 	mt7621_pcie_reset_assert(pcie);
+@@ -423,6 +425,7 @@ static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ 				slot);
+ 			mt7621_control_assert(port);
+ 			port->enabled = false;
++			num_disabled++;
+ 
+ 			if (slot == 0) {
+ 				tmp = port;
+@@ -433,6 +436,8 @@ static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ 				phy_power_off(tmp->phy);
+ 		}
+ 	}
++
++	return (num_disabled != PCIE_PORT_CNT) ? 0 : -ENODEV;
+ }
+ 
+ static void mt7621_pcie_enable_port(struct mt7621_pcie_port *port)
+@@ -540,7 +545,11 @@ static int mt7621_pci_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
+-	mt7621_pcie_init_ports(pcie);
++	err = mt7621_pcie_init_ports(pcie);
++	if (err) {
++		dev_err(dev, "Nothing connected in virtual bridges\n");
++		return 0;
++	}
+ 
+ 	err = mt7621_pcie_enable_ports(bridge);
+ 	if (err) {
+diff --git a/drivers/usb/cdns3/cdnsp-mem.c b/drivers/usb/cdns3/cdnsp-mem.c
+index a47948a1623fd..ad9aee3f1e398 100644
+--- a/drivers/usb/cdns3/cdnsp-mem.c
++++ b/drivers/usb/cdns3/cdnsp-mem.c
+@@ -882,7 +882,7 @@ static u32 cdnsp_get_endpoint_max_burst(struct usb_gadget *g,
+ 	if (g->speed == USB_SPEED_HIGH &&
+ 	    (usb_endpoint_xfer_isoc(pep->endpoint.desc) ||
+ 	     usb_endpoint_xfer_int(pep->endpoint.desc)))
+-		return (usb_endpoint_maxp(pep->endpoint.desc) & 0x1800) >> 11;
++		return usb_endpoint_maxp_mult(pep->endpoint.desc) - 1;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index c0ca7144e5128..43f1b0d461c1e 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -1610,7 +1610,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 	u16 maxpacket, maxburst = 0, esit = 0;
+ 	u32 val;
+ 
+-	maxpacket = usb_endpoint_maxp(desc) & 0x7ff;
++	maxpacket = usb_endpoint_maxp(desc);
+ 	if (xudc->gadget.speed == USB_SPEED_SUPER) {
+ 		if (!usb_endpoint_xfer_control(desc))
+ 			maxburst = comp_desc->bMaxBurst;
+@@ -1621,7 +1621,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 		   (usb_endpoint_xfer_int(desc) ||
+ 		    usb_endpoint_xfer_isoc(desc))) {
+ 		if (xudc->gadget.speed == USB_SPEED_HIGH) {
+-			maxburst = (usb_endpoint_maxp(desc) >> 11) & 0x3;
++			maxburst = usb_endpoint_maxp_mult(desc) - 1;
+ 			if (maxburst == 0x3) {
+ 				dev_warn(xudc->dev,
+ 					 "invalid endpoint maxburst\n");
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 2c0fda57869e4..dc832ddf7033f 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -198,12 +198,13 @@ static void xhci_ring_dump_segment(struct seq_file *s,
+ 	int			i;
+ 	dma_addr_t		dma;
+ 	union xhci_trb		*trb;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	for (i = 0; i < TRBS_PER_SEGMENT; i++) {
+ 		trb = &seg->trbs[i];
+ 		dma = seg->dma + i * sizeof(*trb);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_trb(le32_to_cpu(trb->generic.field[0]),
++			   xhci_decode_trb(str, XHCI_MSG_MAX, le32_to_cpu(trb->generic.field[0]),
+ 					   le32_to_cpu(trb->generic.field[1]),
+ 					   le32_to_cpu(trb->generic.field[2]),
+ 					   le32_to_cpu(trb->generic.field[3])));
+@@ -260,11 +261,13 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_slot_ctx	*slot_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 	slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx);
+ 	seq_printf(s, "%pad: %s\n", &dev->out_ctx->dma,
+-		   xhci_decode_slot_context(le32_to_cpu(slot_ctx->dev_info),
++		   xhci_decode_slot_context(str,
++					    le32_to_cpu(slot_ctx->dev_info),
+ 					    le32_to_cpu(slot_ctx->dev_info2),
+ 					    le32_to_cpu(slot_ctx->tt_info),
+ 					    le32_to_cpu(slot_ctx->dev_state)));
+@@ -280,6 +283,7 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_ep_ctx	*ep_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 
+@@ -287,7 +291,8 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 		ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
+ 		dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
++			   xhci_decode_ep_context(str,
++						  le32_to_cpu(ep_ctx->ep_info),
+ 						  le32_to_cpu(ep_ctx->ep_info2),
+ 						  le64_to_cpu(ep_ctx->deq),
+ 						  le32_to_cpu(ep_ctx->tx_info)));
+@@ -341,9 +346,10 @@ static int xhci_portsc_show(struct seq_file *s, void *unused)
+ {
+ 	struct xhci_port	*port = s->private;
+ 	u32			portsc;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	portsc = readl(port->addr);
+-	seq_printf(s, "%s\n", xhci_decode_portsc(portsc));
++	seq_printf(s, "%s\n", xhci_decode_portsc(str, portsc));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index cffcaf4dfa9f8..0bb1a6295d64a 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -575,10 +575,12 @@ static u32 get_esit_boundary(struct mu3h_sch_ep_info *sch_ep)
+ 	u32 boundary = sch_ep->esit;
+ 
+ 	if (sch_ep->sch_tt) { /* LS/FS with TT */
+-		/* tune for CS */
+-		if (sch_ep->ep_type != ISOC_OUT_EP)
+-			boundary++;
+-		else if (boundary > 1) /* normally esit >= 8 for FS/LS */
++		/*
++		 * Tune for CS; normally esit >= 8 for FS/LS. Do not add one
++		 * for the other types, to avoid accessing the array out of
++		 * bounds.
++		 */
++		if (sch_ep->ep_type == ISOC_OUT_EP && boundary > 1)
+ 			boundary--;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index 1bc4fe7b8c756..9888ba7d85b6a 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -134,6 +134,13 @@ static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
+ 	const struct soc_device_attribute *attr;
+ 	const char *firmware_name;
+ 
++	/*
++	 * According to the datasheet, "Upon the completion of FW Download,
++	 * there is no need to write or reload FW".
++	 */
++	if (readl(regs + RCAR_USB3_DL_CTRL) & RCAR_USB3_DL_CTRL_FW_SUCCESS)
++		return 0;
++
+ 	attr = soc_device_match(rcar_quirks_match);
+ 	if (attr)
+ 		quirks = (uintptr_t)attr->data;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 8fea44bbc2665..9017986241f51 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -942,17 +942,21 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ 					 td->urb->stream_id);
+ 		hw_deq &= ~0xf;
+ 
+-		if (td->cancel_status == TD_HALTED) {
+-			cached_td = td;
+-		} else if (trb_in_td(xhci, td->start_seg, td->first_trb,
+-			      td->last_trb, hw_deq, false)) {
++		if (td->cancel_status == TD_HALTED ||
++		    trb_in_td(xhci, td->start_seg, td->first_trb, td->last_trb, hw_deq, false)) {
+ 			switch (td->cancel_status) {
+ 			case TD_CLEARED: /* TD is already no-op */
+ 			case TD_CLEARING_CACHE: /* set TR deq command already queued */
+ 				break;
+ 			case TD_DIRTY: /* TD is cached, clear it */
+ 			case TD_HALTED:
+-				/* FIXME  stream case, several stopped rings */
++				td->cancel_status = TD_CLEARING_CACHE;
++				if (cached_td)
++					/* FIXME  stream case, several stopped rings */
++					xhci_dbg(xhci,
++						 "Move dq past stream %u URB %p instead of stream %u URB %p\n",
++						 td->urb->stream_id, td->urb,
++						 cached_td->urb->stream_id, cached_td->urb);
+ 				cached_td = td;
+ 				break;
+ 			}
+@@ -961,18 +965,24 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ 			td->cancel_status = TD_CLEARED;
+ 		}
+ 	}
+-	if (cached_td) {
+-		cached_td->cancel_status = TD_CLEARING_CACHE;
+ 
+-		err = xhci_move_dequeue_past_td(xhci, slot_id, ep->ep_index,
+-						cached_td->urb->stream_id,
+-						cached_td);
+-		/* Failed to move past cached td, try just setting it noop */
+-		if (err) {
+-			td_to_noop(xhci, ring, cached_td, false);
+-			cached_td->cancel_status = TD_CLEARED;
++	/* If there's no need to move the dequeue pointer then we're done */
++	if (!cached_td)
++		return 0;
++
++	err = xhci_move_dequeue_past_td(xhci, slot_id, ep->ep_index,
++					cached_td->urb->stream_id,
++					cached_td);
++	if (err) {
++		/* Failed to move past cached td, just set cached TDs to no-op */
++		list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
++			if (td->cancel_status != TD_CLEARING_CACHE)
++				continue;
++			xhci_dbg(xhci, "Failed to clear cancelled cached URB %p, mark clear anyway\n",
++				 td->urb);
++			td_to_noop(xhci, ring, td, false);
++			td->cancel_status = TD_CLEARED;
+ 		}
+-		cached_td = NULL;
+ 	}
+ 	return 0;
+ }
+@@ -1212,6 +1222,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	struct xhci_hcd *xhci = ep->xhci;
+ 	unsigned long flags;
+ 	u32 usbsts;
++	char str[XHCI_MSG_MAX];
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 
+@@ -1225,7 +1236,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	usbsts = readl(&xhci->op_regs->status);
+ 
+ 	xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n");
+-	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(usbsts));
++	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(str, usbsts));
+ 
+ 	ep->ep_state &= ~EP_STOP_CMD_PENDING;
+ 
+diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
+index 627abd236dbe1..a5da020772977 100644
+--- a/drivers/usb/host/xhci-trace.h
++++ b/drivers/usb/host/xhci-trace.h
+@@ -25,8 +25,6 @@
+ #include "xhci.h"
+ #include "xhci-dbgcap.h"
+ 
+-#define XHCI_MSG_MAX	500
+-
+ DECLARE_EVENT_CLASS(xhci_log_msg,
+ 	TP_PROTO(struct va_format *vaf),
+ 	TP_ARGS(vaf),
+@@ -122,6 +120,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__field(u32, field1)
+ 		__field(u32, field2)
+ 		__field(u32, field3)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->type = ring->type;
+@@ -131,7 +130,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__entry->field3 = le32_to_cpu(trb->field[3]);
+ 	),
+ 	TP_printk("%s: %s", xhci_ring_type_string(__entry->type),
+-			xhci_decode_trb(__entry->field0, __entry->field1,
++		  xhci_decode_trb(__get_str(str), XHCI_MSG_MAX, __entry->field0, __entry->field1,
+ 					__entry->field2, __entry->field3)
+ 	)
+ );
+@@ -323,6 +322,7 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__field(u32, info2)
+ 		__field(u64, deq)
+ 		__field(u32, tx_info)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->ep_info);
+@@ -330,8 +330,8 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__entry->deq = le64_to_cpu(ctx->deq);
+ 		__entry->tx_info = le32_to_cpu(ctx->tx_info);
+ 	),
+-	TP_printk("%s", xhci_decode_ep_context(__entry->info,
+-		__entry->info2, __entry->deq, __entry->tx_info)
++	TP_printk("%s", xhci_decode_ep_context(__get_str(str),
++		__entry->info, __entry->info2, __entry->deq, __entry->tx_info)
+ 	)
+ );
+ 
+@@ -368,6 +368,7 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__field(u32, info2)
+ 		__field(u32, tt_info)
+ 		__field(u32, state)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->dev_info);
+@@ -375,9 +376,9 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__entry->tt_info = le64_to_cpu(ctx->tt_info);
+ 		__entry->state = le32_to_cpu(ctx->dev_state);
+ 	),
+-	TP_printk("%s", xhci_decode_slot_context(__entry->info,
+-			__entry->info2, __entry->tt_info,
+-			__entry->state)
++	TP_printk("%s", xhci_decode_slot_context(__get_str(str),
++			__entry->info, __entry->info2,
++			__entry->tt_info, __entry->state)
+ 	)
+ );
+ 
+@@ -432,12 +433,13 @@ DECLARE_EVENT_CLASS(xhci_log_ctrl_ctx,
+ 	TP_STRUCT__entry(
+ 		__field(u32, drop)
+ 		__field(u32, add)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->drop = le32_to_cpu(ctrl_ctx->drop_flags);
+ 		__entry->add = le32_to_cpu(ctrl_ctx->add_flags);
+ 	),
+-	TP_printk("%s", xhci_decode_ctrl_ctx(__entry->drop, __entry->add)
++	TP_printk("%s", xhci_decode_ctrl_ctx(__get_str(str), __entry->drop, __entry->add)
+ 	)
+ );
+ 
+@@ -523,6 +525,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 		    TP_STRUCT__entry(
+ 				     __field(u32, portnum)
+ 				     __field(u32, portsc)
++				     __dynamic_array(char, str, XHCI_MSG_MAX)
+ 				     ),
+ 		    TP_fast_assign(
+ 				   __entry->portnum = portnum;
+@@ -530,7 +533,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 				   ),
+ 		    TP_printk("port-%d: %s",
+ 			      __entry->portnum,
+-			      xhci_decode_portsc(__entry->portsc)
++			      xhci_decode_portsc(__get_str(str), __entry->portsc)
+ 			      )
+ );
+ 
+@@ -555,13 +558,14 @@ DECLARE_EVENT_CLASS(xhci_log_doorbell,
+ 	TP_STRUCT__entry(
+ 		__field(u32, slot)
+ 		__field(u32, doorbell)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->slot = slot;
+ 		__entry->doorbell = doorbell;
+ 	),
+ 	TP_printk("Ring doorbell for %s",
+-		xhci_decode_doorbell(__entry->slot, __entry->doorbell)
++		  xhci_decode_doorbell(__get_str(str), __entry->slot, __entry->doorbell)
+ 	)
+ );
+ 
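Every tracepoint hunk above follows the same recipe: reserve a per-event
string with __dynamic_array() in TP_STRUCT__entry and hand __get_str(str)
to the decoder when the event is printed, instead of formatting into one
static buffer shared by all CPUs. A minimal sketch of the pattern, where
demo_log_reg and demo_decode_reg are hypothetical names but the macros are
the stock tracepoint API:

	DECLARE_EVENT_CLASS(demo_log_reg,
		TP_PROTO(u32 val),
		TP_ARGS(val),
		TP_STRUCT__entry(
			__field(u32, val)
			__dynamic_array(char, str, XHCI_MSG_MAX) /* per-event buffer */
		),
		TP_fast_assign(
			__entry->val = val;
		),
		/* decode into this event's own buffer at read time */
		TP_printk("%s", demo_decode_reg(__get_str(str), XHCI_MSG_MAX,
						__entry->val))
	);
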
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 3c7d281672aec..dca6181c33fdb 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -22,6 +22,9 @@
+ #include	"xhci-ext-caps.h"
+ #include "pci-quirks.h"
+ 
++/* max buffer size for trace and debug messages */
++#define XHCI_MSG_MAX		500
++
+ /* xHCI PCI Configuration Registers */
+ #define XHCI_SBRN_OFFSET	(0x60)
+ 
+@@ -2235,15 +2238,14 @@ static inline char *xhci_slot_state_string(u32 state)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+-		u32 field3)
++static inline const char *xhci_decode_trb(char *str, size_t size,
++					  u32 field0, u32 field1, u32 field2, u32 field3)
+ {
+-	static char str[256];
+ 	int type = TRB_FIELD_TO_TYPE(field3);
+ 
+ 	switch (type) {
+ 	case TRB_LINK:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"LINK %08x%08x intr %d type '%s' flags %c:%c:%c:%c",
+ 			field1, field0, GET_INTR_TARGET(field2),
+ 			xhci_trb_type_string(type),
+@@ -2260,7 +2262,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_HC_EVENT:
+ 	case TRB_DEV_NOTE:
+ 	case TRB_MFINDEX_WRAP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"TRB %08x%08x status '%s' len %d slot %d ep %d type '%s' flags %c:%c",
+ 			field1, field0,
+ 			xhci_trb_comp_code_string(GET_COMP_CODE(field2)),
+@@ -2273,7 +2275,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 		break;
+ 	case TRB_SETUP:
+-		sprintf(str, "bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
++		snprintf(str, size,
++			"bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
+ 				field0 & 0xff,
+ 				(field0 & 0xff00) >> 8,
+ 				(field0 & 0xff000000) >> 24,
+@@ -2290,7 +2293,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DATA:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2303,7 +2307,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STATUS:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2316,7 +2321,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_ISOC:
+ 	case TRB_EVENT_DATA:
+ 	case TRB_TR_NOOP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c:%c",
+ 			field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 			GET_INTR_TARGET(field2),
+@@ -2333,21 +2338,21 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 	case TRB_CMD_NOOP:
+ 	case TRB_ENABLE_SLOT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: flags %c",
+ 			xhci_trb_type_string(type),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DISABLE_SLOT:
+ 	case TRB_NEG_BANDWIDTH:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_ADDR_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2356,7 +2361,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_CONFIG_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2365,7 +2370,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_EVAL_CONTEXT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2373,7 +2378,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d ep %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2394,7 +2399,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_DEQ:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: deq %08x%08x stream %d slot %d ep %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2405,14 +2410,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_EVENT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: event %08x%08x vf intr %d vf id %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2421,14 +2426,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_LT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: belt %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_BELT(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_GET_BW:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d speed %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2437,7 +2442,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_HEADER:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: info %08x%08x%08x pkt type %d roothub port %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field2, field1, field0 & 0xffffffe0,
+@@ -2446,7 +2451,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	default:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"type '%s' -> raw %08x %08x %08x %08x",
+ 			xhci_trb_type_string(type),
+ 			field0, field1, field2, field3);
+@@ -2455,10 +2460,9 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+-					       unsigned long add)
++static inline const char *xhci_decode_ctrl_ctx(char *str,
++		unsigned long drop, unsigned long add)
+ {
+-	static char	str[1024];
+ 	unsigned int	bit;
+ 	int		ret = 0;
+ 
+@@ -2484,10 +2488,9 @@ static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_slot_context(u32 info, u32 info2,
+-		u32 tt_info, u32 state)
++static inline const char *xhci_decode_slot_context(char *str,
++		u32 info, u32 info2, u32 tt_info, u32 state)
+ {
+-	static char str[1024];
+ 	u32 speed;
+ 	u32 hub;
+ 	u32 mtt;
+@@ -2571,9 +2574,8 @@ static inline const char *xhci_portsc_link_state_string(u32 portsc)
+ 	return "Unknown";
+ }
+ 
+-static inline const char *xhci_decode_portsc(u32 portsc)
++static inline const char *xhci_decode_portsc(char *str, u32 portsc)
+ {
+-	static char str[256];
+ 	int ret;
+ 
+ 	ret = sprintf(str, "%s %s %s Link:%s PortSpeed:%d ",
+@@ -2617,9 +2619,8 @@ static inline const char *xhci_decode_portsc(u32 portsc)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_usbsts(u32 usbsts)
++static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+-	static char str[256];
+ 	int ret = 0;
+ 
+ 	if (usbsts == ~(u32)0)
+@@ -2646,9 +2647,8 @@ static inline const char *xhci_decode_usbsts(u32 usbsts)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_doorbell(u32 slot, u32 doorbell)
++static inline const char *xhci_decode_doorbell(char *str, u32 slot, u32 doorbell)
+ {
+-	static char str[256];
+ 	u8 ep;
+ 	u16 stream;
+ 	int ret;
+@@ -2715,10 +2715,9 @@ static inline const char *xhci_ep_type_string(u8 type)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_ep_context(u32 info, u32 info2, u64 deq,
+-		u32 tx_info)
++static inline const char *xhci_decode_ep_context(char *str, u32 info,
++		u32 info2, u64 deq, u32 tx_info)
+ {
+-	static char str[1024];
+ 	int ret;
+ 
+ 	u32 esit;
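
The common thread in the xhci.h hunks: each decode helper used to format
into a function-local static buffer, which is not reentrant, so two CPUs
tracing at once could scribble over each other's string. The fix threads a
caller-owned buffer through every helper. In miniature (decode_reg is a
hypothetical stand-in, plain C):

	#include <stdio.h>

	/* Before: all callers share one buffer; concurrent calls collide. */
	static const char *decode_reg_bad(unsigned int v)
	{
		static char str[32];
		sprintf(str, "val %u", v);
		return str;
	}

	/* After: the caller owns the storage, and snprintf() bounds the write. */
	static const char *decode_reg(char *str, size_t size, unsigned int v)
	{
		snprintf(str, size, "val %u", v);
		return str;
	}

A caller then does: char buf[32]; decode_reg(buf, sizeof(buf), portsc);
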
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index 562f4357831ee..6403f01947b28 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -227,11 +227,13 @@ static void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed)
+ 		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		break;
+ 	case USB_SPEED_SUPER:
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		mtu3_clrbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	case USB_SPEED_SUPER_PLUS:
+-			mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
++		mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	default:
+diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
+index 5e21ba05ebf0b..a399fd84c71f2 100644
+--- a/drivers/usb/mtu3/mtu3_gadget.c
++++ b/drivers/usb/mtu3/mtu3_gadget.c
+@@ -64,14 +64,12 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 	u32 interval = 0;
+ 	u32 mult = 0;
+ 	u32 burst = 0;
+-	int max_packet;
+ 	int ret;
+ 
+ 	desc = mep->desc;
+ 	comp_desc = mep->comp_desc;
+ 	mep->type = usb_endpoint_type(desc);
+-	max_packet = usb_endpoint_maxp(desc);
+-	mep->maxp = max_packet & GENMASK(10, 0);
++	mep->maxp = usb_endpoint_maxp(desc);
+ 
+ 	switch (mtu->g.speed) {
+ 	case USB_SPEED_SUPER:
+@@ -92,7 +90,7 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 				usb_endpoint_xfer_int(desc)) {
+ 			interval = desc->bInterval;
+ 			interval = clamp_val(interval, 1, 16) - 1;
+-			burst = (max_packet & GENMASK(12, 11)) >> 11;
++			mult = usb_endpoint_maxp_mult(desc) - 1;
+ 		}
+ 		break;
+ 	default:
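
The gadget hunks here and in the cdnsp/tegra-xudc diffs above all replace
open-coded wMaxPacketSize bit-fiddling with the dedicated helpers: bits
10:0 of the descriptor field hold the packet size and bits 12:11 hold the
high-bandwidth transaction count minus one. Worked through on an example
value (0x1400 is illustrative):

	unsigned short w = 0x1400;                  /* example wMaxPacketSize */
	unsigned int maxp  = w & 0x7ff;             /* 0x400 = 1024 bytes/transaction */
	unsigned int mult  = ((w >> 11) & 0x3) + 1; /* 3 transactions per microframe */
	unsigned int burst = mult - 1;              /* 2: what these drivers program */
	/* usb_endpoint_maxp() and usb_endpoint_maxp_mult() now hide this masking */
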
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 00576bae183d3..0c321996c6eb0 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2720,6 +2720,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 		rv = 1;
+ 	} else if (im) {
+ 		if (src_addr) {
++			spin_lock_bh(&im->lock);
+ 			for (psf = im->sources; psf; psf = psf->sf_next) {
+ 				if (psf->sf_inaddr == src_addr)
+ 					break;
+@@ -2730,6 +2731,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 					im->sfcount[MCAST_EXCLUDE];
+ 			else
+ 				rv = im->sfcount[MCAST_EXCLUDE] != 0;
++			spin_unlock_bh(&im->lock);
+ 		} else
+ 			rv = 1; /* unspecified source; tentatively allow */
+ 	}
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 326d1b0ea5e69..db65f77eb131f 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1898,6 +1898,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ed, 2),	/* Kingston HyperX Cloud Alpha S */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2),	/* JBL Quantum 800 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-14 15:37 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-14 15:37 UTC (permalink / raw
  To: gentoo-commits

commit:     bac991f4736e0a8f6712313af04b8b4cd873d3b5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 14 15:37:01 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Sep 14 15:37:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bac991f4

Add BMQ Scheduler Patch 5.14-r1

BMQ (BitMap Queue) Scheduler.
A new CPU scheduler developed from PDS (included).
Inspired by the scheduler in Zircon.

Set defaults for BMQ. Add architectures as people test them; default to N.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |    7 +
 5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch | 9514 ++++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch       |   13 +
 3 files changed, 9534 insertions(+)

diff --git a/0000_README b/0000_README
index 4ad6164..f4fbe66 100644
--- a/0000_README
+++ b/0000_README
@@ -87,3 +87,10 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
+Patch:  5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
+From:   https://gitlab.com/alfredchen/linux-prjc
Desc:   BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
Desc:   Set defaults for BMQ. Add architectures as people test them; default to N.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
new file mode 100644
index 0000000..4c6f75c
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
@@ -0,0 +1,9514 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index bdb22006f713..d755d7df632f 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4947,6 +4947,12 @@
+ 
+ 	sbni=		[NET] Granch SBNI12 leased line adapter
+ 
++	sched_timeslice=
++			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
++			Format: integer 2, 4
++			Default: 4
++			See Documentation/scheduler/sched-BMQ.txt
++
+ 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
+ 
+ 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 426162009ce9..15ac2d7e47cd 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1542,3 +1542,13 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines the type of yield that calls
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Deboost and requeue task. (default)
++  2 - Set run queue skip task.
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of the earlier Priority and Deadline based Skiplist multiple queue scheduler
++(PDS), and is inspired by the Zircon scheduler. Its goal is to keep the
++scheduler code simple while staying efficient and scalable for interactive
++tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks that are put into that
++run queue.
++
++The run queue is a set of priority queues. In terms of data structures these
++queues are FIFO queues for non-rt tasks and priority queues for rt tasks; see
++BitMap Queue below for details. BMQ is optimized for non-rt tasks, reflecting
++the fact that most applications are non-rt tasks. Whether a queue is FIFO or
++priority based, each queue is an ordered list of runnable tasks awaiting
++execution, and the data structures are the same. When it is time for a new
++task to run, the scheduler simply looks for the lowest numbered queue that
++contains a task and runs the first task from the head of that queue. The
++per-CPU idle task is also in the run queue, so the scheduler can always find
++a task to run on from its run queue.
++
++Each task is assigned the same timeslice (default 4 ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. However, BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details for each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they just don't get boosted. To control the
++priority of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, two different factors
++are used to determine the effective priority of a task; the effective
++priority is what determines which queue the task will be in.
++
++The first factor is simply the task's static priority, which is assigned from
++the task's nice level: within [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority, it is
++modified by the following cases:
++
++*When a thread has used up its entire timeslice, its boost is always deboosted
++by increasing it by one.
++*When a thread gives up CPU control (voluntarily or not) to reschedule, and
++its switch-in time (time since it last switched in and ran) is below the
++threshold based on its priority boost, its boost is boosted by decreasing it
++by one, but it is capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
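
The "lowest numbered queue" lookup described above is where BMQ gets its
name: the run queue keeps one bit per priority level, and picking the next
task is a find-first-bit over that map followed by taking the head of the
matching list. A toy sketch of the idea, not the patch's actual run queue
(which the hunks below add to kernel/sched/), using stock kernel bitmap
and list helpers:

	#define NR_PRIO 64

	struct toy_rq {
		DECLARE_BITMAP(queue_bitmap, NR_PRIO); /* bit n set: queue n non-empty */
		struct list_head queue[NR_PRIO];       /* one FIFO list per priority */
	};

	static struct task_struct *toy_pick_next(struct toy_rq *rq)
	{
		/* the per-CPU idle task always sits in the last queue,
		 * so a runnable task is always found */
		unsigned long prio = find_first_bit(rq->queue_bitmap, NR_PRIO);

		return list_first_entry(&rq->queue[prio], struct task_struct,
					sq_node); /* sq_node is added by this patch */
	}
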
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index e5b5f7709d48..284b3c4b7d90 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -476,7 +476,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
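
The RLIMIT_NICE bump from {0, 0} to {30, 30} is what lets ordinary tasks
reach the negative nice levels BMQ's priority scheme leans on: the rlimit
encodes a nice ceiling of 20 - rlim_cur, so:

	int rlim_cur   = 30;
	int nice_floor = 20 - rlim_cur; /* -10: lowest nice now reachable */
	/* with the old rlimit of 0 the ceiling was 20, i.e. an unprivileged
	 * task could never improve its own priority */
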
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ec8d07d88641..b12f660404fd 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -681,12 +681,18 @@ struct task_struct {
+ 	unsigned int			ptrace;
+ 
+ #ifdef CONFIG_SMP
+-	int				on_cpu;
+ 	struct __call_single_node	wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+ 	/* Current CPU: */
+ 	unsigned int			cpu;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -700,6 +706,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -708,6 +715,20 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	int				sq_idx;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	const struct sched_class	*sched_class;
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+@@ -718,6 +739,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1417,6 +1439,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ 	return task->thread_pid;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 1aff00b65f3c..216fdf2fe90c 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -1,5 +1,24 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -19,6 +38,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..6af9ae681116 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(7)
++
++#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index e5af028c08b4..0a7565d0d3cf 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/init/Kconfig b/init/Kconfig
+index 55f9f7738ebb..9a9b244d3ca3 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -786,9 +786,39 @@ config GENERIC_SCHED_CLOCK
+ 
+ menu "Scheduler features"
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
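For reference, picking the BMQ flavour would leave a .config fragment like the one below (symbol names as defined in the block above):

	CONFIG_SCHED_ALT=y
	CONFIG_SCHED_BMQ=y
	# CONFIG_SCHED_PDS is not set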
++
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -874,6 +904,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -966,6 +997,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -988,6 +1020,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config UCLAMP_TASK_GROUP
+@@ -1231,6 +1264,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 562f2ef8d157..177b63db4ce0 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.cpus_mask	= CPU_MASK_ALL,
+@@ -87,6 +93,17 @@ struct task_struct init_task
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++	.sq_idx		= 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -94,6 +111,7 @@ struct task_struct init_task
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 5876e30c5740..7594d0a31869 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -102,7 +102,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index adb5190c4429..8c02bce63146 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -636,7 +636,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1032,7 +1032,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 51530d5b15a8..e542d71bb94b 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -139,7 +139,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 9a89e7f36acb..7fe34c56bd08 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -122,7 +122,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -143,7 +143,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
+index 3a4beb9395c4..98a709628cb3 100644
+--- a/kernel/livepatch/transition.c
++++ b/kernel/livepatch/transition.c
+@@ -307,7 +307,11 @@ static bool klp_try_switch_task(struct task_struct *task)
+ 	 */
+ 	rq = task_rq_lock(task, &flags);
+ 
++#ifdef	CONFIG_SCHED_ALT
++	if (task_running(task) && task != current) {
++#else
+ 	if (task_running(rq, task) && task != current) {
++#endif
+ 		snprintf(err_buf, STACK_ERR_BUF_SIZE,
+ 			 "%s: %s:%d is running\n", __func__, task->comm,
+ 			 task->pid);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index ad0db322ed3b..350b0e506c17 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -227,14 +227,18 @@ static __always_inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
+  * Only use with rt_mutex_waiter_{less,equal}()
+  */
+ #define task_to_waiter(p)	\
+-	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
++	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = __tsk_deadline(p) }
+ 
+ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 						struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -243,16 +247,22 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 						 struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -261,8 +271,10 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ #define __node_2_waiter(node) \
+@@ -654,7 +666,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
+ 	 * the values of the node being removed.
+ 	 */
+ 	waiter->prio = task->prio;
+-	waiter->deadline = task->dl.deadline;
++	waiter->deadline = __tsk_deadline(task);
+ 
+ 	rt_mutex_enqueue(lock, waiter);
+ 
+@@ -925,7 +937,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex *lock,
+ 	waiter->task = task;
+ 	waiter->lock = lock;
+ 	waiter->prio = task->prio;
+-	waiter->deadline = task->dl.deadline;
++	waiter->deadline = __tsk_deadline(task);
+ 
+ 	/* Get the top priority waiter on the lock */
+ 	if (rt_mutex_has_waiters(lock))
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 978fcfca5871..0425ee149b4d 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -22,14 +22,21 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
+ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
+ endif
+ 
+-obj-y += core.o loadavg.o clock.o cputime.o
+-obj-y += idle.o fair.o rt.o deadline.o
+-obj-y += wait.o wait_bit.o swait.o completion.o
+-
+-obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
++obj-y += core.o
++obj-y += fair.o rt.o deadline.o
++obj-$(CONFIG_SMP) += cpudeadline.o stop_task.o
+ obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
+-obj-$(CONFIG_SCHEDSTATS) += stats.o
++endif
+ obj-$(CONFIG_SCHED_DEBUG) += debug.o
++obj-y += loadavg.o clock.o cputime.o
++obj-y += idle.o
++obj-y += wait.o wait_bit.o swait.o completion.o
++obj-$(CONFIG_SMP) += cpupri.o pelt.o topology.o
++obj-$(CONFIG_SCHEDSTATS) += stats.o
+ obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
+ obj-$(CONFIG_CPU_FREQ) += cpufreq.o
+ obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..900889c838ea
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7248 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include <linux/sched/rt.h>
++
++#include <linux/context_tracking.h>
++#include <linux/compat.h>
++#include <linux/blkdev.h>
++#include <linux/delayacct.h>
++#include <linux/freezer.h>
++#include <linux/init_task.h>
++#include <linux/kprobes.h>
++#include <linux/mmu_context.h>
++#include <linux/nmi.h>
++#include <linux/profile.h>
++#include <linux/rcupdate_wait.h>
++#include <linux/security.h>
++#include <linux/syscalls.h>
++#include <linux/wait_bit.h>
++
++#include <linux/kcov.h>
++#include <linux/scs.h>
++
++#include <asm/switch_to.h>
++
++#include "../workqueue_internal.h"
++#include "../../fs/io-wq.h"
++#include "../smpboot.h"
++
++#include "pelt.h"
++#include "smp.h"
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v5.14-r1"
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/* Default time slice is ~4ms (4 << 20 ns); it can be set via the kernel parameter "sched_timeslice" */
++u64 sched_timeslice_ns __read_mostly = (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++static int __init sched_timeslice(char *str)
++{
++	int timeslice_ms;
++
++	get_option(&str, &timeslice_ms);
++	if (2 != timeslice_ms)
++		timeslice_ms = 4;
++	sched_timeslice_ns = timeslice_ms << 20;
++	sched_timeslice_imp(timeslice_ms);
++
++	return 0;
++}
++early_param("sched_timeslice", sched_timeslice);
++
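As the parser above shows, 2 ms is the only accepted override; anything else falls back to the 4 ms default. Usage is a single kernel command-line token at boot:

	sched_timeslice=2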
++/* Reschedule if less than this much time is left in the slice: (100 << 10) ns, roughly 100 μs */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * 0: No yield.
++ * 1: Deboost and requeue task. (default)
++ * 2: Set rq skip task.
++ */
++int sched_yield_type __read_mostly = 1;
++
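The Project C patch set conventionally exposes this knob through a sysctl; assuming that wiring exists (it is not shown in this excerpt, so both the name and the path are assumptions), a runtime switch would look like:

	# hypothetical sysctl name, not confirmed by this hunk
	sysctl -w kernel.yield_type=2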
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_rq_watermark[SCHED_BITS] ____cacheline_aligned_in_smp;
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_BITS);
++	for(i = 0; i < SCHED_BITS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++/* water mark related functions */
++static inline void update_sched_rq_watermark(struct rq *rq)
++{
++	unsigned long watermark = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	unsigned long last_wm = rq->watermark;
++	unsigned long i;
++	int cpu;
++
++	if (watermark == last_wm)
++		return;
++
++	rq->watermark = watermark;
++	cpu = cpu_of(rq);
++	if (watermark < last_wm) {
++		for (i = last_wm; i > watermark; i--)
++			cpumask_clear_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    IDLE_TASK_SCHED_PRIO == last_wm)
++			cpumask_andnot(&sched_sg_idle_mask,
++				       &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		return;
++	}
++	/* last_wm < watermark */
++	for (i = watermark; i > last_wm; i--)
++		cpumask_set_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
++#ifdef CONFIG_SCHED_SMT
++	if (static_branch_likely(&sched_smt_present) &&
++	    IDLE_TASK_SCHED_PRIO == watermark) {
++		cpumask_t tmp;
++
++		cpumask_and(&tmp, cpu_smt_mask(cpu), sched_rq_watermark);
++		if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
++			cpumask_or(&sched_sg_idle_mask,
++				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
++	}
++#endif
++}
++
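The indexing above is easy to misread: bit 0 of the queue bitmap is the highest occupied priority, so a larger watermark means a less busy CPU, and level i is recorded in sched_rq_watermark[SCHED_BITS - 1 - i]. Concretely:

	watermark == IDLE_TASK_SCHED_PRIO  ->  sched_rq_watermark[0]               (fully idle CPUs)
	watermark == w                     ->  sched_rq_watermark[SCHED_BITS-1-w]  (lower w = busier)

This is why select_task_rq() further down can test sched_rq_watermark (element 0) to find idle CPUs first.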
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	unsigned long idx = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	const struct list_head *head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	unsigned long idx = p->sq_idx;
++	struct list_head *head = &rq->queue.heads[idx];
++
++	if (list_is_last(&p->sq_node, head)) {
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (unlikely(next == rq->skip))
++		next = sched_rq_next_task(next, rq);
++
++	return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task()/
++ *    cpu_cgroup_fork():	p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++			  unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq &&
++				   rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++			      unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	update_rq_time_edge(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If the tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)		\
++	psi_dequeue(p, flags & DEQUEUE_SLEEP);			\
++	sched_info_dequeue(rq, p);				\
++								\
++	list_del(&p->sq_node);					\
++	if (list_empty(&rq->queue.heads[p->sq_idx])) {		\
++		clear_bit(sched_idx2prio(p->sq_idx, rq),	\
++			  rq->queue.bitmap);			\
++		func;						\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
++	sched_info_enqueue(rq, p);					\
++	psi_enqueue(p, flags);						\
++									\
++	p->sq_idx = task_sched_prio_idx(p, rq);				\
++	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
++	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task residing on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_rq_watermark(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task residing on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags);
++	update_sched_rq_watermark(rq);
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
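Note the bookkeeping in the two helpers above: a CPU joins sched_rq_pending_mask when its runqueue reaches two runnable tasks and leaves it when it drops back to one, so the mask tracks runqueues that have work available for another CPU to pull. An illustrative predicate (rq_has_pending() is hypothetical, not part of the patch):

	/* Hypothetical sketch: true iff this rq holds at least two runnable tasks. */
	static inline bool rq_has_pending(struct rq *rq)
	{
		return cpumask_test_cpu(cpu_of(rq), &sched_rq_pending_mask);
	}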
++static inline void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	int idx;
++
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task residing on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++
++	idx = task_sched_prio_idx(p, rq);
++
++	list_del(&p->sq_node);
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++	if (idx != p->sq_idx) {
++		if (list_empty(&rq->queue.heads[p->sq_idx]))
++			clear_bit(sched_idx2prio(p->sq_idx, rq),
++				  rq->queue.bitmap);
++		p->sq_idx = idx;
++		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		update_sched_rq_watermark(rq);
++	}
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _old, _val = *_ptr;			\
++									\
++		for (;;) {						\
++			_old = cmpxchg(_ptr, _val, _val | _mask);	\
++			if (_old == _val)				\
++				break;					\
++			_val = _old;					\
++		}							\
++	_old;								\
++})
++
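A short sketch of the fetch_or() contract, for illustration only: the OR happens atomically and the macro evaluates to the value the word held before the update, which is what lets set_nr_and_not_polling() below set TIF_NEED_RESCHED and test TIF_POLLING_NRFLAG in one step:

	unsigned long flags = 0, old;

	old = fetch_or(&flags, _TIF_NEED_RESCHED);	/* old == 0, flag now set */
	old = fetch_or(&flags, _TIF_NEED_RESCHED);	/* old already contains the flag */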
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) old, val = READ_ONCE(ti->flags);
++
++	for (;;) {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++		old = cmpxchg(&ti->flags, val, val | _TIF_NEED_RESCHED);
++		if (old == val)
++			break;
++		val = old;
++	}
++	return true;
++}
++
++#else
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, housekeeping_cpumask(HK_FLAG_TIMER))
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void check_preempt_curr(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/*
++	 * The alternative scheduler framework doesn't support sched_feat() yet:
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
++		static_prio + MAX_PRIORITY_ADJ;
++}
++
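Two worked examples for __normal_prio(), assuming the BMQ flavour (MAX_PRIORITY_ADJ == 7) and the stock MAX_RT_PRIO of 100:

	__normal_prio(SCHED_NORMAL,  0, 120) == 120 + 7      == 127	/* nice-0 task */
	__normal_prio(SCHED_FIFO,   50, 120) == 100 - 1 - 50 == 49	/* static_prio ignored for RT */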
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++	p->on_rq = 0;
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++#ifdef CONFIG_THREAD_INFO_IN_TASK
++	WRITE_ONCE(p->cpu, cpu);
++#else
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	if (task_cpu(p) == new_cpu)
++		return;
++	trace_sched_migrate_task(p, new_cpu);
++	rseq_migrate(p);
++	perf_event_task_migrate(p);
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	preempt_disable();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (0 == p->migration_disabled)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	preempt_disable();
++	/*
++	 * Assumption: current should be running on an allowed CPU
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
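Usage sketch for the pair above (illustrative only): calls nest via the per-task counter, and while the count is non-zero the task may still be preempted but will not be migrated off its current CPU:

	migrate_disable();
	/* ... touch per-CPU state: preemption stays possible,
	 * but this task is pinned to the current CPU ... */
	migrate_enable();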
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
++				   new_cpu)
++{
++	lockdep_assert_held(&rq->lock);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	BUG_ON(task_cpu(p) != new_cpu);
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++	check_preempt_curr(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
++				 dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	update_rq_clock(rq);
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_from_idle();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p))
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
++{
++	cpumask_copy(&p->cpus_mask, new_mask);
++	p->nr_cpus_allowed = cpumask_weight(new_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, new_mask);
++}
++
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	__do_set_cpus_allowed(p, new_mask);
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * If @match_state is nonzero, it's the @p->state value just checked and
++ * not expected to change.  If it changes, i.e. @p might have woken up,
++ * then return zero.  When we succeed in waiting for @p to be off its CPU,
++ * we return a positive number (its total switch count).  If a second call
++ * a short while later returns the same number, the caller can be sure that
++ * @p has remained unscheduled the whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	bool running, on_rq;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_running(p) && p == rq->curr) {
++			if (match_state && unlikely(READ_ONCE(p->__state) != match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_running(p);
++		on_rq = p->on_rq;
++		ncsw = 0;
++		if (!match_state || READ_ONCE(p->__state) == match_state)
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(on_rq)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	int cpu;
++
++	preempt_disable();
++	cpu = task_cpu(p);
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer placing new tasks on the to-be-removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs, and can
++ * assume that any active CPU must be online. Conversely,
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (!cpu_active(dest_cpu))
++				continue;
++			if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (IS_ENABLED(CONFIG_CPUSETS)) {
++				cpuset_cpus_allowed_fallback(p);
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, cpu_possible_mask);
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t chk_mask, tmp;
++
++	if (unlikely(!cpumask_and(&chk_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&tmp, &chk_mask, sched_rq_watermark) ||
++	    cpumask_and(&tmp, &chk_mask,
++			sched_rq_watermark + SCHED_BITS - task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &tmp);
++
++	return best_mask_cpu(task_cpu(p), &chk_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task, it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  const struct cpumask *new_mask,
++				  u32 flags)
++{
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	int dest_cpu;
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, new_mask);
++
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (cpumask_test_cpu(task_cpu(p), new_mask))
++		goto out;
++
++	if (p->migration_disabled) {
++		if (likely(p->cpus_ptr != &p->cpus_mask))
++			__do_set_cpus_ptr(p, &p->cpus_mask);
++		p->migration_disabled = 0;
++		p->migration_flags |= MDF_FORCE_ENABLED;
++		/* When p is migrate_disabled, rq->lock should be held */
++		rq->nr_pinned--;
++	}
++
++	if (task_running(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++		struct migration_arg arg = { p, dest_cpu };
++
++		/* Need help from migration thread: drop lock and wait. */
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++		return 0;
++	}
++	if (task_on_rq_queued(p)) {
++		/*
++		 * OK, since we're going to drop the lock immediately
++		 * afterwards anyway.
++		 */
++		update_rq_clock(rq);
++		rq = move_queued_task(rq, p, dest_cpu);
++		lock = &rq->lock;
++	}
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	return __set_cpus_allowed_ptr(p, new_mask, 0);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
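++
++/*
++ * Illustrative usage sketch (assumed caller; "fn", "data" and the thread
++ * name are hypothetical): pin a freshly created kthread to one CPU before
++ * its first wakeup:
++ *
++ *	struct task_struct *tsk = kthread_create(fn, data, "worker/%d", cpu);
++ *
++ *	if (!IS_ERR(tsk)) {
++ *		set_cpus_allowed_ptr(tsk, cpumask_of(cpu));
++ *		wake_up_process(tsk);
++ *	}
++ */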
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       const struct cpumask *new_mask,
++		       u32 flags)
++{
++	return set_cpus_allowed_ptr(p, new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu)
++		__schedstat_inc(rq->ttwu_local);
++	else {
++		/* Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++}
++
++/*
++ * Mark the task runnable and perform wakeup-preemption.
++ */
++static inline void
++ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	check_preempt_curr(rq);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	ttwu_do_wakeup(rq, p, 0);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * and the wakeup occurring between set_current_state() and schedule(). In
++ * this case @p is still runnable, so all that needs doing is to change
++ * p->state back to TASK_RUNNING in an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		/* check_preempt_curr() may use rq clock */
++		update_rq_clock(rq);
++		ttwu_do_wakeup(rq, p, wake_flags);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	/*
++	 * rq::ttwu_pending is a racy indication of outstanding wakeups.
++	 * Races are such that false-negatives are possible, since they
++	 * are shorter-lived than false-positives would be.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++void send_call_function_single_ipi(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (!set_nr_if_polling(rq->idle))
++		arch_send_call_function_single_ipi(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(int cpu, int wake_flags)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	/*
++	 * If the task is descheduling and the only running task on the
++	 * CPU then use the wakelist to offload the task activation to
++	 * the soon-to-be-idle CPU as the current CPU is likely busy.
++	 * nr_running is checked to avoid unnecessary task stacking.
++	 */
++	if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
++		if (WARN_ON_ONCE(cpu == smp_processor_id()))
++			return false;
++
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	rcu_read_lock();
++
++	if (!is_idle_task(rcu_dereference(rq->curr)))
++		goto out;
++
++	if (set_nr_if_polling(rq->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++	} else {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		if (is_idle_task(rq->curr))
++			smp_send_reschedule(cpu);
++		/* Else CPU is not idle, do nothing here */
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++
++out:
++	rcu_read_unlock();
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++static int try_to_wake_up(struct task_struct *p, unsigned int state,
++			  int wake_flags)
++{
++	unsigned long flags;
++	int cpu, success = 0;
++
++	preempt_disable();
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!(READ_ONCE(p->__state) & state))
++			goto out;
++
++		success = 1;
++		trace_sched_waking(p);
++		WRITE_ONCE(p->__state, TASK_RUNNING);
++		trace_sched_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	smp_mb__after_spinlock();
++	if (!(READ_ONCE(p->__state) & state))
++		goto unlock;
++
++	trace_sched_waking(p);
++
++	/* We're going to change ->state: */
++	success = 1;
++
++	/*
++	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++	 * in smp_cond_load_acquire() below.
++	 *
++	 * sched_ttwu_pending()			try_to_wake_up()
++	 *   STORE p->on_rq = 1			  LOAD p->state
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   UNLOCK rq->lock
++	 *
++	 * [task p]
++	 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * A similar smp_rmb() lives in try_invoke_on_locked_down_task().
++	 */
++	smp_rmb();
++	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++		goto unlock;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++	 * possible to, falsely, observe p->on_cpu == 0.
++	 *
++	 * One must be running (->on_cpu == 1) in order to remove oneself
++	 * from the runqueue.
++	 *
++	 * __schedule() (switch to task 'p')	try_to_wake_up()
++	 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (put 'p' to sleep)
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++	 * schedule()'s deactivate_task() has 'happened' and p will no longer
++	 * care about its own p->state. See the comment in __schedule().
++	 */
++	smp_acquire__after_ctrl_dep();
++
++	/*
++	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++	 * == 0), which means we need to do an enqueue, change p->state to
++	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++	 * enqueue, such as in ttwu_queue_wakelist().
++	 */
++	WRITE_ONCE(p->__state, TASK_WAKING);
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, consider queueing p on the remote CPU's wake_list,
++	 * which potentially sends an IPI instead of spinning on p->on_cpu to
++	 * let the waker make forward progress. This is safe because IRQs are
++	 * disabled and the IPI will deliver after on_cpu is cleared.
++	 *
++	 * Ensure we load task_cpu(p) after p->on_cpu:
++	 *
++	 * set_task_cpu(p, cpu);
++	 *   STORE p->cpu = @cpu
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock
++	 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++	 *   STORE p->on_cpu = 1                LOAD p->cpu
++	 *
++	 * to ensure we observe the correct CPU on which the task is currently
++	 * scheduling.
++	 */
++	if (smp_load_acquire(&p->on_cpu) &&
++	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
++		goto unlock;
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, wait until it's done referencing the task.
++	 *
++	 * Pairs with the smp_store_release() in finish_task().
++	 *
++	 * This ensures that tasks getting woken will be fully ordered against
++	 * their previous state and preserve Program Order.
++	 */
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++	sched_task_ttwu(p);
++
++	cpu = select_task_rq(p);
++
++	if (cpu != task_cpu(p)) {
++		if (p->in_iowait) {
++			delayacct_blkio_end(p);
++			atomic_dec(&task_rq(p)->nr_iowait);
++		}
++
++		wake_flags |= WF_MIGRATED;
++		psi_ttwu_dequeue(p);
++		set_task_cpu(p, cpu);
++	}
++#else
++	cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++	ttwu_queue(p, cpu, wake_flags);
++unlock:
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++	preempt_enable();
++
++	return success;
++}
++
++/**
++ * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * If the specified task can be quickly locked into a definite state
++ * (either sleeping or on a given runqueue), arrange to keep it in that
++ * state while invoking @func(@arg).  This function can use ->on_rq and
++ * task_curr() to work out what the state is, if required.  Given that
++ * @func can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ *	@false if the task slipped out from under the locks.
++ *	@true if the task was locked onto a runqueue or is sleeping.
++ *		However, @func can override this by returning @false.
++ */
++bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
++{
++	struct rq_flags rf;
++	bool ret = false;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++	if (p->on_rq) {
++		rq = __task_rq_lock(p, &rf);
++		if (task_rq(p) == rq)
++			ret = func(p, arg);
++		__task_rq_unlock(rq, &rf);
++	} else {
++		switch (READ_ONCE(p->__state)) {
++		case TASK_RUNNING:
++		case TASK_WAKING:
++			break;
++		default:
++			smp_rmb(); // See smp_rmb() comment in try_to_wake_up().
++			if (!p->on_rq)
++				ret = func(p, arg);
++		}
++	}
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
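++
++/*
++ * Illustrative sketch (hypothetical callback name, an assumption for
++ * documentation): the callback runs with the task locked down, possibly
++ * under a runqueue lock, so it must stay lightweight:
++ *
++ *	static bool task_is_off_cpu(struct task_struct *t, void *arg)
++ *	{
++ *		return !task_curr(t);
++ *	}
++ *
++ *	locked = try_invoke_on_locked_down_task(p, task_is_off_cpu, NULL);
++ */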
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
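++
++/*
++ * Illustrative pairing sketch (the generic sleep/wake idiom these helpers
++ * assume): the waker publishes the condition before waking, while the
++ * sleeper uses the wait loop shown in the comment above ttwu_runnable():
++ *
++ *	CONDITION = 1;
++ *	wake_up_process(sleeper);	// full barrier before ->state is read
++ */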
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++	/*
++	 * The child is not yet in the pid-hash so no cgroup attach races,
++	 * and the cgroup is pinned to this child because cgroup_fork()
++	 * is run before sched_fork().
++	 *
++	 * Silence PROVE_RCU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sched_timeslice_ns;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
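++
++/*
++ * Worked example of the timeslice split above (numbers are illustrative):
++ * a parent with 4ms of slice left continues with 2ms and the child gets
++ * the other 2ms. If the halved slice falls below RESCHED_NS, the child
++ * instead starts with a full sched_timeslice_ns and the parent is marked
++ * for rescheduling via resched_curr().
++ */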
++
++void sched_post_fork(struct task_struct *p) {}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
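++
++/*
++ * Example: booting with "schedstats=enable" on the kernel command line
++ * turns the static branch on early; "schedstats=disable" keeps it off.
++ */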
++
++#ifdef CONFIG_PROC_SYSCTL
++int sysctl_schedstats(struct ctl_table *table, int write,
++			 void __user *buffer, size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++#endif /* CONFIG_PROC_SYSCTL */
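++
++/*
++ * Example (standard procfs sysctl interface): the same knob can be
++ * toggled at runtime with
++ *
++ *	echo 1 > /proc/sys/kernel/sched_schedstats
++ */
++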
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	check_preempt_curr(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
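++
++/*
++ * Illustrative registration sketch (assumed KVM-style user; the ops and
++ * ctx names are hypothetical). The callbacks fire when current is
++ * scheduled in and out:
++ *
++ *	static struct preempt_ops my_preempt_ops = {
++ *		.sched_in  = my_sched_in,
++ *		.sched_out = my_sched_out,
++ *	};
++ *
++ *	preempt_notifier_inc();
++ *	preempt_notifier_init(&ctx->nb, &my_preempt_ops);
++ *	preempt_notifier_register(&ctx->nb);
++ */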
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the ttwu() WF_ON_CPU case and its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++	void (*func)(struct rq *rq);
++	struct callback_head *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++struct callback_head balance_push_callback = {
++	.next = NULL,
++	.func = (void (*)(struct callback_head *))balance_push,
++};
++
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++	struct callback_head *head = rq->balance_callback;
++
++	if (head) {
++		lockdep_assert_held(&rq->lock);
++		rq->balance_callback = NULL;
++	}
++
++	return head;
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, splice_balance_callbacks(rq));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
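++
++/*
++ * Illustrative queueing sketch (an assumption, mirroring the mainline
++ * queue_balance_callback() pattern; "my_balance_fn" is hypothetical):
++ * a callback is linked onto rq->balance_callback with rq->lock held and
++ * later run by __balance_callbacks() or balance_callbacks() above:
++ *
++ *	lockdep_assert_held(&rq->lock);
++ *	head->func = (void (*)(struct callback_head *))my_balance_fn;
++ *	head->next = rq->balance_callback;
++ *	rq->balance_callback = head;
++ */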
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	long prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/*
++		 * Remove function-return probe instances associated with this
++		 * task and put them back on the free list.
++		 */
++		kprobe_flush_task(prev);
++
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
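++
++/*
++ * Illustrative polling-loop sketch (assumed caller, halt-polling style;
++ * condition_met() is hypothetical): give up the poll as soon as anything
++ * else becomes runnable on this CPU:
++ *
++ *	while (!condition_met()) {
++ *		if (!single_task_running())
++ *			break;		// someone else wants this CPU
++ *		cpu_relax();
++ *	}
++ */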
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait, even though the task might not even
++ * end up running on that CPU when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU; only that
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU; it can wake up on a different CPU
++ * than the one it blocked on. This means the per CPU IO-wait number is
++ * meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++	struct task_struct *p = current;
++	unsigned long flags;
++	int dest_cpu;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	dest_cpu = cpumask_any(p->cpus_ptr);
++	if (dest_cpu == smp_processor_id())
++		goto unlock;
++
++	if (likely(cpu_active(dest_cpu))) {
++		struct migration_arg arg = { p, dest_cpu };
++
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
++		return;
++	}
++unlock:
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * In case the task is currently running, the returned value also includes
++ * its pending runtime that has not been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
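++
++/*
++ * Example consumer (assumed, as in mainline posix-cpu-timers): this is
++ * what backs clock_gettime(CLOCK_THREAD_CPUTIME_ID) for a remote thread:
++ *
++ *	u64 ns = task_sched_runtime(p);
++ */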
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if (kstrtol(str, 0, &val)) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
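++
++/*
++ * Example: booting with "resched_latency_warn_ms=100" raises the warning
++ * threshold to 100ms; the warning itself additionally requires the
++ * LATENCY_WARN scheduler feature (see scheduler_tick() below).
++ */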
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	u64 resched_latency;
++
++	arch_scale_freq_tick();
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	rq->last_tick = rq->clock;
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int active_load_balance_cpu_stop(void *data)
++{
++	struct rq *rq = this_rq();
++	struct task_struct *p = data;
++	cpumask_t tmp;
++	unsigned long flags;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	rq->active_balance = 0;
++	/* _something_ may have changed the task, double check again */
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++	    !is_migration_disabled(p)) {
++		int cpu = cpu_of(rq);
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock(&p->pi_lock);
++
++	local_irq_restore(flags);
++
++	return 0;
++}
++
++/* sg_balance_trigger - trigger sibling group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	struct task_struct *curr;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++	curr = rq->curr;
++	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
++	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
++	      !is_migration_disabled(curr) && (!rq->active_balance);
++
++	if (res)
++		rq->active_balance = 1;
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res)
++		stop_one_cpu_nowait(cpu, active_load_balance_cpu_stop,
++				    curr, &rq->active_balance_work);
++	return res;
++}
++
++/*
++ * sg_balance_check - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance_check(struct rq *rq)
++{
++	cpumask_t chk;
++	int cpu = cpu_of(rq);
++
++	/* exit when cpu is offline */
++	if (unlikely(!rq->online))
++		return;
++
++	/*
++	 * Only a cpu in the sibling idle group will do the checking, and
++	 * then find potential cpus to which the currently running task
++	 * can be migrated.
++	 */
++	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++	    cpumask_andnot(&chk, cpu_online_mask, sched_rq_watermark) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &chk) &&
++			    sg_balance_trigger(i))
++				return;
++		}
++	}
++}
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr;
++	unsigned long flags;
++	u64 delta;
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (!tick_nohz_tick_stopped_cpu(cpu))
++		goto out_requeue;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	curr = rq->curr;
++	if (cpu_is_offline(cpu))
++		goto out_unlock;
++
++	update_rq_clock(rq);
++	if (!is_idle_task(curr)) {
++		/*
++		 * Make sure the next tick runs within a reasonable
++		 * amount of time.
++		 */
++		delta = rq_clock_task(rq) - curr->last_ran;
++		WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++	}
++	scheduler_task_tick(rq);
++
++	calc_load_nohz_remote(rq);
++out_unlock:
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out_requeue:
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	cancel_delayed_work_sync(&twork->work);
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
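++
++/*
++ * Illustrative example (not part of this patch): the remote tick
++ * machinery above only matters for CPUs isolated from the tick. E.g.
++ * booting with
++ *
++ *	nohz_full=2-5
++ *
++ * removes CPUs 2-5 from the HK_FLAG_TICK housekeeping set, so
++ * sched_tick_start() queues the 1Hz remote tick work for them on
++ * system_unbound_wq, while the housekeeping CPUs return early.
++ */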
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
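++
++/*
++ * A minimal sketch of how the hooks above are reached (assuming
++ * CONFIG_DEBUG_PREEMPT or CONFIG_PREEMPT_TRACER is enabled): a plain
++ *
++ *	preempt_disable();
++ *	... critical section ...
++ *	preempt_enable();
++ *
++ * pair routes through preempt_count_add(1)/preempt_count_sub(1), so
++ * preempt_latency_start()/preempt_latency_stop() time exactly the
++ * span during which preemption was disabled.
++ */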
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)
++	    && in_atomic_preempt_off()) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	if (panic_on_warn)
++		panic("scheduling while atomic\n");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_rq_watermark[0].bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#define SCHED_RQ_NR_MIGRATION (32U)
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ * Will try to migrate the lesser of half of @rq's nr_running tasks and
++ * SCHED_RQ_NR_MIGRATION to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, SCHED_RQ_NR_MIGRATION);
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
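++
++/*
++ * Worked example: with 10 tasks queued on @rq, nr_tries is
++ * min(10 / 2, 32) = 5, so at most 5 candidates are examined and only
++ * those allowed to run on @dest_cpu are actually moved.
++ */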
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	struct cpumask *topo_mask, *end_mask;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_rq_watermark(rq);
++				cpufreq_update_util(rq, 0);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu, struct task_struct *prev)
++{
++	struct task_struct *next;
++
++	if (unlikely(rq->skip)) {
++		next = rq_runnable_task(rq);
++		if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++			if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++				rq->skip = NULL;
++				schedstat_inc(rq->sched_goidle);
++				return next;
++#ifdef	CONFIG_SMP
++			}
++			next = rq_runnable_task(rq);
++#endif
++		}
++		rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++		hrtick_start(rq, next->time_slice);
++#endif
++		return next;
++	}
++
++	next = sched_rq_first_task(rq);
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler scheduler_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(bool preempt)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, preempt);
++
++	/* bypass sched_feat(HRTICK) checking, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(preempt);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that:
++	 *
++	 *  - we form a control dependency vs deactivate_task() below.
++	 *  - ptrace_{,un}freeze_traced() can change ->state underneath us.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!preempt && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev->flags & PF_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu, prev);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++		rq->last_ts_switch = rq->clock;
++
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++		 *   switch_mm() relies on membarrier_arch_switch_mm() on PowerPC.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 */
++		++*switch_count;
++
++		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
++
++		trace_sched_switch(preempt, prev, next);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	sg_balance_check(rq);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(false);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	unsigned int task_flags;
++
++	if (task_is_running(tsk))
++		return;
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker went to sleep, notify and ask workqueue whether
++	 * it wants to wake up a task to maintain concurrency.
++	 * As this function is called inside the schedule() context,
++	 * we disable preemption to avoid it calling schedule() again
++	 * in the possible wakeup of a kworker and because wq_worker_sleeping()
++	 * requires it.
++	 */
++	if (task_flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		preempt_disable();
++		if (task_flags & PF_WQ_WORKER)
++			wq_worker_sleeping(tsk);
++		else
++			io_wq_worker_sleeping(tsk);
++		preempt_enable_no_resched();
++	}
++
++	if (tsk_is_pi_blocked(tsk))
++		return;
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	if (blk_needs_flush_plug(tsk))
++		blk_schedule_flush_plug(tsk);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else
++			io_wq_worker_running(tsk);
++	}
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++	sched_submit_work(tsk);
++	do {
++		preempt_disable();
++		__schedule(false);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
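++
++/*
++ * Illustrative caller pattern (not from this patch; "condition" is a
++ * placeholder): the canonical wait loop behind entry point #1 in the
++ * comment above. A task parks itself and relies on a waker to set it
++ * back to TASK_RUNNING:
++ *
++ *	for (;;) {
++ *		set_current_state(TASK_INTERRUPTIBLE);
++ *		if (condition)
++ *			break;
++ *		schedule();
++ *	}
++ *	__set_current_state(TASK_RUNNING);
++ */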
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(false);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(true);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#endif
++
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(true);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#include <linux/entry-common.h>
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_none = 0,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_full;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++void sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	static_call_update(cond_resched, __cond_resched);
++	static_call_update(might_resched, __cond_resched);
++	static_call_update(preempt_schedule, __preempt_schedule_func);
++	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		static_call_update(cond_resched, __cond_resched);
++		static_call_update(might_resched, (void *)&__static_call_return0);
++		static_call_update(preempt_schedule, NULL);
++		static_call_update(preempt_schedule_notrace, NULL);
++		static_call_update(irqentry_exit_cond_resched, NULL);
++		pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		static_call_update(cond_resched, __cond_resched);
++		static_call_update(might_resched, __cond_resched);
++		static_call_update(preempt_schedule, NULL);
++		static_call_update(preempt_schedule_notrace, NULL);
++		static_call_update(irqentry_exit_cond_resched, NULL);
++		pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		static_call_update(cond_resched, (void *)&__static_call_return0);
++		static_call_update(might_resched, (void *)&__static_call_return0);
++		static_call_update(preempt_schedule, __preempt_schedule_func);
++		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
++		pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 1;
++	}
++
++	sched_dynamic_update(mode);
++	return 0;
++}
++__setup("preempt=", setup_preempt_mode);
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note that this is called and returns with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(true);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p) && task_sched_prio_idx(p, rq) != p->sq_idx) {
++		requeue_task(p, rq);
++		check_preempt_curr(rq);
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there is a load of trickery needed to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling until the task is
++	 * SCHED_NORMAL/SCHED_BATCH again:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40] */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) ||
++		capable(CAP_SYS_NICE));
++}
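++
++/*
++ * Worked example: nice_to_rlimit() maps nice 19..-20 onto 1..40
++ * (rlimit-style value = 20 - nice), so a task with an RLIMIT_NICE of
++ * 25 may lower its nice value down to -5 without CAP_SYS_NICE.
++ */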
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy              return value    kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle  [0 ... 39]      100            0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]     [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
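++
++/*
++ * Worked example for the fifo/rr row above: a SCHED_FIFO task with
++ * user rt_priority 50 has kernel prio p->prio = 99 - 50 = 49, so
++ * task_prio() returns 49 - MAX_RT_PRIO = -51.
++ */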
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will not have
++	 * any effect on scheduling until the task is SCHED_NORMAL/
++	 * SCHED_BATCH again.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	bool match;
++
++	rcu_read_lock();
++	pcred = __task_cred(p);
++	match = (uid_eq(cred->euid, pcred->euid) ||
++		 uid_eq(cred->euid, pcred->uid));
++	rcu_read_unlock();
++	return match;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct callback_head *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into SCHED_FIFO via dl_squash_attr above
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	/*
++	 * Allow unprivileged RT tasks to decrease priority:
++	 */
++	if (user && !capable(CAP_SYS_NICE)) {
++		if (SCHED_FIFO == policy || SCHED_RR == policy) {
++			unsigned long rlim_rtprio =
++					task_rlimit(p, RLIMIT_RTPRIO);
++
++			/* Can't set/change the rt policy */
++			if (policy != p->policy && !rlim_rtprio)
++				return -EPERM;
++
++			/* Can't increase priority */
++			if (attr->sched_priority > p->rt_priority &&
++			    attr->sched_priority > rlim_rtprio)
++				return -EPERM;
++		}
++
++		/* Can't change other user's priorities */
++		if (!check_same_owner(p))
++			return -EPERM;
++
++		/* Normal users shall not reset the sched_reset_on_fork flag */
++		if (p->sched_reset_on_fork && !reset_on_fork)
++			return -EPERM;
++	}
++
++	if (user) {
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	if (pi)
++		cpuset_read_lock();
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here: for a task p which is not
++	 * running, reading rq->stop is racy but acceptable, as ->stop
++	 * doesn't change much.
++	 * An enhancement could be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop thread is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		if (pi)
++			cpuset_read_unlock();
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		if (rt_effective_prio(p, newprio) == p->prio) {
++			__setscheduler_params(p, attr);
++			retval = 0;
++			goto unlock;
++		}
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi) {
++		cpuset_read_unlock();
++		rt_mutex_adjust_pi(p);
++	}
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	if (pi)
++		cpuset_read_unlock();
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may already be dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system; the kernel simply doesn't
++ * have enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
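++
++/*
++ * Illustrative in-kernel usage (a hypothetical kthread; do_work() is
++ * a placeholder, not part of this patch):
++ *
++ *	static int my_rt_thread(void *arg)
++ *	{
++ *		sched_set_fifo(current);
++ *		while (!kthread_should_stop())
++ *			do_work(arg);
++ *		return 0;
++ *	}
++ */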
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++	struct task_struct *p;
++	int retval;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setscheduler(p, policy, &lparam);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setattr(p, &attr);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		goto out_nounlock;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (p) {
++		retval = security_task_getscheduler(p);
++		if (!retval)
++			retval = p->policy;
++	}
++	rcu_read_unlock();
++
++out_nounlock:
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (!param || pid < 0)
++		goto out_nounlock;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	if (task_has_rt_policy(p))
++		lp.sched_priority = p->rt_priority;
++	rcu_read_unlock();
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++
++out_nounlock:
++	return retval;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * parts of which the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
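++
++/*
++ * Worked example: an application built against an older UAPI header
++ * passes usize == SCHED_ATTR_SIZE_VER0; the kernel then copies only
++ * those leading bytes and reports that size back in attr->size, so
++ * old binaries keep working against a grown struct sched_attr.
++ */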
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	kattr.sched_policy = p->policy;
++	if (p->sched_reset_on_fork)
++		kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++	if (task_has_rt_policy(p))
++		kattr.sched_priority = p->rt_priority;
++	else
++		kattr.sched_nice = task_nice(p);
++
++#ifdef CONFIG_UCLAMP_TASK
++	kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++	kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++
++	rcu_read_unlock();
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
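++
++/*
++ * Illustrative user-space counterpart (assumes a libc without a
++ * dedicated wrapper, hence a raw syscall(2); pid 0 means the calling
++ * thread):
++ *
++ *	struct sched_attr attr;
++ *	if (syscall(SYS_sched_getattr, 0, &attr, sizeof(attr), 0) == 0)
++ *		printf("policy %u nice %d\n",
++ *		       attr.sched_policy, attr.sched_nice);
++ */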
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	cpumask_var_t cpus_allowed, new_mask;
++	struct task_struct *p;
++	int retval;
++
++	rcu_read_lock();
++
++	p = find_process_by_pid(pid);
++	if (!p) {
++		rcu_read_unlock();
++		return -ESRCH;
++	}
++
++	/* Prevent p going away */
++	get_task_struct(p);
++	rcu_read_unlock();
++
++	if (p->flags & PF_NO_SETAFFINITY) {
++		retval = -EINVAL;
++		goto out_put_task;
++	}
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_put_task;
++	}
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++	retval = -EPERM;
++	if (!check_same_owner(p)) {
++		rcu_read_lock();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
++			rcu_read_unlock();
++			goto out_free_new_mask;
++		}
++		rcu_read_unlock();
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, in_mask, cpus_allowed);
++
++again:
++	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
++
++	if (!retval) {
++		cpuset_cpus_allowed(p, cpus_allowed);
++		if (!cpumask_subset(new_mask, cpus_allowed)) {
++			/*
++			 * We must have raced with a concurrent cpuset
++			 * update. Just reset the cpus_allowed to the
++			 * cpuset's cpus_allowed
++			 */
++			cpumask_copy(new_mask, cpus_allowed);
++			goto again;
++		}
++	}
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++out_put_task:
++	put_task_struct(p);
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
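++
++/*
++ * Illustrative user-space usage (glibc wrappers; pid 0 means the
++ * calling thread):
++ *
++ *	cpu_set_t set;
++ *	CPU_ZERO(&set);
++ *	CPU_SET(2, &set);
++ *	if (sched_setaffinity(0, sizeof(set), &set))
++ *		perror("sched_setaffinity");
++ */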
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	raw_spinlock_t *lock;
++	unsigned long flags;
++	int retval;
++
++	rcu_read_lock();
++
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	task_access_lock_irqsave(p, &lock, &flags);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++out_unlock:
++	rcu_read_unlock();
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min_t(size_t, len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, mask, retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
++
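++/*
++ * Yield behaviour is selected by sched_yield_type: 0 makes yield a
++ * no-op, 1 deboosts/requeues the yielding task via
++ * do_sched_yield_type_1() unless it is an RT task, and 2 marks the
++ * task as the runqueue's "skip" task so the next pick passes over it.
++ */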
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	if (1 == sched_yield_type) {
++		if (!rt_task(current))
++			do_sched_yield_type_1(current, rq);
++	} else if (2 == sched_yield_type) {
++		if (rq->nr_running > 1)
++			rq->skip = current;
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_schedule_flush_plug(current);
++
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
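++/*
++ * Alt schedule FW uses a single fixed timeslice for all tasks, so
++ * after the usual pid lookup and security check simply report
++ * sched_timeslice_ns.
++ */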
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++	rcu_read_unlock();
++
++	*t = ns_to_timespec64(sched_timeslice_ns);
++	return 0;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%5lu pid:%5d ppid:%6d flags:0x%08lx\n",
++		free, task_pid_nr(p), ppid,
++		(unsigned long)task_thread_info(p)->flags);
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && state == TASK_IDLE)
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	set_kthread_struct(idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++	scs_task_reset(idle);
++	kasan_unpoison_task_stack(idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, cpumask_of(cpu));
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p,
++		    const struct cpumask *cs_cpus_allowed)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but
++ * it only takes effect while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++	SCHED_WARN_ON(rq->cpu != smp_processor_id());
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline.
++	 */
++	if (!cpu_dying(rq->cpu))
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
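++/*
++ * Install or remove the balance_push callback for @cpu; on removal,
++ * only clear it if it is still the push callback.
++ */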
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPUs hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online)
++		rq->online = false;
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
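++/*
++ * Outside of suspend/resume simply update the cpusets; during a
++ * suspend, count the frozen CPU and fall back to a single sched
++ * domain.
++ */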
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do the sync before parking smpboot threads, to take care of the
++	 * RCU boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	update_rq_clock(rq);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(&sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
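++/*
++ * Dump every task still queued on @rq's CPU; used by sched_cpu_dying()
++ * when a dying CPU turns out not to have been properly vacated.
++ */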
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
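++/*
++ * Early boot set-up: level 0 of each CPU's topology masks is the CPU
++ * itself and the next level covers all possible CPUs, until
++ * sched_init_topology_cpumask() fills in the real topology levels.
++ */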
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++		/*per_cpu(sd_llc_id, cpu) = cpu;*/
++	}
++}
++
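++/*
++ * Record one topology level for @cpu: if the remaining candidate set
++ * still intersects @mask, store @mask as this level and advance to the
++ * next slot; unless this is the last level, seed the next candidate
++ * set with the complement of @mask.
++ */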
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		cpumask_complement(topo, mask)
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take chance to reset time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++		cpumask_complement(topo, cpumask_of(cpu));
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __read_mostly;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO ALT_SCHED_VERSION_MSG);
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_BITS; i++)
++		cpumask_copy(sched_rq_watermark + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->watermark = IDLE_TASK_SCHED_PRIO;
++		rq->skip = NULL;
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	psi_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++static inline int preempt_count_equals(int preempt_offset)
++{
++	int nested = preempt_count() + rcu_preempt_depth();
++
++	return (nested == preempt_offset);
++}
++
++void __might_sleep(const char *file, int line, int preempt_offset)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	___might_sleep(file, line, preempt_offset);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++void ___might_sleep(const char *file, int line, int preempt_offset)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((preempt_count_equals(preempt_offset) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	printk(KERN_ERR
++		"BUG: sleeping function called from invalid context at %s:%d\n",
++			file, line);
++	printk(KERN_ERR
++		"in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(), current->non_block_count,
++			current->pid, current->comm);
++
++	if (task_stack_end_corrupted(current))
++		printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++#ifdef CONFIG_DEBUG_PREEMPT
++	if (!preempt_count_equals(preempt_offset)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++#endif
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(___might_sleep);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for the IA64 MCA handling, or kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_IA64
++/**
++ * ia64_set_curr_task - set the current task for a given CPU.
++ * @cpu: the processor in question.
++ * @p: the task pointer to set.
++ *
++ * Description: This function must only be used when non-maskable interrupts
++ * are serviced on a separate stack.  It allows the architecture to switch the
++ * notion of the current task on a CPU in a non-blocking manner.  This function
++ * must be called with all CPUs synchronised and interrupts disabled; the
++ * caller must save the original value of the current task (see
++ * curr_task() above) and restore that value before reenabling interrupts and
++ * re-starting the system.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ */
++void ia64_set_curr_task(int cpu, struct task_struct *p)
++{
++	cpu_curr(cpu) = p;
++}
++
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs */
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++void sched_offline_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_offline_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_free_group(tg);
++}
++
++static void cpu_cgroup_fork(struct task_struct *task)
++{
++}
++
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++
++static struct cftype cpu_files[] = {
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.fork		= cpu_cgroup_fork,
++	.can_attach	= cpu_cgroup_can_attach,
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..f03af9ab9123
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,692 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/sched.h>
++
++#include <linux/sched/clock.h>
++#include <linux/sched/cpufreq.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/signal.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/sysctl.h>
++#include <linux/sched/task.h>
++#include <linux/sched/topology.h>
++#include <linux/sched/wake_q.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <linux/cgroup.h>
++#include <linux/cpufreq.h>
++#include <linux/cpuidle.h>
++#include <linux/cpuset.h>
++#include <linux/ctype.h>
++#include <linux/debugfs.h>
++#include <linux/kthread.h>
++#include <linux/livepatch.h>
++#include <linux/membarrier.h>
++#include <linux/proc_fs.h>
++#include <linux/psi.h>
++#include <linux/slab.h>
++#include <linux/stop_machine.h>
++#include <linux/suspend.h>
++#include <linux/swait.h>
++#include <linux/syscalls.h>
++#include <linux/tsacct_kern.h>
++
++#include <asm/tlb.h>
++
++#ifdef CONFIG_PARAVIRT
++# include <asm/paravirt.h>
++#endif
++
++#include "cpupri.h"
++
++#include <trace/events/sched.h>
++
++#ifdef CONFIG_SCHED_BMQ
++/* bits:
++ * RT(0-99), (Low prio adj range, nice width, high prio adj range) / 2, cpu idle task */
++#define SCHED_BITS	(MAX_RT_PRIO + NICE_WIDTH / 2 + MAX_PRIORITY_ADJ + 1)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++/* bits: RT(0-99), reserved(100-127), NORMAL_PRIO_NUM, cpu idle task */
++#define SCHED_BITS	(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM + 1)
++#endif /* CONFIG_SCHED_PDS */
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_BITS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large;
++ * the same applies to the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/*
++ * wake flags
++ */
++#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
++#define WF_FORK		0x02		/* child wakeup after fork */
++#define WF_MIGRATED	0x04		/* internal use, task got migrated */
++#define WF_ON_CPU	0x08		/* Wakee is on_rq */
++
++#define SCHED_QUEUE_BITS	(SCHED_BITS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_BITS];
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t lock;
++
++	struct task_struct __rcu *curr;
++	struct task_struct *idle, *stop, *skip;
++	struct mm_struct *prev_mm;
++
++	struct sched_queue	queue;
++#ifdef CONFIG_SCHED_PDS
++	u64			time_edge;
++#endif
++	unsigned long watermark;
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	int active_balance;
++	struct cpu_stop_work	active_balance_work;
++#endif
++	struct callback_head	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	u64 clock, last_tick;
++	u64 last_ts_switch;
++	u64 clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++};
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++	ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DECLARE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++extern void flush_smp_call_function_from_idle(void);
++
++#else  /* !CONFIG_SMP */
++static inline void flush_smp_call_function_from_idle(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax the lockdep_assert_held() check: as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax the lockdep_assert_held() check: as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are the scheduler APIs used by other kernel code.
++ * They use the dummy rq_flags defined below.
++ * TODO: BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_running(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove this once fast switching
++ * is available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++#endif /* ALT_SCHED_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..be3ee4a553ca
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,111 @@
++#define ALT_SCHED_VERSION_MSG "sched/bmq: BMQ CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++/*
++ * BMQ only routines
++ */
++#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sched_timeslice_ns >>\
++				 (15 - MAX_PRIORITY_ADJ -  (p)->boost_prio))
++
++static inline void boost_task(struct task_struct *p)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	if (p->boost_prio > limit)
++		p->boost_prio--;
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio : MAX_RT_PRIO / 2 + (p->prio + p->boost_prio) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p)) {
++		if (SCHED_RR != p->policy)
++			deboost_task(p);
++		requeue_task(p, rq);
++	}
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = (p->boost_prio < 0) ?
++		p->boost_prio + MAX_PRIORITY_ADJ : MAX_PRIORITY_ADJ;
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	if (this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++		boost_task(p);
++}
++#endif
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	if (rq_switch_time(rq) < boost_threshold(p))
++		boost_task(p);
++}
++
++static inline void update_rq_time_edge(struct rq *rq) {}
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 57124614363d..4057e51cef45 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -57,6 +57,13 @@ struct sugov_cpu {
+ 	unsigned long		bw_dl;
+ 	unsigned long		max;
+ 
++#ifdef CONFIG_SCHED_ALT
++	/* For general cpu load util */
++	s32			load_history;
++	u64			load_block;
++	u64			load_stamp;
++#endif
++
+ 	/* The field below is for single-CPU policies only: */
+ #ifdef CONFIG_NO_HZ_COMMON
+ 	unsigned long		saved_idle_calls;
+@@ -161,6 +168,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
+ 	return cpufreq_driver_resolve_freq(policy, freq);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ {
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+@@ -172,6 +180,55 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ 					  FREQUENCY_UTIL, NULL);
+ }
+ 
++#else /* CONFIG_SCHED_ALT */
++
++#define SG_CPU_LOAD_HISTORY_BITS	(sizeof(s32) * 8ULL)
++#define SG_CPU_UTIL_SHIFT		(8)
++#define SG_CPU_LOAD_HISTORY_SHIFT	(SG_CPU_LOAD_HISTORY_BITS - 1 - SG_CPU_UTIL_SHIFT)
++#define SG_CPU_LOAD_HISTORY_TO_UTIL(l)	(((l) >> SG_CPU_LOAD_HISTORY_SHIFT) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (SG_CPU_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static void sugov_get_util(struct sugov_cpu *sg_cpu)
++{
++	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
++
++	sg_cpu->max = max;
++	sg_cpu->bw_dl = 0;
++	sg_cpu->util = SG_CPU_LOAD_HISTORY_TO_UTIL(sg_cpu->load_history) *
++		(max >> SG_CPU_UTIL_SHIFT);
++}
++
++static inline void sugov_cpu_load_update(struct sugov_cpu *sg_cpu, u64 time)
++{
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(sg_cpu->load_stamp),
++			SG_CPU_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(sg_cpu->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!cpu_rq(sg_cpu->cpu)->nr_running;
++
++	if (delta) {
++		sg_cpu->load_history = sg_cpu->load_history >> delta;
++
++		if (delta <= SG_CPU_UTIL_SHIFT) {
++			sg_cpu->load_block += (~BLOCK_MASK(sg_cpu->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(sg_cpu->load_block) ^ curr)
++				sg_cpu->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		sg_cpu->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		sg_cpu->load_block += (time - sg_cpu->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		sg_cpu->load_history ^= CURRENT_LOAD_BIT;
++	sg_cpu->load_stamp = time;
++}
++#endif /* CONFIG_SCHED_ALT */
++
+ /**
+  * sugov_iowait_reset() - Reset the IO boost status of a CPU.
+  * @sg_cpu: the sugov data for the CPU to boost
+@@ -312,13 +369,19 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+ 					      u64 time, unsigned int flags)
+ {
++#ifdef CONFIG_SCHED_ALT
++	sugov_cpu_load_update(sg_cpu, time);
++#endif /* CONFIG_SCHED_ALT */
++
+ 	sugov_iowait_boost(sg_cpu, time, flags);
+ 	sg_cpu->last_update = time;
+ 
+@@ -439,6 +502,10 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
+ 
+ 	raw_spin_lock(&sg_policy->update_lock);
+ 
++#ifdef CONFIG_SCHED_ALT
++	sugov_cpu_load_update(sg_cpu, time);
++#endif /* CONFIG_SCHED_ALT */
++
+ 	sugov_iowait_boost(sg_cpu, time, flags);
+ 	sg_cpu->last_update = time;
+ 
+@@ -599,6 +666,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+@@ -833,7 +901,9 @@ cpufreq_governor_init(schedutil_gov);
+ #ifdef CONFIG_ENERGY_MODEL
+ static void rebuild_sd_workfn(struct work_struct *work)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	rebuild_sched_domains_energy();
++#endif /* CONFIG_SCHED_ALT */
+ }
+ static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
+ 
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 872e481d5098..f920c8b48ec1 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -123,7 +123,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -147,7 +147,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		cpustat[CPUTIME_NICE] += cputime;
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -270,7 +270,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -280,7 +280,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -612,7 +612,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	task_cputime(p, &cputime.utime, &cputime.stime);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 0c5ec2776ddf..e3f4fe3f6e2c 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -8,6 +8,7 @@
+  */
+ #include "sched.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /proc/sched_debug and
+  * to the console
+@@ -210,6 +211,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -273,6 +275,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ static const struct seq_operations sched_debug_sops;
+@@ -288,6 +291,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -297,12 +301,15 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
+ 	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
+ 	debugfs_create_u32("wakeup_granularity_ns", 0644, debugfs_sched, &sysctl_sched_wakeup_granularity);
+@@ -330,11 +337,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1047,6 +1056,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 912b47aa99d8..7f6b13883c2a 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -403,6 +403,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -525,3 +526,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..0f1f0d708b77
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,127 @@
++#define ALT_SCHED_VERSION_MSG "sched/pds: PDS CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++static int sched_timeslice_shift = 22;
++
++#define NORMAL_PRIO_MOD(x)	((x) & (NORMAL_PRIO_NUM - 1))
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++	if (2 == timeslice_ms)
++		sched_timeslice_shift = 21;
++}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	s64 delta = p->deadline - rq->time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;
++
++	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", delta))
++		return NORMAL_PRIO_NUM - 1;
++
++	return (delta < 0) ? 0 : delta;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio :
++		MIN_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio : MIN_NORMAL_PRIO +
++		NORMAL_PRIO_MOD(task_sched_prio_normal(p, rq) + rq->time_edge);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == prio || prio < MAX_RT_PRIO) ? prio :
++		MIN_NORMAL_PRIO + NORMAL_PRIO_MOD((prio - MIN_NORMAL_PRIO) +
++						  rq->time_edge);
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return (idx < MAX_RT_PRIO) ? idx : MIN_NORMAL_PRIO +
++		NORMAL_PRIO_MOD((idx - MIN_NORMAL_PRIO) + NORMAL_PRIO_NUM -
++				NORMAL_PRIO_MOD(rq->time_edge));
++}
++
++static inline void sched_renew_deadline(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MAX_RT_PRIO)
++		p->deadline = (rq->clock >> sched_timeslice_shift) +
++			p->static_prio - (MAX_PRIO - NICE_WIDTH);
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void update_rq_time_edge(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++
++	if (now == old)
++		return;
++
++	delta = min_t(u64, NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	for_each_set_bit(prio, &rq->queue.bitmap[2], delta)
++		list_splice_tail_init(rq->queue.heads + MIN_NORMAL_PRIO +
++				      NORMAL_PRIO_MOD(prio + old), &head);
++
++	rq->queue.bitmap[2] = (NORMAL_PRIO_NUM == delta) ? 0UL :
++		rq->queue.bitmap[2] >> delta;
++	rq->time_edge = now;
++	if (!list_empty(&head)) {
++		u64 idx = MIN_NORMAL_PRIO + NORMAL_PRIO_MOD(now);
++		struct task_struct *p;
++
++		list_for_each_entry(p, &head, sq_node)
++			p->sq_idx = idx;
++
++		list_splice(&head, rq->queue.heads + idx);
++		rq->queue.bitmap[2] |= 1UL;
++	}
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++	sched_renew_deadline(p, rq);
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + NICE_WIDTH - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_renew_deadline(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	time_slice_expired(p, rq);
++}
++
++#ifdef CONFIG_SMP
++static inline void sched_task_ttwu(struct task_struct *p) {}
++#endif
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a554e3bbab2b..3e56f5e6ff5c 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -270,6 +270,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -387,8 +388,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * thermal:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index e06071bf3472..adf567df34d4 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -42,6 +44,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -153,9 +156,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -173,6 +178,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index ddefb0419d7a..658c41b15d3c 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2,6 +2,10 @@
+ /*
+  * Scheduler internal types and methods:
+  */
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched.h>
+ 
+ #include <linux/sched/autogroup.h>
+@@ -3038,3 +3042,8 @@ extern int sched_dynamic_mode(const char *str);
+ extern void sched_dynamic_update(int mode);
+ #endif
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 3f93fc3b5648..528b71e144e9 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -22,8 +22,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -40,6 +42,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -68,6 +71,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index b77ad49dc14f..be9edf086412 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -4,6 +4,7 @@
+  */
+ #include "sched.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ DEFINE_MUTEX(sched_domains_mutex);
+ 
+ /* Protected by sched_domains_mutex: */
+@@ -1382,8 +1383,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1617,6 +1620,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1646,6 +1650,7 @@ void set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology = tl;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2451,3 +2456,17 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
++
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 272f4a272f8c..1c9455c8ecf6 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -122,6 +122,10 @@ static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+ static int two_hundred = 200;
+ static int one_thousand = 1000;
++#ifdef CONFIG_SCHED_ALT
++static int __maybe_unused zero = 0;
++extern int sched_yield_type;
++#endif
+ #ifdef CONFIG_PRINTK
+ static int ten_thousand = 10000;
+ #endif
+@@ -1730,6 +1734,24 @@ int proc_do_static_key(struct ctl_table *table, int write,
+ }
+ 
+ static struct ctl_table kern_table[] = {
++#ifdef CONFIG_SCHED_ALT
++/* In ALT, only "sched_schedstats" is supported */
++#ifdef CONFIG_SCHED_DEBUG
++#ifdef CONFIG_SMP
++#ifdef CONFIG_SCHEDSTATS
++	{
++		.procname	= "sched_schedstats",
++		.data		= NULL,
++		.maxlen		= sizeof(unsigned int),
++		.mode		= 0644,
++		.proc_handler	= sysctl_schedstats,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_ONE,
++	},
++#endif /* CONFIG_SCHEDSTATS */
++#endif /* CONFIG_SMP */
++#endif /* CONFIG_SCHED_DEBUG */
++#else  /* !CONFIG_SCHED_ALT */
+ 	{
+ 		.procname	= "sched_child_runs_first",
+ 		.data		= &sysctl_sched_child_runs_first,
+@@ -1860,6 +1882,7 @@ static struct ctl_table kern_table[] = {
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PROVE_LOCKING
+ 	{
+ 		.procname	= "prove_locking",
+@@ -2436,6 +2459,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= &zero,
++		.extra2		= &two,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4a66725b1d4a..cb80ed5c1f5c 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1940,8 +1940,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+ 	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 517be7fd175e..de3afe8e0800 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -216,7 +216,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -801,6 +801,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -808,6 +809,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		__group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -835,8 +837,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -850,7 +854,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1086,8 +1090,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index adf7ef194005..11c8f36e281b 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1052,10 +1052,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 0000000..d449eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2021-04-27 07:38:30.556467045 -0400
++++ b/init/Kconfig	2021-04-27 07:39:32.956412800 -0400
+@@ -780,8 +780,9 @@ config GENERIC_SCHED_CLOCK
+ menu "Scheduler features"
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+	  This feature enables alternative CPU schedulers.
+ 


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-15 11:58 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-15 11:58 UTC (permalink / raw
  To: gentoo-commits

commit:     756955cf3ec599943c85ce5eed917d9441d0d6a9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 15 11:58:20 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 15 11:58:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=756955cf

Linux patch 5.14.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     8 +
 1003_linux-5.14.4.patch | 13171 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 13179 insertions(+)

diff --git a/0000_README b/0000_README
index f4fbe66..79faaf3 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,14 @@ Patch:  1002_linux-5.14.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.3
 
+Patch:  1003_linux-5.14.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.14.4.patch b/1003_linux-5.14.4.patch
new file mode 100644
index 0000000..2f4c377
--- /dev/null
+++ b/1003_linux-5.14.4.patch
@@ -0,0 +1,13171 @@
+diff --git a/Documentation/fault-injection/provoke-crashes.rst b/Documentation/fault-injection/provoke-crashes.rst
+index a20ba5d939320..18de17354206a 100644
+--- a/Documentation/fault-injection/provoke-crashes.rst
++++ b/Documentation/fault-injection/provoke-crashes.rst
+@@ -29,7 +29,7 @@ recur_count
+ cpoint_name
+ 	Where in the kernel to trigger the action. It can be
+ 	one of INT_HARDWARE_ENTRY, INT_HW_IRQ_EN, INT_TASKLET_ENTRY,
+-	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_DISPATCH_CMD,
++	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ,
+ 	IDE_CORE_CP, or DIRECT
+ 
+ cpoint_type
+diff --git a/Makefile b/Makefile
+index 8715942fccb4a..e16a1a80074cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+index aa24cac8e5be5..44b03a5e24166 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+@@ -2832,7 +2832,7 @@
+ 
+ &emmc {
+ 	status = "okay";
+-	clk-phase-mmc-hs200 = <180>, <180>;
++	clk-phase-mmc-hs200 = <210>, <228>;
+ };
+ 
+ &fsim0 {
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index 7e90d713f5e58..6dde51c2aed3f 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -208,12 +208,12 @@
+ 	};
+ 
+ 	pinctrl_hvi3c3_default: hvi3c3_default {
+-		function = "HVI3C3";
++		function = "I3C3";
+ 		groups = "HVI3C3";
+ 	};
+ 
+ 	pinctrl_hvi3c4_default: hvi3c4_default {
+-		function = "HVI3C4";
++		function = "I3C4";
+ 		groups = "HVI3C4";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index edca66c232c15..ebbc9b23aef1c 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -92,6 +92,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay"; /* Conflict with pwm0. */
+ 
+ 		red {
+@@ -537,6 +539,10 @@
+ 				 AT91_PIOA 19 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)	/* PA19 DAT2 periph A with pullup */
+ 				 AT91_PIOA 20 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)>;	/* PA20 DAT3 periph A with pullup */
+ 		};
++		pinctrl_sdmmc0_cd: sdmmc0_cd {
++			atmel,pins =
++				<AT91_PIOA 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
+ 	};
+ 
+ 	sdmmc1 {
+@@ -569,6 +575,14 @@
+ 				      AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 		};
+ 	};
++
++	leds {
++		pinctrl_gpio_leds: gpio_leds {
++			atmel,pins = <AT91_PIOB 11 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 12 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
++	};
+ }; /* pinctrl */
+ 
+ &pwm0 {
+@@ -580,7 +594,7 @@
+ &sdmmc0 {
+ 	bus-width = <4>;
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&pinctrl_sdmmc0_default>;
++	pinctrl-0 = <&pinctrl_sdmmc0_default &pinctrl_sdmmc0_cd>;
+ 	status = "okay";
+ 	cd-gpios = <&pioA 23 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index 9c55a921263bd..cc55d1684322b 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -57,6 +57,8 @@
+ 			};
+ 
+ 			spi0: spi@f0004000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioD 13 0>, <0>, <0>, <&pioD 16 0>;
+ 				status = "okay";
+ 			};
+@@ -169,6 +171,8 @@
+ 			};
+ 
+ 			spi1: spi@f8008000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi1_cs>;
+ 				cs-gpios = <&pioC 25 0>;
+ 				status = "okay";
+ 			};
+@@ -248,6 +252,26 @@
+ 							<AT91_PIOE 3 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+ 							 AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 					};
++
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 24 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOD 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi1_cs: spi1_cs_default {
++						atmel,pins = <AT91_PIOC 25 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_vcc_mmc0_reg_gpio: vcc_mmc0_reg_gpio_default {
++						atmel,pins = <AT91_PIOE 2 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -339,6 +363,8 @@
+ 
+ 	vcc_mmc0_reg: fixedregulator_mmc0 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc0_reg_gpio>;
+ 		gpio = <&pioE 2 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "mmc0-card-supply";
+ 		regulator-min-microvolt = <3300000>;
+@@ -362,6 +388,9 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
++		status = "okay";
+ 
+ 		d2 {
+ 			label = "d2";
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index 0b3ad1b580b83..e42dae06b5826 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -90,6 +90,8 @@
+ 			};
+ 
+ 			spi1: spi@fc018000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioB 21 0>;
+ 				status = "okay";
+ 			};
+@@ -147,6 +149,19 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+ 					};
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOD 30 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 15 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_vcc_mmc1_reg: vcc_mmc1_reg {
++						atmel,pins =
++							<AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -252,6 +267,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay";
+ 
+ 		d8 {
+@@ -278,6 +295,8 @@
+ 
+ 	vcc_mmc1_reg: fixedregulator_mmc1 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc1_reg>;
+ 		gpio = <&pioE 4 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "VDD MCI1";
+ 		regulator-min-microvolt = <3300000>;
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 157a950a55d38..686c7b7c79d55 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -304,8 +304,13 @@
+ 					  "pp2", "ppmmu2", "pp4", "ppmmu4",
+ 					  "pp5", "ppmmu5", "pp6", "ppmmu6";
+ 			resets = <&reset RESET_MALI>;
++
+ 			clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
+ 			clock-names = "bus", "core";
++
++			assigned-clocks = <&clkc CLKID_MALI>;
++			assigned-clock-rates = <318750000>;
++
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+diff --git a/arch/arm/boot/dts/meson8b-ec100.dts b/arch/arm/boot/dts/meson8b-ec100.dts
+index 8e48ccc6b634e..7e8ddc6f1252b 100644
+--- a/arch/arm/boot/dts/meson8b-ec100.dts
++++ b/arch/arm/boot/dts/meson8b-ec100.dts
+@@ -148,7 +148,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -232,7 +232,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-mxq.dts b/arch/arm/boot/dts/meson8b-mxq.dts
+index f3937d55472d4..7adedd3258c33 100644
+--- a/arch/arm/boot/dts/meson8b-mxq.dts
++++ b/arch/arm/boot/dts/meson8b-mxq.dts
+@@ -34,6 +34,8 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
++		pwm-supply = <&vcc_5v>;
++
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+ 
+@@ -79,7 +81,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-odroidc1.dts b/arch/arm/boot/dts/meson8b-odroidc1.dts
+index c440ef94e0820..04356bc639faf 100644
+--- a/arch/arm/boot/dts/meson8b-odroidc1.dts
++++ b/arch/arm/boot/dts/meson8b-odroidc1.dts
+@@ -131,7 +131,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 0 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+@@ -163,7 +163,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 1 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+index 10244e59d56dd..56a0bb7eb0e69 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+@@ -102,7 +102,7 @@
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+ 			reg =	<0x11001000 0x1000>,
+-				<0x11002000 0x1000>,
++				<0x11002000 0x2000>,
+ 				<0x11004000 0x2000>,
+ 				<0x11006000 0x2000>;
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index a05b1ab2dd12c..04da07ae44208 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -135,6 +135,23 @@
+ 	pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
+ 	status = "okay";
+ 	reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
++	/*
++	 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
++	 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
++	 * 2 size cells and also expects that the second range starts at 16 MB offset. If these
++	 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
++	 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
++	 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
++	 * This bug is not present in U-Boot ports for other Armada 3700 devices and is fixed in
++	 * U-Boot version 2021.07. See relevant U-Boot commits (the last one contains fix):
++	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
++	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
++	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++	 */
++	#address-cells = <3>;
++	#size-cells = <2>;
++	ranges = <0x81000000 0 0xe8000000   0 0xe8000000   0 0x01000000   /* Port 0 IO */
++		  0x82000000 0 0xe9000000   0 0xe9000000   0 0x07000000>; /* Port 0 MEM */
+ 
+ 	/* enabled by U-Boot if PCIe module is present */
+ 	status = "disabled";
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 5db81a416cd65..9acc5d2b5a002 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -489,8 +489,15 @@
+ 			#interrupt-cells = <1>;
+ 			msi-parent = <&pcie0>;
+ 			msi-controller;
+-			ranges = <0x82000000 0 0xe8000000   0 0xe8000000 0 0x1000000 /* Port 0 MEM */
+-				  0x81000000 0 0xe9000000   0 0xe9000000 0 0x10000>; /* Port 0 IO*/
++			/*
++			 * The 128 MiB address range [0xe8000000-0xf0000000] is
++			 * dedicated for PCIe and can be assigned to 8 windows
++			 * with size a power of two. Use one 64 KiB window for
++			 * IO at the end and the remaining seven windows
++			 * (totaling 127 MiB) for MEM.
++			 */
++			ranges = <0x82000000 0 0xe8000000   0 0xe8000000   0 0x07f00000   /* Port 0 MEM */
++				  0x81000000 0 0xefff0000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
+ 			interrupt-map-mask = <0 0 0 7>;
+ 			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ 					<0 0 0 2 &pcie_intc 1>,
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index 6f9c071475513..a758e4d226122 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -23,7 +23,7 @@ ap_h1_spi: &spi0 {};
+ 	adau7002: audio-codec-1 {
+ 		compatible = "adi,adau7002";
+ 		IOVDD-supply = <&pp1800_l15a>;
+-		wakeup-delay-ms = <15>;
++		wakeup-delay-ms = <80>;
+ 		#sound-dai-cells = <0>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 188c5768a55ae..c08f074106994 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1437,9 +1437,9 @@
+ 
+ 		cpufreq_hw: cpufreq@18591000 {
+ 			compatible = "qcom,cpufreq-epss";
+-			reg = <0 0x18591000 0 0x1000>,
+-			      <0 0x18592000 0 0x1000>,
+-			      <0 0x18593000 0 0x1000>;
++			reg = <0 0x18591100 0 0x900>,
++			      <0 0x18592100 0 0x900>,
++			      <0 0x18593100 0 0x900>;
+ 			clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ 			clock-names = "xo", "alternate";
+ 			#freq-domain-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 4798368b02efb..9a6eff1813a68 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -2210,7 +2210,7 @@
+ 				 <&gcc GCC_USB3_PHY_SEC_BCR>;
+ 			reset-names = "phy", "common";
+ 
+-			usb_2_ssphy: lane@88eb200 {
++			usb_2_ssphy: lanes@88eb200 {
+ 				reg = <0 0x088eb200 0 0x200>,
+ 				      <0 0x088eb400 0 0x200>,
+ 				      <0 0x088eb800 0 0x800>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 0d16392bb9767..dbc174d424e26 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -666,12 +666,10 @@
+ 			clocks = <&rpmhcc RPMH_IPA_CLK>;
+ 			clock-names = "core";
+ 
+-			interconnects = <&aggre2_noc MASTER_IPA &gem_noc SLAVE_LLCC>,
+-					<&mc_virt MASTER_LLCC &mc_virt SLAVE_EBI1>,
++			interconnects = <&aggre2_noc MASTER_IPA &mc_virt SLAVE_EBI1>,
+ 					<&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_IPA_CFG>;
+-			interconnect-names = "ipa_to_llcc",
+-					     "llcc_to_ebi1",
+-					     "appss_to_ipa";
++			interconnect-names = "memory",
++					     "config";
+ 
+ 			qcom,smem-states = <&ipa_smp2p_out 0>,
+ 					   <&ipa_smp2p_out 1>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+index 202c4fc88bd51..dde3a07bc417c 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+@@ -20,6 +20,7 @@
+ 	pinctrl-names = "default";
+ 	phy-handle = <&phy0>;
+ 	tx-internal-delay-ps = <2000>;
++	rx-internal-delay-ps = <1800>;
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+index 6783c3ad08567..57784341f39d7 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+@@ -277,10 +277,6 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <28 IRQ_TYPE_LEVEL_LOW>;
+ 
+-		/* Depends on LVDS */
+-		max-clock = <135000000>;
+-		min-vrefresh = <50>;
+-
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 0ca72f5cda41b..5d1fc9c4bca5e 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -15,6 +15,7 @@
+ #include <linux/fs.h>
+ #include <linux/mman.h>
+ #include <linux/sched.h>
++#include <linux/kmemleak.h>
+ #include <linux/kvm.h>
+ #include <linux/kvm_irqfd.h>
+ #include <linux/irqbypass.h>
+@@ -1986,6 +1987,12 @@ static int finalize_hyp_mode(void)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Exclude HYP BSS from kmemleak so that it doesn't get peeked
++	 * at, which would end badly once the section is inaccessible.
++	 * None of the other sections should ever be introspected.
++	 */
++	kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
+ 	ret = pkvm_mark_hyp_section(__hyp_bss);
+ 	if (ret)
+ 		return ret;
+diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
+index 2c580204f1dc9..95a18cec14a35 100644
+--- a/arch/arm64/kvm/vgic/vgic-v2.c
++++ b/arch/arm64/kvm/vgic/vgic-v2.c
+@@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
+ 		u32 val = cpuif->vgic_lr[lr];
+ 		u32 cpuid, intid = val & GICH_LR_VIRTUALID;
+ 		struct vgic_irq *irq;
++		bool deactivated;
+ 
+ 		/* Extract the source vCPU id from the LR */
+ 		cpuid = val & GICH_LR_PHYSID_CPUID;
+@@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
+ 
+ 		raw_spin_lock(&irq->irq_lock);
+ 
+-		/* Always preserve the active bit */
++		/* Always preserve the active bit, note deactivation */
++		deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
+ 		irq->active = !!(val & GICH_LR_ACTIVE_BIT);
+ 
+ 		if (irq->active && vgic_irq_is_sgi(intid))
+@@ -96,36 +98,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
+ 		if (irq->config == VGIC_CONFIG_LEVEL && !(val & GICH_LR_STATE))
+ 			irq->pending_latch = false;
+ 
+-		/*
+-		 * Level-triggered mapped IRQs are special because we only
+-		 * observe rising edges as input to the VGIC.
+-		 *
+-		 * If the guest never acked the interrupt we have to sample
+-		 * the physical line and set the line level, because the
+-		 * device state could have changed or we simply need to
+-		 * process the still pending interrupt later.
+-		 *
+-		 * If this causes us to lower the level, we have to also clear
+-		 * the physical active state, since we will otherwise never be
+-		 * told when the interrupt becomes asserted again.
+-		 *
+-		 * Another case is when the interrupt requires a helping hand
+-		 * on deactivation (no HW deactivation, for example).
+-		 */
+-		if (vgic_irq_is_mapped_level(irq)) {
+-			bool resample = false;
+-
+-			if (val & GICH_LR_PENDING_BIT) {
+-				irq->line_level = vgic_get_phys_line_level(irq);
+-				resample = !irq->line_level;
+-			} else if (vgic_irq_needs_resampling(irq) &&
+-				   !(irq->active || irq->pending_latch)) {
+-				resample = true;
+-			}
+-
+-			if (resample)
+-				vgic_irq_set_phys_active(irq, false);
+-		}
++		/* Handle resampling for mapped interrupts if required */
++		vgic_irq_handle_resampling(irq, deactivated, val & GICH_LR_PENDING_BIT);
+ 
+ 		raw_spin_unlock(&irq->irq_lock);
+ 		vgic_put_irq(vcpu->kvm, irq);
+diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
+index 66004f61cd83d..21a6207fb2eed 100644
+--- a/arch/arm64/kvm/vgic/vgic-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-v3.c
+@@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
+ 		u32 intid, cpuid;
+ 		struct vgic_irq *irq;
+ 		bool is_v2_sgi = false;
++		bool deactivated;
+ 
+ 		cpuid = val & GICH_LR_PHYSID_CPUID;
+ 		cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
+@@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
+ 
+ 		raw_spin_lock(&irq->irq_lock);
+ 
+-		/* Always preserve the active bit */
++		/* Always preserve the active bit, note deactivation */
++		deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
+ 		irq->active = !!(val & ICH_LR_ACTIVE_BIT);
+ 
+ 		if (irq->active && is_v2_sgi)
+@@ -89,36 +91,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
+ 		if (irq->config == VGIC_CONFIG_LEVEL && !(val & ICH_LR_STATE))
+ 			irq->pending_latch = false;
+ 
+-		/*
+-		 * Level-triggered mapped IRQs are special because we only
+-		 * observe rising edges as input to the VGIC.
+-		 *
+-		 * If the guest never acked the interrupt we have to sample
+-		 * the physical line and set the line level, because the
+-		 * device state could have changed or we simply need to
+-		 * process the still pending interrupt later.
+-		 *
+-		 * If this causes us to lower the level, we have to also clear
+-		 * the physical active state, since we will otherwise never be
+-		 * told when the interrupt becomes asserted again.
+-		 *
+-		 * Another case is when the interrupt requires a helping hand
+-		 * on deactivation (no HW deactivation, for example).
+-		 */
+-		if (vgic_irq_is_mapped_level(irq)) {
+-			bool resample = false;
+-
+-			if (val & ICH_LR_PENDING_BIT) {
+-				irq->line_level = vgic_get_phys_line_level(irq);
+-				resample = !irq->line_level;
+-			} else if (vgic_irq_needs_resampling(irq) &&
+-				   !(irq->active || irq->pending_latch)) {
+-				resample = true;
+-			}
+-
+-			if (resample)
+-				vgic_irq_set_phys_active(irq, false);
+-		}
++		/* Handle resampling for mapped interrupts if required */
++		vgic_irq_handle_resampling(irq, deactivated, val & ICH_LR_PENDING_BIT);
+ 
+ 		raw_spin_unlock(&irq->irq_lock);
+ 		vgic_put_irq(vcpu->kvm, irq);
+diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
+index 111bff47e4710..42a6ac78fe959 100644
+--- a/arch/arm64/kvm/vgic/vgic.c
++++ b/arch/arm64/kvm/vgic/vgic.c
+@@ -1022,3 +1022,41 @@ bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid)
+ 
+ 	return map_is_active;
+ }
++
++/*
++ * Level-triggered mapped IRQs are special because we only observe rising
++ * edges as input to the VGIC.
++ *
++ * If the guest never acked the interrupt we have to sample the physical
++ * line and set the line level, because the device state could have changed
++ * or we simply need to process the still pending interrupt later.
++ *
++ * We could also have entered the guest with the interrupt active+pending.
++ * On the next exit, we need to re-evaluate the pending state, as it could
++ * otherwise result in a spurious interrupt by injecting a now potentially
++ * stale pending state.
++ *
++ * If this causes us to lower the level, we have to also clear the physical
++ * active state, since we will otherwise never be told when the interrupt
++ * becomes asserted again.
++ *
++ * Another case is when the interrupt requires a helping hand on
++ * deactivation (no HW deactivation, for example).
++ */
++void vgic_irq_handle_resampling(struct vgic_irq *irq,
++				bool lr_deactivated, bool lr_pending)
++{
++	if (vgic_irq_is_mapped_level(irq)) {
++		bool resample = false;
++
++		if (unlikely(vgic_irq_needs_resampling(irq))) {
++			resample = !(irq->active || irq->pending_latch);
++		} else if (lr_pending || (lr_deactivated && irq->line_level)) {
++			irq->line_level = vgic_get_phys_line_level(irq);
++			resample = !irq->line_level;
++		}
++
++		if (resample)
++			vgic_irq_set_phys_active(irq, false);
++	}
++}
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index dc1f3d1657ee9..14a9218641f57 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -169,6 +169,8 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
+ bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
+ 			   unsigned long flags);
+ void vgic_kick_vcpus(struct kvm *kvm);
++void vgic_irq_handle_resampling(struct vgic_irq *irq,
++				bool lr_deactivated, bool lr_pending);
+ 
+ int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
+ 		      phys_addr_t addr, phys_addr_t alignment);
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index 29e946394fdb4..277d61a094637 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -26,6 +26,7 @@ config COLDFIRE
+ 	bool "Coldfire CPU family support"
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select GENERIC_CSUM
+ 	select GPIOLIB
+@@ -39,6 +40,7 @@ config M68000
+ 	bool
+ 	depends on !MMU
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select GENERIC_CSUM
+@@ -54,6 +56,7 @@ config M68000
+ config MCPU32
+ 	bool
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select CPU_NO_EFFICIENT_FFS
+ 	help
+@@ -383,7 +386,7 @@ config ADVANCED
+ 
+ config RMW_INSNS
+ 	bool "Use read-modify-write instructions"
+-	depends on ADVANCED
++	depends on ADVANCED && !CPU_HAS_NO_CAS
+ 	help
+ 	  This allows to use certain instructions that work with indivisible
+ 	  read-modify-write bus cycles. While this is faster than the
+@@ -450,6 +453,9 @@ config M68K_L2_CACHE
+ config CPU_HAS_NO_BITFIELDS
+ 	bool
+ 
++config CPU_HAS_NO_CAS
++	bool
++
+ config CPU_HAS_NO_MULDIV64
+ 	bool
+ 
+diff --git a/arch/m68k/coldfire/clk.c b/arch/m68k/coldfire/clk.c
+index 2ed841e941113..d03b6c4aa86b4 100644
+--- a/arch/m68k/coldfire/clk.c
++++ b/arch/m68k/coldfire/clk.c
+@@ -78,7 +78,7 @@ int clk_enable(struct clk *clk)
+ 	unsigned long flags;
+ 
+ 	if (!clk)
+-		return -EINVAL;
++		return 0;
+ 
+ 	spin_lock_irqsave(&clk_lock, flags);
+ 	if ((clk->enabled++ == 0) && clk->clk_ops)
+diff --git a/arch/m68k/emu/nfeth.c b/arch/m68k/emu/nfeth.c
+index d2875e32abfca..79e55421cfb18 100644
+--- a/arch/m68k/emu/nfeth.c
++++ b/arch/m68k/emu/nfeth.c
+@@ -254,8 +254,8 @@ static void __exit nfeth_cleanup(void)
+ 
+ 	for (i = 0; i < MAX_UNIT; i++) {
+ 		if (nfeth_dev[i]) {
+-			unregister_netdev(nfeth_dev[0]);
+-			free_netdev(nfeth_dev[0]);
++			unregister_netdev(nfeth_dev[i]);
++			free_netdev(nfeth_dev[i]);
+ 		}
+ 	}
+ 	free_irq(nfEtherIRQ, nfeth_interrupt);
+diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
+index 8637bf8a2f652..cfba83d230fde 100644
+--- a/arch/m68k/include/asm/atomic.h
++++ b/arch/m68k/include/asm/atomic.h
+@@ -48,7 +48,7 @@ static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
+ 			"	casl %2,%1,%0\n"			\
+ 			"	jne 1b"					\
+ 			: "+m" (*v), "=&d" (t), "=&d" (tmp)		\
+-			: "g" (i), "2" (arch_atomic_read(v)));		\
++			: "di" (i), "2" (arch_atomic_read(v)));		\
+ 	return t;							\
+ }
+ 
+@@ -63,7 +63,7 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
+ 			"	casl %2,%1,%0\n"			\
+ 			"	jne 1b"					\
+ 			: "+m" (*v), "=&d" (t), "=&d" (tmp)		\
+-			: "g" (i), "2" (arch_atomic_read(v)));		\
++			: "di" (i), "2" (arch_atomic_read(v)));		\
+ 	return tmp;							\
+ }
+ 
+diff --git a/arch/parisc/boot/compressed/misc.c b/arch/parisc/boot/compressed/misc.c
+index 2d395998f524a..7ee49f5881d15 100644
+--- a/arch/parisc/boot/compressed/misc.c
++++ b/arch/parisc/boot/compressed/misc.c
+@@ -26,7 +26,7 @@
+ extern char input_data[];
+ extern int input_len;
+ /* output_len is inserted by the linker possibly at an unaligned address */
+-extern __le32 output_len __aligned(1);
++extern char output_len;
+ extern char _text, _end;
+ extern char _bss, _ebss;
+ extern char _startcode_end;
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index 161a9e12bfb86..630eab0fa1760 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -957,6 +957,7 @@ struct kvm_arch{
+ 	atomic64_t cmma_dirty_pages;
+ 	/* subset of available cpu features enabled by user space */
+ 	DECLARE_BITMAP(cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
++	/* indexed by vcpu_idx */
+ 	DECLARE_BITMAP(idle_mask, KVM_MAX_VCPUS);
+ 	struct kvm_s390_gisa_interrupt gisa_int;
+ 	struct kvm_s390_pv pv;
+diff --git a/arch/s390/include/asm/lowcore.h b/arch/s390/include/asm/lowcore.h
+index 47bde5a20a41c..11213c8bfca56 100644
+--- a/arch/s390/include/asm/lowcore.h
++++ b/arch/s390/include/asm/lowcore.h
+@@ -124,7 +124,8 @@ struct lowcore {
+ 	/* Restart function and parameter. */
+ 	__u64	restart_fn;			/* 0x0370 */
+ 	__u64	restart_data;			/* 0x0378 */
+-	__u64	restart_source;			/* 0x0380 */
++	__u32	restart_source;			/* 0x0380 */
++	__u32	restart_flags;			/* 0x0384 */
+ 
+ 	/* Address space pointer. */
+ 	__u64	kernel_asce;			/* 0x0388 */
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index ddc7858bbce40..879b8e3f609cd 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -26,6 +26,8 @@
+ #define _CIF_MCCK_GUEST		BIT(CIF_MCCK_GUEST)
+ #define _CIF_DEDICATED_CPU	BIT(CIF_DEDICATED_CPU)
+ 
++#define RESTART_FLAG_CTLREGS	_AC(1 << 0, U)
++
+ #ifndef __ASSEMBLY__
+ 
+ #include <linux/cpumask.h>
+diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c
+index 77ff2130cb045..dc53b0452ce2f 100644
+--- a/arch/s390/kernel/asm-offsets.c
++++ b/arch/s390/kernel/asm-offsets.c
+@@ -116,6 +116,7 @@ int main(void)
+ 	OFFSET(__LC_RESTART_FN, lowcore, restart_fn);
+ 	OFFSET(__LC_RESTART_DATA, lowcore, restart_data);
+ 	OFFSET(__LC_RESTART_SOURCE, lowcore, restart_source);
++	OFFSET(__LC_RESTART_FLAGS, lowcore, restart_flags);
+ 	OFFSET(__LC_KERNEL_ASCE, lowcore, kernel_asce);
+ 	OFFSET(__LC_USER_ASCE, lowcore, user_asce);
+ 	OFFSET(__LC_LPP, lowcore, lpp);
+diff --git a/arch/s390/kernel/debug.c b/arch/s390/kernel/debug.c
+index 09b6c6402f9b7..05b765b8038eb 100644
+--- a/arch/s390/kernel/debug.c
++++ b/arch/s390/kernel/debug.c
+@@ -24,6 +24,7 @@
+ #include <linux/export.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/minmax.h>
+ #include <linux/debugfs.h>
+ 
+ #include <asm/debug.h>
+@@ -92,6 +93,8 @@ static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view,
+ 				     char *out_buf, const char *in_buf);
+ static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view,
+ 				   char *out_buf, debug_sprintf_entry_t *curr_event);
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b);
++static void debug_events_append(debug_info_t *dest, debug_info_t *src);
+ 
+ /* globals */
+ 
+@@ -311,24 +314,6 @@ static debug_info_t *debug_info_create(const char *name, int pages_per_area,
+ 		goto out;
+ 
+ 	rc->mode = mode & ~S_IFMT;
+-
+-	/* create root directory */
+-	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
+-						    debug_debugfs_root_entry);
+-
+-	/* append new element to linked list */
+-	if (!debug_area_first) {
+-		/* first element in list */
+-		debug_area_first = rc;
+-		rc->prev = NULL;
+-	} else {
+-		/* append element to end of list */
+-		debug_area_last->next = rc;
+-		rc->prev = debug_area_last;
+-	}
+-	debug_area_last = rc;
+-	rc->next = NULL;
+-
+ 	refcount_set(&rc->ref_count, 1);
+ out:
+ 	return rc;
+@@ -388,27 +373,10 @@ static void debug_info_get(debug_info_t *db_info)
+  */
+ static void debug_info_put(debug_info_t *db_info)
+ {
+-	int i;
+-
+ 	if (!db_info)
+ 		return;
+-	if (refcount_dec_and_test(&db_info->ref_count)) {
+-		for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
+-			if (!db_info->views[i])
+-				continue;
+-			debugfs_remove(db_info->debugfs_entries[i]);
+-		}
+-		debugfs_remove(db_info->debugfs_root_entry);
+-		if (db_info == debug_area_first)
+-			debug_area_first = db_info->next;
+-		if (db_info == debug_area_last)
+-			debug_area_last = db_info->prev;
+-		if (db_info->prev)
+-			db_info->prev->next = db_info->next;
+-		if (db_info->next)
+-			db_info->next->prev = db_info->prev;
++	if (refcount_dec_and_test(&db_info->ref_count))
+ 		debug_info_free(db_info);
+-	}
+ }
+ 
+ /*
+@@ -632,6 +600,31 @@ static int debug_close(struct inode *inode, struct file *file)
+ 	return 0; /* success */
+ }
+ 
++/* Create debugfs entries and add to internal list. */
++static void _debug_register(debug_info_t *id)
++{
++	/* create root directory */
++	id->debugfs_root_entry = debugfs_create_dir(id->name,
++						    debug_debugfs_root_entry);
++
++	/* append new element to linked list */
++	if (!debug_area_first) {
++		/* first element in list */
++		debug_area_first = id;
++		id->prev = NULL;
++	} else {
++		/* append element to end of list */
++		debug_area_last->next = id;
++		id->prev = debug_area_last;
++	}
++	debug_area_last = id;
++	id->next = NULL;
++
++	debug_register_view(id, &debug_level_view);
++	debug_register_view(id, &debug_flush_view);
++	debug_register_view(id, &debug_pages_view);
++}
++
+ /**
+  * debug_register_mode() - creates and initializes debug area.
+  *
+@@ -661,19 +654,16 @@ debug_info_t *debug_register_mode(const char *name, int pages_per_area,
+ 	if ((uid != 0) || (gid != 0))
+ 		pr_warn("Root becomes the owner of all s390dbf files in sysfs\n");
+ 	BUG_ON(!initialized);
+-	mutex_lock(&debug_mutex);
+ 
+ 	/* create new debug_info */
+ 	rc = debug_info_create(name, pages_per_area, nr_areas, buf_size, mode);
+-	if (!rc)
+-		goto out;
+-	debug_register_view(rc, &debug_level_view);
+-	debug_register_view(rc, &debug_flush_view);
+-	debug_register_view(rc, &debug_pages_view);
+-out:
+-	if (!rc)
++	if (rc) {
++		mutex_lock(&debug_mutex);
++		_debug_register(rc);
++		mutex_unlock(&debug_mutex);
++	} else {
+ 		pr_err("Registering debug feature %s failed\n", name);
+-	mutex_unlock(&debug_mutex);
++	}
+ 	return rc;
+ }
+ EXPORT_SYMBOL(debug_register_mode);
+@@ -702,6 +692,27 @@ debug_info_t *debug_register(const char *name, int pages_per_area,
+ }
+ EXPORT_SYMBOL(debug_register);
+ 
++/* Remove debugfs entries and remove from internal list. */
++static void _debug_unregister(debug_info_t *id)
++{
++	int i;
++
++	for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
++		if (!id->views[i])
++			continue;
++		debugfs_remove(id->debugfs_entries[i]);
++	}
++	debugfs_remove(id->debugfs_root_entry);
++	if (id == debug_area_first)
++		debug_area_first = id->next;
++	if (id == debug_area_last)
++		debug_area_last = id->prev;
++	if (id->prev)
++		id->prev->next = id->next;
++	if (id->next)
++		id->next->prev = id->prev;
++}
++
+ /**
+  * debug_unregister() - give back debug area.
+  *
+@@ -715,8 +726,10 @@ void debug_unregister(debug_info_t *id)
+ 	if (!id)
+ 		return;
+ 	mutex_lock(&debug_mutex);
+-	debug_info_put(id);
++	_debug_unregister(id);
+ 	mutex_unlock(&debug_mutex);
++
++	debug_info_put(id);
+ }
+ EXPORT_SYMBOL(debug_unregister);
+ 
+@@ -726,35 +739,28 @@ EXPORT_SYMBOL(debug_unregister);
+  */
+ static int debug_set_size(debug_info_t *id, int nr_areas, int pages_per_area)
+ {
+-	debug_entry_t ***new_areas;
++	debug_info_t *new_id;
+ 	unsigned long flags;
+-	int rc = 0;
+ 
+ 	if (!id || (nr_areas <= 0) || (pages_per_area < 0))
+ 		return -EINVAL;
+-	if (pages_per_area > 0) {
+-		new_areas = debug_areas_alloc(pages_per_area, nr_areas);
+-		if (!new_areas) {
+-			pr_info("Allocating memory for %i pages failed\n",
+-				pages_per_area);
+-			rc = -ENOMEM;
+-			goto out;
+-		}
+-	} else {
+-		new_areas = NULL;
++
++	new_id = debug_info_alloc("", pages_per_area, nr_areas, id->buf_size,
++				  id->level, ALL_AREAS);
++	if (!new_id) {
++		pr_info("Allocating memory for %i pages failed\n",
++			pages_per_area);
++		return -ENOMEM;
+ 	}
++
+ 	spin_lock_irqsave(&id->lock, flags);
+-	debug_areas_free(id);
+-	id->areas = new_areas;
+-	id->nr_areas = nr_areas;
+-	id->pages_per_area = pages_per_area;
+-	id->active_area = 0;
+-	memset(id->active_entries, 0, sizeof(int)*id->nr_areas);
+-	memset(id->active_pages, 0, sizeof(int)*id->nr_areas);
++	debug_events_append(new_id, id);
++	debug_areas_swap(new_id, id);
++	debug_info_free(new_id);
+ 	spin_unlock_irqrestore(&id->lock, flags);
+ 	pr_info("%s: set new size (%i pages)\n", id->name, pages_per_area);
+-out:
+-	return rc;
++
++	return 0;
+ }
+ 
+ /**
+@@ -821,6 +827,42 @@ static inline debug_entry_t *get_active_entry(debug_info_t *id)
+ 				  id->active_entries[id->active_area]);
+ }
+ 
++/* Swap debug areas of a and b. */
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b)
++{
++	swap(a->nr_areas, b->nr_areas);
++	swap(a->pages_per_area, b->pages_per_area);
++	swap(a->areas, b->areas);
++	swap(a->active_area, b->active_area);
++	swap(a->active_pages, b->active_pages);
++	swap(a->active_entries, b->active_entries);
++}
++
++/* Append all debug events in active area from source to destination log. */
++static void debug_events_append(debug_info_t *dest, debug_info_t *src)
++{
++	debug_entry_t *from, *to, *last;
++
++	if (!src->areas || !dest->areas)
++		return;
++
++	/* Loop over all entries in src, starting with oldest. */
++	from = get_active_entry(src);
++	last = from;
++	do {
++		if (from->clock != 0LL) {
++			to = get_active_entry(dest);
++			memset(to, 0, dest->entry_size);
++			memcpy(to, from, min(src->entry_size,
++					     dest->entry_size));
++			proceed_active_entry(dest);
++		}
++
++		proceed_active_entry(src);
++		from = get_active_entry(src);
++	} while (from != last);
++}
++
+ /*
+  * debug_finish_entry:
+  * - set timestamp, caller address, cpu number etc.
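
The rewritten debug_set_size() above no longer resizes the live buffers in
place: it allocates a complete replacement with debug_info_alloc(), copies the
recorded events across with debug_events_append(), and exchanges the area
pointers with debug_areas_swap() while holding the lock, so readers never
observe a half-resized structure and an allocation failure leaves the old
areas intact. A userspace sketch of the same swap-based resize pattern (the
log type and names here are illustrative only):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <pthread.h>

	struct log {
		pthread_mutex_t lock;
		size_t nr;
		int *slots;
	};

	/* Build the replacement off-line, then swap it in under the lock:
	 * readers never see a half-resized log, and allocation failure
	 * leaves the old one untouched. */
	static int log_resize(struct log *lg, size_t new_nr)
	{
		int *fresh = calloc(new_nr, sizeof(*fresh));
		size_t keep;

		if (!fresh)
			return -1;

		pthread_mutex_lock(&lg->lock);
		keep = lg->nr < new_nr ? lg->nr : new_nr;
		memcpy(fresh, lg->slots, keep * sizeof(*fresh)); /* carry over */
		free(lg->slots);		/* retire the old area */
		lg->slots = fresh;
		lg->nr = new_nr;
		pthread_mutex_unlock(&lg->lock);
		return 0;
	}

	int main(void)
	{
		struct log lg = { PTHREAD_MUTEX_INITIALIZER, 4,
				  calloc(4, sizeof(int)) };

		lg.slots[0] = 42;
		log_resize(&lg, 16);
		printf("slot 0 survived: %d\n", lg.slots[0]);
		return 0;
	}
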
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 5a2f70cbd3a9d..b9716a7e326d0 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -624,12 +624,15 @@ ENTRY(mcck_int_handler)
+ 4:	j	4b
+ ENDPROC(mcck_int_handler)
+ 
+-#
+-# PSW restart interrupt handler
+-#
+ ENTRY(restart_int_handler)
+ 	ALTERNATIVE "", ".insn s,0xb2800000,_LPP_OFFSET", 40
+ 	stg	%r15,__LC_SAVE_AREA_RESTART
++	TSTMSK	__LC_RESTART_FLAGS,RESTART_FLAG_CTLREGS,4
++	jz	0f
++	la	%r15,4095
++	lctlg	%c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r15)
++0:	larl	%r15,.Lstosm_tmp
++	stosm	0(%r15),0x04			# turn dat on, keep irqs off
+ 	lg	%r15,__LC_RESTART_STACK
+ 	xc	STACK_FRAME_OVERHEAD(__PT_SIZE,%r15),STACK_FRAME_OVERHEAD(%r15)
+ 	stmg	%r0,%r14,STACK_FRAME_OVERHEAD+__PT_R0(%r15)
+@@ -638,7 +641,7 @@ ENTRY(restart_int_handler)
+ 	xc	0(STACK_FRAME_OVERHEAD,%r15),0(%r15)
+ 	lg	%r1,__LC_RESTART_FN		# load fn, parm & source cpu
+ 	lg	%r2,__LC_RESTART_DATA
+-	lg	%r3,__LC_RESTART_SOURCE
++	lgf	%r3,__LC_RESTART_SOURCE
+ 	ltgr	%r3,%r3				# test source cpu address
+ 	jm	1f				# negative -> skip source stop
+ 0:	sigp	%r4,%r3,SIGP_SENSE		# sigp sense to source cpu
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 50e2c21e0ec94..911cd39123514 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -179,8 +179,6 @@ static inline int __diag308(unsigned long subcode, void *addr)
+ 
+ int diag308(unsigned long subcode, void *addr)
+ {
+-	if (IS_ENABLED(CONFIG_KASAN))
+-		__arch_local_irq_stosm(0x04); /* enable DAT */
+ 	diag_stat_inc(DIAG_STAT_X308);
+ 	return __diag308(subcode, addr);
+ }
+@@ -1843,7 +1841,6 @@ static struct kobj_attribute on_restart_attr = __ATTR_RW(on_restart);
+ 
+ static void __do_restart(void *ignore)
+ {
+-	__arch_local_irq_stosm(0x04); /* enable DAT */
+ 	smp_send_stop();
+ #ifdef CONFIG_CRASH_DUMP
+ 	crash_kexec(NULL);
+diff --git a/arch/s390/kernel/machine_kexec.c b/arch/s390/kernel/machine_kexec.c
+index 1005a6935fbe3..c1fbc979e0e8b 100644
+--- a/arch/s390/kernel/machine_kexec.c
++++ b/arch/s390/kernel/machine_kexec.c
+@@ -263,7 +263,6 @@ static void __do_machine_kexec(void *data)
+  */
+ static void __machine_kexec(void *data)
+ {
+-	__arch_local_irq_stosm(0x04); /* enable DAT */
+ 	pfault_fini();
+ 	tracing_off();
+ 	debug_locks_off();
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index ff0f9e8389162..ee23908f1b960 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -421,7 +421,7 @@ static void __init setup_lowcore_dat_off(void)
+ 	lc->restart_stack = (unsigned long) restart_stack;
+ 	lc->restart_fn = (unsigned long) do_restart;
+ 	lc->restart_data = 0;
+-	lc->restart_source = -1UL;
++	lc->restart_source = -1U;
+ 
+ 	mcck_stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
+ 	if (!mcck_stack)
+@@ -450,12 +450,19 @@ static void __init setup_lowcore_dat_off(void)
+ 
+ static void __init setup_lowcore_dat_on(void)
+ {
++	struct lowcore *lc = lowcore_ptr[0];
++
+ 	__ctl_clear_bit(0, 28);
+ 	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
+ 	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
+ 	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
+ 	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
++	__ctl_store(S390_lowcore.cregs_save_area, 0, 15);
+ 	__ctl_set_bit(0, 28);
++	mem_assign_absolute(S390_lowcore.restart_flags, RESTART_FLAG_CTLREGS);
++	mem_assign_absolute(S390_lowcore.program_new_psw, lc->program_new_psw);
++	memcpy_absolute(&S390_lowcore.cregs_save_area, lc->cregs_save_area,
++			sizeof(S390_lowcore.cregs_save_area));
+ }
+ 
+ static struct resource code_resource = {
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 8984711f72ede..8e8ace899407c 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -252,6 +252,7 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
+ 	cpumask_set_cpu(cpu, &init_mm.context.cpu_attach_mask);
+ 	cpumask_set_cpu(cpu, mm_cpumask(&init_mm));
+ 	lc->cpu_nr = cpu;
++	lc->restart_flags = RESTART_FLAG_CTLREGS;
+ 	lc->spinlock_lockval = arch_spin_lockval(cpu);
+ 	lc->spinlock_index = 0;
+ 	lc->percpu_offset = __per_cpu_offset[cpu];
+@@ -297,7 +298,7 @@ static void pcpu_start_fn(struct pcpu *pcpu, void (*func)(void *), void *data)
+ 	lc->restart_stack = lc->nodat_stack;
+ 	lc->restart_fn = (unsigned long) func;
+ 	lc->restart_data = (unsigned long) data;
+-	lc->restart_source = -1UL;
++	lc->restart_source = -1U;
+ 	pcpu_sigp_retry(pcpu, SIGP_RESTART, 0);
+ }
+ 
+@@ -311,12 +312,12 @@ static void __pcpu_delegate(pcpu_delegate_fn *func, void *data)
+ 	func(data);	/* should not return */
+ }
+ 
+-static void __no_sanitize_address pcpu_delegate(struct pcpu *pcpu,
+-						pcpu_delegate_fn *func,
+-						void *data, unsigned long stack)
++static void pcpu_delegate(struct pcpu *pcpu,
++			  pcpu_delegate_fn *func,
++			  void *data, unsigned long stack)
+ {
+ 	struct lowcore *lc = lowcore_ptr[pcpu - pcpu_devices];
+-	unsigned long source_cpu = stap();
++	unsigned int source_cpu = stap();
+ 
+ 	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
+ 	if (pcpu->address == source_cpu) {
+@@ -569,6 +570,9 @@ static void smp_ctl_bit_callback(void *info)
+ 	__ctl_load(cregs, 0, 15);
+ }
+ 
++static DEFINE_SPINLOCK(ctl_lock);
++static unsigned long ctlreg;
++
+ /*
+  * Set a bit in a control register of all cpus
+  */
+@@ -576,6 +580,11 @@ void smp_ctl_set_bit(int cr, int bit)
+ {
+ 	struct ec_creg_mask_parms parms = { 1UL << bit, -1UL, cr };
+ 
++	spin_lock(&ctl_lock);
++	memcpy_absolute(&ctlreg, &S390_lowcore.cregs_save_area[cr], sizeof(ctlreg));
++	__set_bit(bit, &ctlreg);
++	memcpy_absolute(&S390_lowcore.cregs_save_area[cr], &ctlreg, sizeof(ctlreg));
++	spin_unlock(&ctl_lock);
+ 	on_each_cpu(smp_ctl_bit_callback, &parms, 1);
+ }
+ EXPORT_SYMBOL(smp_ctl_set_bit);
+@@ -587,6 +596,11 @@ void smp_ctl_clear_bit(int cr, int bit)
+ {
+ 	struct ec_creg_mask_parms parms = { 0, ~(1UL << bit), cr };
+ 
++	spin_lock(&ctl_lock);
++	memcpy_absolute(&ctlreg, &S390_lowcore.cregs_save_area[cr], sizeof(ctlreg));
++	__clear_bit(bit, &ctlreg);
++	memcpy_absolute(&S390_lowcore.cregs_save_area[cr], &ctlreg, sizeof(ctlreg));
++	spin_unlock(&ctl_lock);
+ 	on_each_cpu(smp_ctl_bit_callback, &parms, 1);
+ }
+ EXPORT_SYMBOL(smp_ctl_clear_bit);
+@@ -895,14 +909,13 @@ static void smp_init_secondary(void)
+ /*
+  *	Activate a secondary processor.
+  */
+-static void __no_sanitize_address smp_start_secondary(void *cpuvoid)
++static void smp_start_secondary(void *cpuvoid)
+ {
+ 	S390_lowcore.restart_stack = (unsigned long) restart_stack;
+ 	S390_lowcore.restart_fn = (unsigned long) do_restart;
+ 	S390_lowcore.restart_data = 0;
+-	S390_lowcore.restart_source = -1UL;
+-	__ctl_load(S390_lowcore.cregs_save_area, 0, 15);
+-	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
++	S390_lowcore.restart_source = -1U;
++	S390_lowcore.restart_flags = 0;
+ 	call_on_stack_noreturn(smp_init_secondary, S390_lowcore.kernel_stack);
+ }
+ 
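
smp_ctl_set_bit() and smp_ctl_clear_bit() above now also patch the saved
control-register image in the lowcore save area, under a new spinlock, before
broadcasting the change, so a CPU that is (re)started later loads the current
value from cregs_save_area rather than a stale copy. The pattern, keeping the
boot image and the running CPUs in sync under one lock, sketched in userspace
(apply_to_all_cpus() is a made-up stand-in for on_each_cpu()):

	#include <stdio.h>
	#include <stdbool.h>
	#include <pthread.h>

	static pthread_mutex_t ctl_lock = PTHREAD_MUTEX_INITIALIZER;
	static unsigned long saved_cr;	/* image a late-started CPU loads */

	static void apply_to_all_cpus(int bit, bool set)
	{
		/* stub for the on_each_cpu() broadcast */
		printf("%s bit %d on all running CPUs\n",
		       set ? "set" : "clear", bit);
	}

	static void ctl_set_bit(int bit)
	{
		pthread_mutex_lock(&ctl_lock);
		saved_cr |= 1UL << bit;		/* future CPUs see it */
		pthread_mutex_unlock(&ctl_lock);
		apply_to_all_cpus(bit, true);	/* running CPUs updated too */
	}

	int main(void)
	{
		ctl_set_bit(3);
		return 0;
	}
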
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index d548d60caed25..16256e17a544a 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -419,13 +419,13 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
+ static void __set_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
+-	set_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
+@@ -3050,18 +3050,18 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
+ 
+ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+ {
+-	int vcpu_id, online_vcpus = atomic_read(&kvm->online_vcpus);
++	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
+ 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+ 	struct kvm_vcpu *vcpu;
+ 
+-	for_each_set_bit(vcpu_id, kvm->arch.idle_mask, online_vcpus) {
+-		vcpu = kvm_get_vcpu(kvm, vcpu_id);
++	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
++		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+ 		if (psw_ioint_disabled(vcpu))
+ 			continue;
+ 		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+ 		if (deliverable_mask) {
+ 			/* lately kicked but not yet running */
+-			if (test_and_set_bit(vcpu_id, gi->kicked_mask))
++			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
+ 				return;
+ 			kvm_s390_vcpu_wakeup(vcpu);
+ 			return;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 4527ac7b5961d..8580543c5bc33 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4044,7 +4044,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
+ 		kvm_s390_patch_guest_per_regs(vcpu);
+ 	}
+ 
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.gisa_int.kicked_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
+ 
+ 	vcpu->arch.sie_block->icptcode = 0;
+ 	cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 9fad25109b0dd..ecd741ee3276e 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -79,7 +79,7 @@ static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
+ 
+ static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
+ {
+-	return test_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static inline int kvm_is_ucontrol(struct kvm *kvm)
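
The s390 KVM hunks above all switch idle_mask and kicked_mask from
vcpu->vcpu_id to kvm_vcpu_get_idx(): the bitmaps are walked with
for_each_set_bit() bounded by online_vcpus, which counts vcpu indices, so a
sparse, user-chosen vcpu id can land beyond the walked range and the idle
vcpu is simply never found. A toy demonstration of the mismatch:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long mask = 0;
		int online = 3;		/* three vcpus: idx 0, 1, 2 */
		int sparse_id = 40;	/* user-chosen id of the third vcpu */

		mask |= 1ULL << sparse_id;		/* buggy: set by id */
		for (int b = 0; b < online; b++)	/* bounded by count */
			if (mask & (1ULL << b))
				printf("found idle vcpu at bit %d\n", b);
		/* Prints nothing: bit 40 lies beyond the walked range.
		 * Setting bit 2 (the index) instead would be found. */
		return 0;
	}
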
+diff --git a/arch/s390/mm/kasan_init.c b/arch/s390/mm/kasan_init.c
+index a0fdc6dc5f9d0..cc3af046c14e5 100644
+--- a/arch/s390/mm/kasan_init.c
++++ b/arch/s390/mm/kasan_init.c
+@@ -107,6 +107,9 @@ static void __init kasan_early_pgtable_populate(unsigned long address,
+ 		sgt_prot &= ~_SEGMENT_ENTRY_NOEXEC;
+ 	}
+ 
++	/*
++	 * The first 1MB of 1:1 mapping is mapped with 4KB pages
++	 */
+ 	while (address < end) {
+ 		pg_dir = pgd_offset_k(address);
+ 		if (pgd_none(*pg_dir)) {
+@@ -157,30 +160,26 @@ static void __init kasan_early_pgtable_populate(unsigned long address,
+ 
+ 		pm_dir = pmd_offset(pu_dir, address);
+ 		if (pmd_none(*pm_dir)) {
+-			if (mode == POPULATE_ZERO_SHADOW &&
+-			    IS_ALIGNED(address, PMD_SIZE) &&
++			if (IS_ALIGNED(address, PMD_SIZE) &&
+ 			    end - address >= PMD_SIZE) {
+-				pmd_populate(&init_mm, pm_dir,
+-						kasan_early_shadow_pte);
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+-			}
+-			/* the first megabyte of 1:1 is mapped with 4k pages */
+-			if (has_edat && address && end - address >= PMD_SIZE &&
+-			    mode != POPULATE_ZERO_SHADOW) {
+-				void *page;
+-
+-				if (mode == POPULATE_ONE2ONE) {
+-					page = (void *)address;
+-				} else {
+-					page = kasan_early_alloc_segment();
+-					memset(page, 0, _SEGMENT_SIZE);
++				if (mode == POPULATE_ZERO_SHADOW) {
++					pmd_populate(&init_mm, pm_dir, kasan_early_shadow_pte);
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
++				} else if (has_edat && address) {
++					void *page;
++
++					if (mode == POPULATE_ONE2ONE) {
++						page = (void *)address;
++					} else {
++						page = kasan_early_alloc_segment();
++						memset(page, 0, _SEGMENT_SIZE);
++					}
++					pmd_val(*pm_dir) = __pa(page) | sgt_prot;
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
+ 				}
+-				pmd_val(*pm_dir) = __pa(page) | sgt_prot;
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+ 			}
+-
+ 			pt_dir = kasan_early_pte_alloc();
+ 			pmd_populate(&init_mm, pm_dir, pt_dir);
+ 		} else if (pmd_large(*pm_dir)) {
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 8fcb7ecb7225a..77cd965cffefa 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -661,9 +661,10 @@ int zpci_enable_device(struct zpci_dev *zdev)
+ {
+ 	int rc;
+ 
+-	rc = clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES);
+-	if (rc)
++	if (clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES)) {
++		rc = -EIO;
+ 		goto out;
++	}
+ 
+ 	rc = zpci_dma_init_device(zdev);
+ 	if (rc)
+@@ -684,7 +685,7 @@ int zpci_disable_device(struct zpci_dev *zdev)
+ 	 * The zPCI function may already be disabled by the platform, this is
+ 	 * detected in clp_disable_fh() which becomes a no-op.
+ 	 */
+-	return clp_disable_fh(zdev);
++	return clp_disable_fh(zdev) ? -EIO : 0;
+ }
+ 
+ /**
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index d3331596ddbe1..0a0e8b8293bef 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -213,15 +213,19 @@ out:
+ }
+ 
+ static int clp_refresh_fh(u32 fid);
+-/*
+- * Enable/Disable a given PCI function and update its function handle if
+- * necessary
++/**
++ * clp_set_pci_fn() - Execute a command on a PCI function
++ * @zdev: Function that will be affected
++ * @nr_dma_as: DMA address space number
++ * @command: The command code to execute
++ *
++ * Returns: 0 on success, < 0 for Linux errors (e.g. -ENOMEM), and
++ * > 0 for non-success platform responses
+  */
+ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ {
+ 	struct clp_req_rsp_set_pci *rrb;
+ 	int rc, retries = 100;
+-	u32 fid = zdev->fid;
+ 
+ 	rrb = clp_alloc_block(GFP_KERNEL);
+ 	if (!rrb)
+@@ -245,17 +249,16 @@ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ 		}
+ 	} while (rrb->response.hdr.rsp == CLP_RC_SETPCIFN_BUSY);
+ 
+-	if (rc || rrb->response.hdr.rsp != CLP_RC_OK) {
+-		zpci_err("Set PCI FN:\n");
+-		zpci_err_clp(rrb->response.hdr.rsp, rc);
+-	}
+-
+ 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
+ 		zdev->fh = rrb->response.fh;
+-	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY &&
+-			rrb->response.fh == 0) {
++	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY) {
+ 		/* Function is already in desired state - update handle */
+-		rc = clp_refresh_fh(fid);
++		rc = clp_refresh_fh(zdev->fid);
++	} else {
++		zpci_err("Set PCI FN:\n");
++		zpci_err_clp(rrb->response.hdr.rsp, rc);
++		if (!rc)
++			rc = rrb->response.hdr.rsp;
+ 	}
+ 	clp_free_block(rrb);
+ 	return rc;
+@@ -301,17 +304,13 @@ int clp_enable_fh(struct zpci_dev *zdev, u8 nr_dma_as)
+ 
+ 	rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_PCI_FN);
+ 	zpci_dbg(3, "ena fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
+-	if (rc)
+-		goto out;
+-
+-	if (zpci_use_mio(zdev)) {
++	if (!rc && zpci_use_mio(zdev)) {
+ 		rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_MIO);
+ 		zpci_dbg(3, "ena mio fid:%x, fh:%x, rc:%d\n",
+ 				zdev->fid, zdev->fh, rc);
+ 		if (rc)
+ 			clp_disable_fh(zdev);
+ 	}
+-out:
+ 	return rc;
+ }
+ 
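
clp_set_pci_fn() now documents a three-way return convention: 0 on success, a
negative Linux errno, or a positive raw platform response; callers at the
boundary, like zpci_enable_device() and zpci_disable_device() above, fold any
positive response into -EIO before it reaches generic code. The shape of that
boundary, sketched (platform_op() is hypothetical):

	#include <stdio.h>
	#include <errno.h>

	/* Hypothetical low-level op: 0 ok, <0 errno, >0 firmware response. */
	static int platform_op(void)
	{
		return 0x10;	/* pretend firmware reported a status code */
	}

	/* Boundary function: generic callers only understand errnos. */
	static int enable_device(void)
	{
		int rc = platform_op();

		if (rc > 0)
			return -EIO;	/* fold platform status into errno */
		return rc;		/* 0 or a genuine negative errno */
	}

	int main(void)
	{
		printf("enable_device() = %d\n", enable_device());
		return 0;
	}
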
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 2144e54a6c892..388643ca2177e 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -849,6 +849,8 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		return -EINVAL;
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
++	if (err)
++		return err;
+ 
+ 	if (unlikely(tail > 0 && walk.nbytes < walk.total)) {
+ 		int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2;
+@@ -862,7 +864,10 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		skcipher_request_set_crypt(&subreq, req->src, req->dst,
+ 					   blocks * AES_BLOCK_SIZE, req->iv);
+ 		req = &subreq;
++
+ 		err = skcipher_walk_virt(&walk, req, false);
++		if (err)
++			return err;
+ 	} else {
+ 		tail = 0;
+ 	}
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index c682b09b18fa0..482a9931d1e65 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3838,26 +3838,32 @@ clear_attr_update:
+ 	return ret;
+ }
+ 
+-static int skx_iio_set_mapping(struct intel_uncore_type *type)
+-{
+-	return pmu_iio_set_mapping(type, &skx_iio_mapping_group);
+-}
+-
+-static void skx_iio_cleanup_mapping(struct intel_uncore_type *type)
++static void
++pmu_iio_cleanup_mapping(struct intel_uncore_type *type, struct attribute_group *ag)
+ {
+-	struct attribute **attr = skx_iio_mapping_group.attrs;
++	struct attribute **attr = ag->attrs;
+ 
+ 	if (!attr)
+ 		return;
+ 
+ 	for (; *attr; attr++)
+ 		kfree((*attr)->name);
+-	kfree(attr_to_ext_attr(*skx_iio_mapping_group.attrs));
+-	kfree(skx_iio_mapping_group.attrs);
+-	skx_iio_mapping_group.attrs = NULL;
++	kfree(attr_to_ext_attr(*ag->attrs));
++	kfree(ag->attrs);
++	ag->attrs = NULL;
+ 	kfree(type->topology);
+ }
+ 
++static int skx_iio_set_mapping(struct intel_uncore_type *type)
++{
++	return pmu_iio_set_mapping(type, &skx_iio_mapping_group);
++}
++
++static void skx_iio_cleanup_mapping(struct intel_uncore_type *type)
++{
++	pmu_iio_cleanup_mapping(type, &skx_iio_mapping_group);
++}
++
+ static struct intel_uncore_type skx_uncore_iio = {
+ 	.name			= "iio",
+ 	.num_counters		= 4,
+@@ -4501,6 +4507,11 @@ static int snr_iio_set_mapping(struct intel_uncore_type *type)
+ 	return pmu_iio_set_mapping(type, &snr_iio_mapping_group);
+ }
+ 
++static void snr_iio_cleanup_mapping(struct intel_uncore_type *type)
++{
++	pmu_iio_cleanup_mapping(type, &snr_iio_mapping_group);
++}
++
+ static struct intel_uncore_type snr_uncore_iio = {
+ 	.name			= "iio",
+ 	.num_counters		= 4,
+@@ -4517,7 +4528,7 @@ static struct intel_uncore_type snr_uncore_iio = {
+ 	.attr_update		= snr_iio_attr_update,
+ 	.get_topology		= snr_iio_get_topology,
+ 	.set_mapping		= snr_iio_set_mapping,
+-	.cleanup_mapping	= skx_iio_cleanup_mapping,
++	.cleanup_mapping	= snr_iio_cleanup_mapping,
+ };
+ 
+ static struct intel_uncore_type snr_uncore_irp = {
+@@ -5092,6 +5103,11 @@ static int icx_iio_set_mapping(struct intel_uncore_type *type)
+ 	return pmu_iio_set_mapping(type, &icx_iio_mapping_group);
+ }
+ 
++static void icx_iio_cleanup_mapping(struct intel_uncore_type *type)
++{
++	pmu_iio_cleanup_mapping(type, &icx_iio_mapping_group);
++}
++
+ static struct intel_uncore_type icx_uncore_iio = {
+ 	.name			= "iio",
+ 	.num_counters		= 4,
+@@ -5109,7 +5125,7 @@ static struct intel_uncore_type icx_uncore_iio = {
+ 	.attr_update		= icx_iio_attr_update,
+ 	.get_topology		= icx_iio_get_topology,
+ 	.set_mapping		= icx_iio_set_mapping,
+-	.cleanup_mapping	= skx_iio_cleanup_mapping,
++	.cleanup_mapping	= icx_iio_cleanup_mapping,
+ };
+ 
+ static struct intel_uncore_type icx_uncore_irp = {
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index 0607ec4f50914..da9321548f6f1 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -265,6 +265,7 @@ enum mcp_flags {
+ 	MCP_TIMESTAMP	= BIT(0),	/* log time stamp */
+ 	MCP_UC		= BIT(1),	/* log uncorrected errors */
+ 	MCP_DONTLOG	= BIT(2),	/* only clear, don't log */
++	MCP_QUEUE_LOG	= BIT(3),	/* only queue to genpool */
+ };
+ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 22791aadc085c..8cb7816d03b4c 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -817,7 +817,10 @@ log_it:
+ 		if (mca_cfg.dont_log_ce && !mce_usable_address(&m))
+ 			goto clear_it;
+ 
+-		mce_log(&m);
++		if (flags & MCP_QUEUE_LOG)
++			mce_gen_pool_add(&m);
++		else
++			mce_log(&m);
+ 
+ clear_it:
+ 		/*
+@@ -1639,10 +1642,12 @@ static void __mcheck_cpu_init_generic(void)
+ 		m_fl = MCP_DONTLOG;
+ 
+ 	/*
+-	 * Log the machine checks left over from the previous reset.
++	 * Log the machine checks left over from the previous reset. Log them
++	 * only, do not start processing them. That will happen in mcheck_late_init()
++	 * when all consumers have been registered on the notifier chain.
+ 	 */
+ 	bitmap_fill(all_banks, MAX_NR_BANKS);
+-	machine_check_poll(MCP_UC | m_fl, &all_banks);
++	machine_check_poll(MCP_UC | MCP_QUEUE_LOG | m_fl, &all_banks);
+ 
+ 	cr4_set_bits(X86_CR4_MCE);
+ 
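
The new MCP_QUEUE_LOG flag lets __mcheck_cpu_init_generic() record machine
checks left over from before a reset into the genpool without notifying
consumers that have not registered yet; they are processed later from
mcheck_late_init(). The underlying pattern is a mode flag that defers side
effects until the consumers exist; a compact sketch:

	#include <stdio.h>

	#define MCP_TIMESTAMP	(1u << 0)
	#define MCP_QUEUE_LOG	(1u << 3)	/* queue only, process later */

	struct event { int bank; };

	static struct event queue[16];
	static int queued;

	static void handle_event(struct event *e, unsigned int flags)
	{
		if (flags & MCP_QUEUE_LOG)
			queue[queued++] = *e;	/* stash; consumers not ready */
		else
			printf("logging bank %d now\n", e->bank);
	}

	int main(void)
	{
		struct event e = { .bank = 2 };

		handle_event(&e, MCP_QUEUE_LOG);	/* early boot path */
		handle_event(&e, 0);			/* normal path */
		printf("%d event(s) queued for later\n", queued);
		return 0;
	}
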
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 47b7652702397..c268fb59f7794 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -323,12 +323,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
+ static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+                                   struct x86_exception *exception)
+ {
+-	/* Check if guest physical address doesn't exceed guest maximum */
+-	if (kvm_vcpu_is_illegal_gpa(vcpu, gpa)) {
+-		exception->error_code |= PFERR_RSVD_MASK;
+-		return UNMAPPED_GVA;
+-	}
+-
+         return gpa;
+ }
+ 
+@@ -2852,6 +2846,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
+ 			      kvm_pfn_t pfn, int max_level)
+ {
+ 	struct kvm_lpage_info *linfo;
++	int host_level;
+ 
+ 	max_level = min(max_level, max_huge_page_level);
+ 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
+@@ -2863,7 +2858,8 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
+ 	if (max_level == PG_LEVEL_4K)
+ 		return PG_LEVEL_4K;
+ 
+-	return host_pfn_mapping_level(kvm, gfn, pfn, slot);
++	host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
++	return min(host_level, max_level);
+ }
+ 
+ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
+@@ -2887,17 +2883,12 @@ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
+ 	if (!slot)
+ 		return PG_LEVEL_4K;
+ 
+-	level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, gfn, pfn, max_level);
+-	if (level == PG_LEVEL_4K)
+-		return level;
+-
+-	*req_level = level = min(level, max_level);
+-
+ 	/*
+ 	 * Enforce the iTLB multihit workaround after capturing the requested
+ 	 * level, which will be used to do precise, accurate accounting.
+ 	 */
+-	if (huge_page_disallowed)
++	*req_level = level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, gfn, pfn, max_level);
++	if (level == PG_LEVEL_4K || huge_page_disallowed)
+ 		return PG_LEVEL_4K;
+ 
+ 	/*
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index d80cb122b5f38..0a1fa42d03aa6 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -412,6 +412,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 	bool was_leaf = was_present && is_last_spte(old_spte, level);
+ 	bool is_leaf = is_present && is_last_spte(new_spte, level);
+ 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
++	bool was_large, is_large;
+ 
+ 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
+ 	WARN_ON(level < PG_LEVEL_4K);
+@@ -445,13 +446,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 
+ 	trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
+ 
+-	if (is_large_pte(old_spte) != is_large_pte(new_spte)) {
+-		if (is_large_pte(old_spte))
+-			atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages);
+-		else
+-			atomic64_add(1, (atomic64_t*)&kvm->stat.lpages);
+-	}
+-
+ 	/*
+ 	 * The only times a SPTE should be changed from a non-present to
+ 	 * non-present state is when an MMIO entry is installed/modified/
+@@ -477,6 +471,18 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 		return;
+ 	}
+ 
++	/*
++	 * Update large page stats if a large page is being zapped, created, or
++	 * is replacing an existing shadow page.
++	 */
++	was_large = was_leaf && is_large_pte(old_spte);
++	is_large = is_leaf && is_large_pte(new_spte);
++	if (was_large != is_large) {
++		if (was_large)
++			atomic64_sub(1, (atomic64_t *)&kvm->stat.lpages);
++		else
++			atomic64_add(1, (atomic64_t *)&kvm->stat.lpages);
++	}
+ 
+ 	if (was_leaf && is_dirty_spte(old_spte) &&
+ 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index b3f77d18eb5aa..ac1803dac4357 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2223,12 +2223,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 			 ~PIN_BASED_VMX_PREEMPTION_TIMER);
+ 
+ 	/* Posted interrupts setting is only taken from vmcs12.  */
+-	if (nested_cpu_has_posted_intr(vmcs12)) {
++	vmx->nested.pi_pending = false;
++	if (nested_cpu_has_posted_intr(vmcs12))
+ 		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
+-		vmx->nested.pi_pending = false;
+-	} else {
++	else
+ 		exec_control &= ~PIN_BASED_POSTED_INTR;
+-	}
+ 	pin_controls_set(vmx, exec_control);
+ 
+ 	/*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 927a552393b96..256f8cab4b8b4 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6368,6 +6368,9 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
++	if (vmx->emulation_required)
++		return;
++
+ 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
+ 		handle_external_interrupt_irqoff(vcpu);
+ 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e5d5c5ed7dd43..7ec7c2dce5065 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3316,6 +3316,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			if (!msr_info->host_initiated) {
+ 				s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
+ 				adjust_tsc_offset_guest(vcpu, adj);
++				/* Before returning to the guest, tsc_timestamp must be
++				 * adjusted as well; otherwise the guest's percpu pvclock
++				 * time could jump.
++				 */
++				kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
+ 			}
+ 			vcpu->arch.ia32_tsc_adjust_msr = data;
+ 		}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 7279559185630..673a634eadd9f 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2361,6 +2361,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ 	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_FRONT_MERGE;
+ 	}
+ 
+diff --git a/block/bio.c b/block/bio.c
+index 1fab762e079be..d95e3456ba0c5 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -979,6 +979,14 @@ static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter)
+ 	return 0;
+ }
+ 
++static void bio_put_pages(struct page **pages, size_t size, size_t off)
++{
++	size_t i, nr = DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
++
++	for (i = 0; i < nr; i++)
++		put_page(pages[i]);
++}
++
+ #define PAGE_PTRS_PER_BVEC     (sizeof(struct bio_vec) / sizeof(struct page *))
+ 
+ /**
+@@ -1023,8 +1031,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ 			if (same_page)
+ 				put_page(page);
+ 		} else {
+-			if (WARN_ON_ONCE(bio_full(bio, len)))
+-                                return -EINVAL;
++			if (WARN_ON_ONCE(bio_full(bio, len))) {
++				bio_put_pages(pages + i, left, offset);
++				return -EINVAL;
++			}
+ 			__bio_add_page(bio, page, len, offset);
+ 		}
+ 		offset = 0;
+@@ -1069,6 +1079,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+ 		len = min_t(size_t, PAGE_SIZE - offset, left);
+ 		if (bio_add_hw_page(q, bio, page, len, offset,
+ 				max_append_sectors, &same_page) != len) {
++			bio_put_pages(pages + i, left, offset);
+ 			ret = -EINVAL;
+ 			break;
+ 		}
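
bio_put_pages() releases the page references that were pinned but could not be
added to the bio. The page count is DIV_ROUND_UP(size + (off & ~PAGE_MASK),
PAGE_SIZE): the bytes still to add, plus the offset into the first page,
rounded up to whole pages. A quick userspace check of that arithmetic
(PAGE_SIZE fixed at 4096 here):

	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	/* Pages touched by `size` bytes starting at byte `off` within the
	 * first page -- the references bio_put_pages() must drop. */
	static unsigned long pages_spanned(unsigned long size,
					   unsigned long off)
	{
		return DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
	}

	int main(void)
	{
		/* 1 byte at the last byte of a page pins one page... */
		printf("%lu\n", pages_spanned(1, 4095));	/* 1 */
		/* ...but 2 bytes there straddle into a second page. */
		printf("%lu\n", pages_spanned(2, 4095));	/* 2 */
		printf("%lu\n", pages_spanned(8192, 0));	/* 2 */
		return 0;
	}
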
+diff --git a/block/blk-crypto.c b/block/blk-crypto.c
+index c5bdaafffa29f..103c2e2d50d67 100644
+--- a/block/blk-crypto.c
++++ b/block/blk-crypto.c
+@@ -332,7 +332,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+ 	if (mode->keysize == 0)
+ 		return -EINVAL;
+ 
+-	if (dun_bytes == 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
++	if (dun_bytes == 0 || dun_bytes > mode->ivsize)
+ 		return -EINVAL;
+ 
+ 	if (!is_power_of_2(data_unit_size))
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index a11b3b53717ef..eeba8422ae823 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -348,6 +348,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
+ 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
+ 		submit_bio_noacct(*bio);
+ 		*bio = split;
++
++		blk_throtl_charge_bio_split(*bio);
+ 	}
+ }
+ 
+@@ -705,22 +707,6 @@ static void blk_account_io_merge_request(struct request *req)
+ 	}
+ }
+ 
+-/*
+- * Two cases of handling DISCARD merge:
+- * If max_discard_segments > 1, the driver takes every bio
+- * as a range and send them to controller together. The ranges
+- * needn't to be contiguous.
+- * Otherwise, the bios/requests will be handled as same as
+- * others which should be contiguous.
+- */
+-static inline bool blk_discard_mergable(struct request *req)
+-{
+-	if (req_op(req) == REQ_OP_DISCARD &&
+-	    queue_max_discard_segments(req->q) > 1)
+-		return true;
+-	return false;
+-}
+-
+ static enum elv_merge blk_try_req_merge(struct request *req,
+ 					struct request *next)
+ {
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index b1b22d863bdf8..55c49015e5333 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -178,6 +178,9 @@ struct throtl_grp {
+ 	unsigned int bad_bio_cnt; /* bios exceeding latency threshold */
+ 	unsigned long bio_cnt_reset_time;
+ 
++	atomic_t io_split_cnt[2];
++	atomic_t last_io_split_cnt[2];
++
+ 	struct blkg_rwstat stat_bytes;
+ 	struct blkg_rwstat stat_ios;
+ };
+@@ -777,6 +780,8 @@ static inline void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
+ 	tg->bytes_disp[rw] = 0;
+ 	tg->io_disp[rw] = 0;
+ 
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	/*
+ 	 * Previous slice has expired. We must have trimmed it after last
+ 	 * bio dispatch. That means since start of last slice, we never used
+@@ -799,6 +804,9 @@ static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
+ 	tg->io_disp[rw] = 0;
+ 	tg->slice_start[rw] = jiffies;
+ 	tg->slice_end[rw] = jiffies + tg->td->throtl_slice;
++
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	throtl_log(&tg->service_queue,
+ 		   "[%c] new slice start=%lu end=%lu jiffies=%lu",
+ 		   rw == READ ? 'R' : 'W', tg->slice_start[rw],
+@@ -1031,6 +1039,9 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
+ 				jiffies + tg->td->throtl_slice);
+ 	}
+ 
++	if (iops_limit != UINT_MAX)
++		tg->io_disp[rw] += atomic_xchg(&tg->io_split_cnt[rw], 0);
++
+ 	if (tg_with_in_bps_limit(tg, bio, bps_limit, &bps_wait) &&
+ 	    tg_with_in_iops_limit(tg, bio, iops_limit, &iops_wait)) {
+ 		if (wait)
+@@ -2052,12 +2063,14 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
+ 	}
+ 
+ 	if (tg->iops[READ][LIMIT_LOW]) {
++		tg->last_io_disp[READ] += atomic_xchg(&tg->last_io_split_cnt[READ], 0);
+ 		iops = tg->last_io_disp[READ] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[READ][LIMIT_LOW])
+ 			tg->last_low_overflow_time[READ] = now;
+ 	}
+ 
+ 	if (tg->iops[WRITE][LIMIT_LOW]) {
++		tg->last_io_disp[WRITE] += atomic_xchg(&tg->last_io_split_cnt[WRITE], 0);
+ 		iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[WRITE][LIMIT_LOW])
+ 			tg->last_low_overflow_time[WRITE] = now;
+@@ -2176,6 +2189,25 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
+ }
+ #endif
+ 
++void blk_throtl_charge_bio_split(struct bio *bio)
++{
++	struct blkcg_gq *blkg = bio->bi_blkg;
++	struct throtl_grp *parent = blkg_to_tg(blkg);
++	struct throtl_service_queue *parent_sq;
++	bool rw = bio_data_dir(bio);
++
++	do {
++		if (!parent->has_rules[rw])
++			break;
++
++		atomic_inc(&parent->io_split_cnt[rw]);
++		atomic_inc(&parent->last_io_split_cnt[rw]);
++
++		parent_sq = parent->service_queue.parent_sq;
++		parent = sq_to_tg(parent_sq);
++	} while (parent);
++}
++
+ bool blk_throtl_bio(struct bio *bio)
+ {
+ 	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+diff --git a/block/blk.h b/block/blk.h
+index cb01429c162c6..f10cc9b2c27f7 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -289,11 +289,13 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node);
+ extern int blk_throtl_init(struct request_queue *q);
+ extern void blk_throtl_exit(struct request_queue *q);
+ extern void blk_throtl_register_queue(struct request_queue *q);
++extern void blk_throtl_charge_bio_split(struct bio *bio);
+ bool blk_throtl_bio(struct bio *bio);
+ #else /* CONFIG_BLK_DEV_THROTTLING */
+ static inline int blk_throtl_init(struct request_queue *q) { return 0; }
+ static inline void blk_throtl_exit(struct request_queue *q) { }
+ static inline void blk_throtl_register_queue(struct request_queue *q) { }
++static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
+ static inline bool blk_throtl_bio(struct bio *bio) { return false; }
+ #endif /* CONFIG_BLK_DEV_THROTTLING */
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+diff --git a/block/elevator.c b/block/elevator.c
+index 52ada14cfe452..a5fe2615ec0f1 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -336,6 +336,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
+ 	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_BACK_MERGE;
+ 	}
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 36920670dccc3..3c3693c34f061 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -629,6 +629,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
+ 
+ 		if (elv_bio_merge_ok(__rq, bio)) {
+ 			*rq = __rq;
++			if (blk_discard_mergable(__rq))
++				return ELEVATOR_DISCARD_MERGE;
+ 			return ELEVATOR_FRONT_MERGE;
+ 		}
+ 	}
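
bfq, the legacy elevator hash lookup, and mq-deadline all gain the same guard
above: if the request found for merging is discard-mergeable, report
ELEVATOR_DISCARD_MERGE instead of a front/back merge, because drivers with
max_discard_segments > 1 take each bio as a separate range and the ranges need
not be contiguous. The classification order, sketched with illustrative types:

	#include <stdio.h>
	#include <stdbool.h>

	enum merge { MERGE_NONE, MERGE_FRONT, MERGE_DISCARD };

	struct req {
		bool is_discard;
		int max_discard_segments;	/* from the queue limits */
	};

	/* Mirrors blk_discard_mergable(): drivers accepting >1 discard
	 * range can merge non-contiguous discards as extra segments. */
	static bool discard_mergeable(const struct req *r)
	{
		return r->is_discard && r->max_discard_segments > 1;
	}

	static enum merge classify(const struct req *r, bool adjacent)
	{
		if (discard_mergeable(r))
			return MERGE_DISCARD;	/* checked first, as above */
		return adjacent ? MERGE_FRONT : MERGE_NONE;
	}

	int main(void)
	{
		struct req r = { .is_discard = true,
				 .max_discard_segments = 4 };

		printf("%d\n", classify(&r, false)); /* 2: discard merge */
		return 0;
	}
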
+diff --git a/certs/Makefile b/certs/Makefile
+index 359239a0ee9e3..f9344e52ecdae 100644
+--- a/certs/Makefile
++++ b/certs/Makefile
+@@ -57,11 +57,19 @@ endif
+ redirect_openssl	= 2>&1
+ quiet_redirect_openssl	= 2>&1
+ silent_redirect_openssl = 2>/dev/null
++openssl_available       = $(shell openssl help 2>/dev/null && echo yes)
+ 
+ # We do it this way rather than having a boolean option for enabling an
+ # external private key, because 'make randconfig' might enable such a
+ # boolean option and we unfortunately can't make it depend on !RANDCONFIG.
+ ifeq ($(CONFIG_MODULE_SIG_KEY),"certs/signing_key.pem")
++
++ifeq ($(openssl_available),yes)
++X509TEXT=$(shell openssl x509 -in "certs/signing_key.pem" -text 2>/dev/null)
++
++$(if $(findstring rsaEncryption,$(X509TEXT)),,$(shell rm -f "certs/signing_key.pem"))
++endif
++
+ $(obj)/signing_key.pem: $(obj)/x509.genkey
+ 	@$(kecho) "###"
+ 	@$(kecho) "### Now generating an X.509 key pair to be used for signing modules."
+diff --git a/crypto/ecc.h b/crypto/ecc.h
+index a006132646a43..1350e8eb6ac23 100644
+--- a/crypto/ecc.h
++++ b/crypto/ecc.h
+@@ -27,6 +27,7 @@
+ #define _CRYPTO_ECC_H
+ 
+ #include <crypto/ecc_curve.h>
++#include <asm/unaligned.h>
+ 
+ /* One digit is u64 qword. */
+ #define ECC_CURVE_NIST_P192_DIGITS  3
+@@ -46,13 +47,13 @@
+  * @out:      Output array
+  * @ndigits:  Number of digits to copy
+  */
+-static inline void ecc_swap_digits(const u64 *in, u64 *out, unsigned int ndigits)
++static inline void ecc_swap_digits(const void *in, u64 *out, unsigned int ndigits)
+ {
+ 	const __be64 *src = (__force __be64 *)in;
+ 	int i;
+ 
+ 	for (i = 0; i < ndigits; i++)
+-		out[i] = be64_to_cpu(src[ndigits - 1 - i]);
++		out[i] = get_unaligned_be64(&src[ndigits - 1 - i]);
+ }
+ 
+ /**
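
ecc_swap_digits() now accepts a const void * and reads each digit with
get_unaligned_be64(), since the key material may sit at any alignment; casting
to __be64 * and dereferencing, as before, is undefined behaviour on
strict-alignment CPUs. A portable userspace equivalent of the unaligned
big-endian load (load_be64() is a stand-in, not the kernel helper):

	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>

	/* Portable stand-in for the kernel's get_unaligned_be64(). */
	static uint64_t load_be64(const void *p)
	{
		uint8_t b[8];
		uint64_t v = 0;

		memcpy(b, p, 8);	/* safe at any alignment */
		for (int i = 0; i < 8; i++)
			v = (v << 8) | b[i];
		return v;
	}

	int main(void)
	{
		/* Deliberately misaligned: the digit starts at offset 1. */
		uint8_t buf[9] = { 0xff, 0, 0, 0, 0, 0, 0, 0x12, 0x34 };

		printf("%#llx\n",
		       (unsigned long long)load_be64(buf + 1)); /* 0x1234 */
		return 0;
	}
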
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index f8d06da78e4f3..6863e57b088d5 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -290,6 +290,11 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
+ 	}
+ 
+ 	ret = crypto_aead_setauthsize(tfm, authsize);
++	if (ret) {
++		pr_err("alg: aead: Failed to setauthsize for %s: %d\n", algo,
++		       ret);
++		goto out_free_tfm;
++	}
+ 
+ 	for (i = 0; i < num_mb; ++i)
+ 		if (testmgr_alloc_buf(data[i].xbuf)) {
+@@ -315,7 +320,7 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
+ 	for (i = 0; i < num_mb; ++i) {
+ 		data[i].req = aead_request_alloc(tfm, GFP_KERNEL);
+ 		if (!data[i].req) {
+-			pr_err("alg: skcipher: Failed to allocate request for %s\n",
++			pr_err("alg: aead: Failed to allocate request for %s\n",
+ 			       algo);
+ 			while (i--)
+ 				aead_request_free(data[i].req);
+@@ -567,13 +572,19 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 	sgout = &sg[9];
+ 
+ 	tfm = crypto_alloc_aead(algo, 0, 0);
+-
+ 	if (IS_ERR(tfm)) {
+ 		pr_err("alg: aead: Failed to load transform for %s: %ld\n", algo,
+ 		       PTR_ERR(tfm));
+ 		goto out_notfm;
+ 	}
+ 
++	ret = crypto_aead_setauthsize(tfm, authsize);
++	if (ret) {
++		pr_err("alg: aead: Failed to setauthsize for %s: %d\n", algo,
++		       ret);
++		goto out_noreq;
++	}
++
+ 	crypto_init_wait(&wait);
+ 	printk(KERN_INFO "\ntesting speed of %s (%s) %s\n", algo,
+ 			get_driver_name(crypto_aead, tfm), e);
+@@ -611,8 +622,13 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 					break;
+ 				}
+ 			}
++
+ 			ret = crypto_aead_setkey(tfm, key, *keysize);
+-			ret = crypto_aead_setauthsize(tfm, authsize);
++			if (ret) {
++				pr_err("setkey() failed flags=%x: %d\n",
++					crypto_aead_get_flags(tfm), ret);
++				goto out;
++			}
+ 
+ 			iv_len = crypto_aead_ivsize(tfm);
+ 			if (iv_len)
+@@ -622,15 +638,8 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 			printk(KERN_INFO "test %u (%d bit key, %d byte blocks): ",
+ 					i, *keysize * 8, bs);
+ 
+-
+ 			memset(tvmem[0], 0xff, PAGE_SIZE);
+ 
+-			if (ret) {
+-				pr_err("setkey() failed flags=%x\n",
+-						crypto_aead_get_flags(tfm));
+-				goto out;
+-			}
+-
+ 			sg_init_aead(sg, xbuf, bs + (enc ? 0 : authsize),
+ 				     assoc, aad_size);
+ 
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index 1f6007abcf18e..89c22bc550570 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -288,10 +288,18 @@ invalid_guid:
+ 
+ void __init init_prmt(void)
+ {
++	struct acpi_table_header *tbl;
+ 	acpi_status status;
+-	int mc = acpi_table_parse_entries(ACPI_SIG_PRMT, sizeof(struct acpi_table_prmt) +
++	int mc;
++
++	status = acpi_get_table(ACPI_SIG_PRMT, 0, &tbl);
++	if (ACPI_FAILURE(status))
++		return;
++
++	mc = acpi_table_parse_entries(ACPI_SIG_PRMT, sizeof(struct acpi_table_prmt) +
+ 					  sizeof (struct acpi_table_prmt_header),
+ 					  0, acpi_parse_prmt, 0);
++	acpi_put_table(tbl);
+ 	/*
+ 	 * Return immediately if PRMT table is not present or no PRM module found.
+ 	 */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 61c762961ca8e..44f434acfce08 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5573,7 +5573,7 @@ int ata_host_start(struct ata_host *host)
+ 			have_stop = 1;
+ 	}
+ 
+-	if (host->ops->host_stop)
++	if (host->ops && host->ops->host_stop)
+ 		have_stop = 1;
+ 
+ 	if (have_stop) {
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 2e5e7c9939334..8b2a0eb3f32a4 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -323,8 +323,8 @@ static int hd44780_remove(struct platform_device *pdev)
+ {
+ 	struct charlcd *lcd = platform_get_drvdata(pdev);
+ 
+-	kfree(lcd->drvdata);
+ 	charlcd_unregister(lcd);
++	kfree(lcd->drvdata);
+ 
+ 	kfree(lcd);
+ 	return 0;
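
The hd44780 fix above is purely an ordering one: unregister the charlcd first,
free lcd->drvdata second, because while the device is registered its callbacks
may still dereference drvdata. The rule, tear down in reverse order of
visibility, in miniature (lcd_unregister() stands in for charlcd_unregister()):

	#include <stdlib.h>

	struct lcd { void *drvdata; };

	/* Stand-in for charlcd_unregister(): while the device is
	 * registered, callbacks may still dereference lcd->drvdata. */
	static void lcd_unregister(struct lcd *lcd) { (void)lcd; }

	static void remove_device(struct lcd *lcd)
	{
		lcd_unregister(lcd);	/* 1: stop every user of drvdata */
		free(lcd->drvdata);	/* 2: only now is it unreachable */
		free(lcd);
	}

	int main(void)
	{
		struct lcd *lcd = malloc(sizeof(*lcd));

		lcd->drvdata = malloc(8);
		remove_device(lcd);
		return 0;
	}
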
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 437cd61343b26..68ea1f949daa9 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -580,7 +580,8 @@ re_probe:
+ 			goto probe_failed;
+ 	}
+ 
+-	if (driver_sysfs_add(dev)) {
++	ret = driver_sysfs_add(dev);
++	if (ret) {
+ 		pr_err("%s: driver_sysfs_add(%s) failed\n",
+ 		       __func__, dev_name(dev));
+ 		goto probe_failed;
+@@ -602,15 +603,18 @@ re_probe:
+ 		goto probe_failed;
+ 	}
+ 
+-	if (device_add_groups(dev, drv->dev_groups)) {
++	ret = device_add_groups(dev, drv->dev_groups);
++	if (ret) {
+ 		dev_err(dev, "device_add_groups() failed\n");
+ 		goto dev_groups_failed;
+ 	}
+ 
+-	if (dev_has_sync_state(dev) &&
+-	    device_create_file(dev, &dev_attr_state_synced)) {
+-		dev_err(dev, "state_synced sysfs add failed\n");
+-		goto dev_sysfs_state_synced_failed;
++	if (dev_has_sync_state(dev)) {
++		ret = device_create_file(dev, &dev_attr_state_synced);
++		if (ret) {
++			dev_err(dev, "state_synced sysfs add failed\n");
++			goto dev_sysfs_state_synced_failed;
++		}
+ 	}
+ 
+ 	if (test_remove) {
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 68c549d712304..bdbedc6660a87 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -165,7 +165,7 @@ static inline int fw_state_wait(struct fw_priv *fw_priv)
+ 	return __fw_state_wait_common(fw_priv, MAX_SCHEDULE_TIMEOUT);
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name);
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv);
+ 
+ static struct fw_priv *__allocate_fw_priv(const char *fw_name,
+ 					  struct firmware_cache *fwc,
+@@ -707,10 +707,8 @@ int assign_fw(struct firmware *fw, struct device *device)
+ 	 * on request firmware.
+ 	 */
+ 	if (!(fw_priv->opt_flags & FW_OPT_NOCACHE) &&
+-	    fw_priv->fwc->state == FW_LOADER_START_CACHE) {
+-		if (fw_cache_piggyback_on_request(fw_priv->fw_name))
+-			kref_get(&fw_priv->ref);
+-	}
++	    fw_priv->fwc->state == FW_LOADER_START_CACHE)
++		fw_cache_piggyback_on_request(fw_priv);
+ 
+ 	/* pass the pages buffer to driver at the last minute */
+ 	fw_set_page_data(fw_priv, fw);
+@@ -1259,11 +1257,11 @@ static int __fw_entry_found(const char *name)
+ 	return 0;
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	struct firmware_cache *fwc = &fw_cache;
++	const char *name = fw_priv->fw_name;
++	struct firmware_cache *fwc = fw_priv->fwc;
+ 	struct fw_cache_entry *fce;
+-	int ret = 0;
+ 
+ 	spin_lock(&fwc->name_lock);
+ 	if (__fw_entry_found(name))
+@@ -1271,13 +1269,12 @@ static int fw_cache_piggyback_on_request(const char *name)
+ 
+ 	fce = alloc_fw_cache_entry(name);
+ 	if (fce) {
+-		ret = 1;
+ 		list_add(&fce->list, &fwc->fw_names);
++		kref_get(&fw_priv->ref);
+ 		pr_debug("%s: fw: %s\n", __func__, name);
+ 	}
+ found:
+ 	spin_unlock(&fwc->name_lock);
+-	return ret;
+ }
+ 
+ static void free_fw_cache_entry(struct fw_cache_entry *fce)
+@@ -1508,9 +1505,8 @@ static inline void unregister_fw_pm_ops(void)
+ 	unregister_pm_notifier(&fw_cache.pm_notify);
+ }
+ #else
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	return 0;
+ }
+ static inline int register_fw_pm_ops(void)
+ {
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index fe3e38dd5324f..2fc826e97591e 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1667,7 +1667,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 			if (ret) {
+ 				dev_err(map->dev,
+ 					"Error in caching of register: %x ret: %d\n",
+-					reg + i, ret);
++					reg + regmap_get_offset(map, i), ret);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
+index 6535614a7dc13..1df2b5801c3bc 100644
+--- a/drivers/bcma/main.c
++++ b/drivers/bcma/main.c
+@@ -236,6 +236,7 @@ EXPORT_SYMBOL(bcma_core_irq);
+ 
+ void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
++	device_initialize(&core->dev);
+ 	core->dev.release = bcma_release_core_dev;
+ 	core->dev.bus = &bcma_bus_type;
+ 	dev_set_name(&core->dev, "bcma%d:%d", bus->num, core->core_index);
+@@ -277,11 +278,10 @@ static void bcma_register_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
+ 	int err;
+ 
+-	err = device_register(&core->dev);
++	err = device_add(&core->dev);
+ 	if (err) {
+ 		bcma_err(bus, "Could not register dev for core 0x%03X\n",
+ 			 core->id.id);
+-		put_device(&core->dev);
+ 		return;
+ 	}
+ 	core->dev_registered = true;
+@@ -372,7 +372,7 @@ void bcma_unregister_cores(struct bcma_bus *bus)
+ 	/* Now no one uses internally-handled cores, we can free them */
+ 	list_for_each_entry_safe(core, tmp, &bus->cores, list) {
+ 		list_del(&core->list);
+-		kfree(core);
++		put_device(&core->dev);
+ 	}
+ }
+ 
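
The bcma rework splits device_register() into device_initialize() in
bcma_prepare_core() plus device_add() in bcma_register_core(), and retires
cores with put_device() rather than kfree(), so the release callback frees the
core exactly once whether or not device_add() ever ran or succeeded. A toy
refcounted lifecycle with generic names (obj_* is not the driver-core API):

	#include <stdio.h>
	#include <stdlib.h>

	struct obj {
		int refs;
		void (*release)(struct obj *);
	};

	static void obj_init(struct obj *o, void (*rel)(struct obj *))
	{
		o->refs = 1;		/* like device_initialize() */
		o->release = rel;
	}

	static void obj_put(struct obj *o)
	{
		if (--o->refs == 0)
			o->release(o);	/* one teardown path, added or not */
	}

	static void obj_release(struct obj *o)
	{
		printf("released\n");
		free(o);
	}

	int main(void)
	{
		struct obj *o = malloc(sizeof(*o));

		obj_init(o, obj_release);
		/* Whether or not a later "add" step succeeded... */
		obj_put(o);		/* ...put, never a bare free() */
		return 0;
	}
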
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 19f5d5a8b16a3..93708b1938e80 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -49,6 +49,7 @@
+ 
+ static DEFINE_IDR(nbd_index_idr);
+ static DEFINE_MUTEX(nbd_index_mutex);
++static struct workqueue_struct *nbd_del_wq;
+ static int nbd_total_devices = 0;
+ 
+ struct nbd_sock {
+@@ -113,6 +114,7 @@ struct nbd_device {
+ 	struct mutex config_lock;
+ 	struct gendisk *disk;
+ 	struct workqueue_struct *recv_workq;
++	struct work_struct remove_work;
+ 
+ 	struct list_head list;
+ 	struct task_struct *task_recv;
+@@ -233,7 +235,7 @@ static const struct device_attribute backend_attr = {
+ 	.show = backend_show,
+ };
+ 
+-static void nbd_dev_remove(struct nbd_device *nbd)
++static void nbd_del_disk(struct nbd_device *nbd)
+ {
+ 	struct gendisk *disk = nbd->disk;
+ 
+@@ -242,24 +244,60 @@ static void nbd_dev_remove(struct nbd_device *nbd)
+ 		blk_cleanup_disk(disk);
+ 		blk_mq_free_tag_set(&nbd->tag_set);
+ 	}
++}
++
++/*
++ * Place this last, just before the nbd is freed, to make
++ * sure that the disk and the related kobject are also
++ * totally removed, avoiding duplicate creation of the same
++ * one.
++ */
++static void nbd_notify_destroy_completion(struct nbd_device *nbd)
++{
++	if (test_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags) &&
++	    nbd->destroy_complete)
++		complete(nbd->destroy_complete);
++}
+ 
++static void nbd_dev_remove_work(struct work_struct *work)
++{
++	struct nbd_device *nbd =
++		container_of(work, struct nbd_device, remove_work);
++
++	nbd_del_disk(nbd);
++
++	mutex_lock(&nbd_index_mutex);
+ 	/*
+-	 * Place this in the last just before the nbd is freed to
+-	 * make sure that the disk and the related kobject are also
+-	 * totally removed to avoid duplicate creation of the same
+-	 * one.
++	 * Remove from idr after del_gendisk() completes,
++	 * so if the same id is reused, the following
++	 * add_disk() will succeed.
+ 	 */
+-	if (test_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags) && nbd->destroy_complete)
+-		complete(nbd->destroy_complete);
++	idr_remove(&nbd_index_idr, nbd->index);
++
++	nbd_notify_destroy_completion(nbd);
++	mutex_unlock(&nbd_index_mutex);
+ 
+ 	kfree(nbd);
+ }
+ 
++static void nbd_dev_remove(struct nbd_device *nbd)
++{
++	/* Call del_gendisk() asynchronously to prevent deadlock */
++	if (test_bit(NBD_DESTROY_ON_DISCONNECT, &nbd->flags)) {
++		queue_work(nbd_del_wq, &nbd->remove_work);
++		return;
++	}
++
++	nbd_del_disk(nbd);
++	idr_remove(&nbd_index_idr, nbd->index);
++	nbd_notify_destroy_completion(nbd);
++	kfree(nbd);
++}
++
+ static void nbd_put(struct nbd_device *nbd)
+ {
+ 	if (refcount_dec_and_mutex_lock(&nbd->refs,
+ 					&nbd_index_mutex)) {
+-		idr_remove(&nbd_index_idr, nbd->index);
+ 		nbd_dev_remove(nbd);
+ 		mutex_unlock(&nbd_index_mutex);
+ 	}
+@@ -1388,6 +1426,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 		       unsigned int cmd, unsigned long arg)
+ {
+ 	struct nbd_config *config = nbd->config;
++	loff_t bytesize;
+ 
+ 	switch (cmd) {
+ 	case NBD_DISCONNECT:
+@@ -1402,8 +1441,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_SIZE:
+ 		return nbd_set_size(nbd, arg, config->blksize);
+ 	case NBD_SET_SIZE_BLOCKS:
+-		return nbd_set_size(nbd, arg * config->blksize,
+-				    config->blksize);
++		if (check_mul_overflow((loff_t)arg, config->blksize, &bytesize))
++			return -EINVAL;
++		return nbd_set_size(nbd, bytesize, config->blksize);
+ 	case NBD_SET_TIMEOUT:
+ 		nbd_set_cmd_timeout(nbd, arg);
+ 		return 0;
+@@ -1683,6 +1723,7 @@ static int nbd_dev_add(int index)
+ 	nbd->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
+ 		BLK_MQ_F_BLOCKING;
+ 	nbd->tag_set.driver_data = nbd;
++	INIT_WORK(&nbd->remove_work, nbd_dev_remove_work);
+ 	nbd->destroy_complete = NULL;
+ 	nbd->backend = NULL;
+ 
+@@ -1729,7 +1770,17 @@ static int nbd_dev_add(int index)
+ 	refcount_set(&nbd->refs, 1);
+ 	INIT_LIST_HEAD(&nbd->list);
+ 	disk->major = NBD_MAJOR;
++
++	/* A too-large first_minor can cause duplicate creation of
++	 * sysfs files/links, since first_minor will be truncated to
++	 * a byte in __device_add_disk().
++	 */
+ 	disk->first_minor = index << part_shift;
++	if (disk->first_minor > 0xff) {
++		err = -EINVAL;
++		goto out_free_idr;
++	}
++
+ 	disk->minors = 1 << part_shift;
+ 	disk->fops = &nbd_fops;
+ 	disk->private_data = nbd;
+@@ -2424,7 +2475,14 @@ static int __init nbd_init(void)
+ 	if (register_blkdev(NBD_MAJOR, "nbd"))
+ 		return -EIO;
+ 
++	nbd_del_wq = alloc_workqueue("nbd-del", WQ_UNBOUND, 0);
++	if (!nbd_del_wq) {
++		unregister_blkdev(NBD_MAJOR, "nbd");
++		return -ENOMEM;
++	}
++
+ 	if (genl_register_family(&nbd_genl_family)) {
++		destroy_workqueue(nbd_del_wq);
+ 		unregister_blkdev(NBD_MAJOR, "nbd");
+ 		return -EINVAL;
+ 	}
+@@ -2442,7 +2500,10 @@ static int nbd_exit_cb(int id, void *ptr, void *data)
+ 	struct list_head *list = (struct list_head *)data;
+ 	struct nbd_device *nbd = ptr;
+ 
+-	list_add_tail(&nbd->list, list);
++	/* Skip nbd that is being removed asynchronously */
++	if (refcount_read(&nbd->refs))
++		list_add_tail(&nbd->list, list);
++
+ 	return 0;
+ }
+ 
+@@ -2465,6 +2526,9 @@ static void __exit nbd_cleanup(void)
+ 		nbd_put(nbd);
+ 	}
+ 
++	/* Also wait for nbd_dev_remove_work() to complete */
++	destroy_workqueue(nbd_del_wq);
++
+ 	idr_destroy(&nbd_index_idr);
+ 	genl_unregister_family(&nbd_genl_family);
+ 	unregister_blkdev(NBD_MAJOR, "nbd");
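
One detail worth calling out from the nbd diff: NBD_SET_SIZE_BLOCKS now rejects multiplications that would overflow loff_t instead of silently wrapping. A short sketch of the check_mul_overflow() usage, with illustrative names:

/* check_mul_overflow() comes from <linux/overflow.h> and returns
 * true if the product would overflow the destination type. */
#include <linux/overflow.h>
#include <linux/types.h>
#include <linux/errno.h>

static int example_set_size_blocks(loff_t blksize, unsigned long nr_blocks,
				   loff_t *bytesize)
{
	/* bytesize = nr_blocks * blksize, but fail cleanly rather
	 * than silently wrapping on hostile inputs. */
	if (check_mul_overflow((loff_t)nr_blocks, blksize, bytesize))
		return -EINVAL;

	return 0;
}
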
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 0255bf243ce55..bd37d6fb88c26 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2921,10 +2921,11 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
+ 	/* Read the Intel supported features and if new exception formats
+ 	 * supported, need to load the additional DDC config to enable.
+ 	 */
+-	btintel_read_debug_features(hdev, &features);
+-
+-	/* Set DDC mask for available debug features */
+-	btintel_set_debug_features(hdev, &features);
++	err = btintel_read_debug_features(hdev, &features);
++	if (!err) {
++		/* Set DDC mask for available debug features */
++		btintel_set_debug_features(hdev, &features);
++	}
+ 
+ 	/* Read the Intel version information after loading the FW  */
+ 	err = btintel_read_version(hdev, &ver);
+@@ -3017,10 +3018,11 @@ static int btusb_setup_intel_newgen(struct hci_dev *hdev)
+ 	/* Read the Intel supported features and if new exception formats
+ 	 * supported, need to load the additional DDC config to enable.
+ 	 */
+-	btintel_read_debug_features(hdev, &features);
+-
+-	/* Set DDC mask for available debug features */
+-	btintel_set_debug_features(hdev, &features);
++	err = btintel_read_debug_features(hdev, &features);
++	if (!err) {
++		/* Set DDC mask for available debug features */
++		btintel_set_debug_features(hdev, &features);
++	}
+ 
+ 	/* Read the Intel version information after loading the FW  */
+ 	err = btintel_read_version_tlv(hdev, &version);
+diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
+index 4308f9ca7a43d..d6ba644f6b00a 100644
+--- a/drivers/char/tpm/Kconfig
++++ b/drivers/char/tpm/Kconfig
+@@ -89,7 +89,6 @@ config TCG_TIS_SYNQUACER
+ config TCG_TIS_I2C_CR50
+ 	tristate "TPM Interface Specification 2.0 Interface (I2C - CR50)"
+ 	depends on I2C
+-	select TCG_CR50
+ 	help
+ 	  This is a driver for the Google cr50 I2C TPM interface which is a
+ 	  custom microcontroller and requires a custom i2c protocol interface
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 903604769de99..3af4c07a9342f 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -106,17 +106,12 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+ 	u16 len;
+-	int sig;
+ 
+ 	if (!ibmvtpm->rtce_buf) {
+ 		dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
+ 		return 0;
+ 	}
+ 
+-	sig = wait_event_interruptible(ibmvtpm->wq, !ibmvtpm->tpm_processing_cmd);
+-	if (sig)
+-		return -EINTR;
+-
+ 	len = ibmvtpm->res_len;
+ 
+ 	if (count < len) {
+@@ -237,7 +232,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	 * set the processing flag before the Hcall, since we may get the
+ 	 * result (interrupt) before even being able to check rc.
+ 	 */
+-	ibmvtpm->tpm_processing_cmd = true;
++	ibmvtpm->tpm_processing_cmd = 1;
+ 
+ again:
+ 	rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+@@ -255,7 +250,7 @@ again:
+ 			goto again;
+ 		}
+ 		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+-		ibmvtpm->tpm_processing_cmd = false;
++		ibmvtpm->tpm_processing_cmd = 0;
+ 	}
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+@@ -269,7 +264,9 @@ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
+ 
+ static u8 tpm_ibmvtpm_status(struct tpm_chip *chip)
+ {
+-	return 0;
++	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++
++	return ibmvtpm->tpm_processing_cmd;
+ }
+ 
+ /**
+@@ -457,7 +454,7 @@ static const struct tpm_class_ops tpm_ibmvtpm = {
+ 	.send = tpm_ibmvtpm_send,
+ 	.cancel = tpm_ibmvtpm_cancel,
+ 	.status = tpm_ibmvtpm_status,
+-	.req_complete_mask = 0,
++	.req_complete_mask = 1,
+ 	.req_complete_val = 0,
+ 	.req_canceled = tpm_ibmvtpm_req_canceled,
+ };
+@@ -550,7 +547,7 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
+ 		case VTPM_TPM_COMMAND_RES:
+ 			/* len of the data in rtce buffer */
+ 			ibmvtpm->res_len = be16_to_cpu(crq->len);
+-			ibmvtpm->tpm_processing_cmd = false;
++			ibmvtpm->tpm_processing_cmd = 0;
+ 			wake_up_interruptible(&ibmvtpm->wq);
+ 			return;
+ 		default:
+@@ -688,8 +685,15 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 		goto init_irq_cleanup;
+ 	}
+ 
+-	if (!strcmp(id->compat, "IBM,vtpm20")) {
++
++	if (!strcmp(id->compat, "IBM,vtpm20"))
+ 		chip->flags |= TPM_CHIP_FLAG_TPM2;
++
++	rc = tpm_get_timeouts(chip);
++	if (rc)
++		goto init_irq_cleanup;
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ 		rc = tpm2_get_cc_attrs_tbl(chip);
+ 		if (rc)
+ 			goto init_irq_cleanup;
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.h b/drivers/char/tpm/tpm_ibmvtpm.h
+index b92aa7d3e93e7..51198b137461e 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.h
++++ b/drivers/char/tpm/tpm_ibmvtpm.h
+@@ -41,7 +41,7 @@ struct ibmvtpm_dev {
+ 	wait_queue_head_t wq;
+ 	u16 res_len;
+ 	u32 vtpm_version;
+-	bool tpm_processing_cmd;
++	u8 tpm_processing_cmd;
+ };
+ 
+ #define CRQ_RES_BUF_SIZE	PAGE_SIZE
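
The ibmvtpm change turns tpm_processing_cmd into a u8 so the ->status() callback can return it directly, making the core's completion test (status & req_complete_mask) == req_complete_val (with mask 1, val 0) meaningful. A rough sketch of that polling contract, with made-up names:

/* The core repeatedly calls ->status() and compares against the
 * driver's mask/value pair; everything here is illustrative. */
#include <linux/types.h>

struct example_chip_ops {
	u8 (*status)(void *chip);
	u8 req_complete_mask;	/* which status bits matter */
	u8 req_complete_val;	/* their value when a command is done */
};

static bool example_cmd_complete(const struct example_chip_ops *ops,
				 void *chip)
{
	/* With mask == 1 and val == 0, a driver that returns its
	 * "processing" flag from ->status() reports completion
	 * exactly when that flag has dropped back to 0. */
	return (ops->status(chip) & ops->req_complete_mask) ==
	       ops->req_complete_val;
}
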
+diff --git a/drivers/clk/mvebu/kirkwood.c b/drivers/clk/mvebu/kirkwood.c
+index 47680237d0beb..8bc893df47364 100644
+--- a/drivers/clk/mvebu/kirkwood.c
++++ b/drivers/clk/mvebu/kirkwood.c
+@@ -265,6 +265,7 @@ static const char *powersave_parents[] = {
+ static const struct clk_muxing_soc_desc kirkwood_mux_desc[] __initconst = {
+ 	{ "powersave", powersave_parents, ARRAY_SIZE(powersave_parents),
+ 		11, 1, 0 },
++	{ }
+ };
+ 
+ static struct clk *clk_muxing_get_src(
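
The kirkwood fix appends an empty { } entry to the mux table: consumers walk such tables until they hit an all-zero sentinel, so a missing terminator means reading past the end of the array. A standalone sketch:

/* Sentinel-terminated table idiom. */
#include <stdio.h>

struct mux_desc {
	const char *name;
	int shift;
};

static const struct mux_desc muxes[] = {
	{ "powersave", 11 },
	{ }			/* sentinel: terminates the walk */
};

int main(void)
{
	const struct mux_desc *d;

	for (d = muxes; d->name; d++)	/* stops at the sentinel */
		printf("%s @ bit %d\n", d->name, d->shift);
	return 0;
}
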
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index d7ed99f0001f8..dd0956ad969c1 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -579,7 +579,8 @@ static int sh_cmt_start(struct sh_cmt_channel *ch, unsigned long flag)
+ 	ch->flags |= flag;
+ 
+ 	/* setup timeout if no clockevent */
+-	if ((flag == FLAG_CLOCKSOURCE) && (!(ch->flags & FLAG_CLOCKEVENT)))
++	if (ch->cmt->num_channels == 1 &&
++	    flag == FLAG_CLOCKSOURCE && (!(ch->flags & FLAG_CLOCKEVENT)))
+ 		__sh_cmt_set_next(ch, ch->max_match_value);
+  out:
+ 	raw_spin_unlock_irqrestore(&ch->lock, flags);
+@@ -621,20 +622,25 @@ static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
+ static u64 sh_cmt_clocksource_read(struct clocksource *cs)
+ {
+ 	struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
+-	unsigned long flags;
+ 	u32 has_wrapped;
+-	u64 value;
+-	u32 raw;
+ 
+-	raw_spin_lock_irqsave(&ch->lock, flags);
+-	value = ch->total_cycles;
+-	raw = sh_cmt_get_counter(ch, &has_wrapped);
++	if (ch->cmt->num_channels == 1) {
++		unsigned long flags;
++		u64 value;
++		u32 raw;
+ 
+-	if (unlikely(has_wrapped))
+-		raw += ch->match_value + 1;
+-	raw_spin_unlock_irqrestore(&ch->lock, flags);
++		raw_spin_lock_irqsave(&ch->lock, flags);
++		value = ch->total_cycles;
++		raw = sh_cmt_get_counter(ch, &has_wrapped);
++
++		if (unlikely(has_wrapped))
++			raw += ch->match_value + 1;
++		raw_spin_unlock_irqrestore(&ch->lock, flags);
++
++		return value + raw;
++	}
+ 
+-	return value + raw;
++	return sh_cmt_get_counter(ch, &has_wrapped);
+ }
+ 
+ static int sh_cmt_clocksource_enable(struct clocksource *cs)
+@@ -697,7 +703,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
+ 	cs->disable = sh_cmt_clocksource_disable;
+ 	cs->suspend = sh_cmt_clocksource_suspend;
+ 	cs->resume = sh_cmt_clocksource_resume;
+-	cs->mask = CLOCKSOURCE_MASK(sizeof(u64) * 8);
++	cs->mask = CLOCKSOURCE_MASK(ch->cmt->info->width);
+ 	cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
+ 
+ 	dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 09a9a77cce06b..81f9642777fb8 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -715,12 +715,13 @@ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
+ 	case 1:
+ 	case 3:
+ 		quad8_preset_register_set(priv, count->id, ceiling);
+-		break;
++		mutex_unlock(&priv->lock);
++		return len;
+ 	}
+ 
+ 	mutex_unlock(&priv->lock);
+ 
+-	return len;
++	return -EINVAL;
+ }
+ 
+ static ssize_t quad8_count_preset_enable_read(struct counter_device *counter,
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 018415b9840a9..d97cf02b1df75 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -157,11 +157,6 @@ struct sec_ctx {
+ 	struct device *dev;
+ };
+ 
+-enum sec_endian {
+-	SEC_LE = 0,
+-	SEC_32BE,
+-	SEC_64BE
+-};
+ 
+ enum sec_debug_file_index {
+ 	SEC_CLEAR_ENABLE,
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 490db7bccf619..a0cc46b649a39 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -312,31 +312,20 @@ static const struct pci_device_id sec_dev_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(pci, sec_dev_ids);
+ 
+-static u8 sec_get_endian(struct hisi_qm *qm)
++static void sec_set_endian(struct hisi_qm *qm)
+ {
+ 	u32 reg;
+ 
+-	/*
+-	 * As for VF, it is a wrong way to get endian setting by
+-	 * reading a register of the engine
+-	 */
+-	if (qm->pdev->is_virtfn) {
+-		dev_err_ratelimited(&qm->pdev->dev,
+-				    "cannot access a register in VF!\n");
+-		return SEC_LE;
+-	}
+ 	reg = readl_relaxed(qm->io_base + SEC_CONTROL_REG);
+-	/* BD little endian mode */
+-	if (!(reg & BIT(0)))
+-		return SEC_LE;
++	reg &= ~(BIT(1) | BIT(0));
++	if (!IS_ENABLED(CONFIG_64BIT))
++		reg |= BIT(1);
+ 
+-	/* BD 32-bits big endian mode */
+-	else if (!(reg & BIT(1)))
+-		return SEC_32BE;
+ 
+-	/* BD 64-bits big endian mode */
+-	else
+-		return SEC_64BE;
++	if (!IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
++		reg |= BIT(0);
++
++	writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG);
+ }
+ 
+ static void sec_open_sva_prefetch(struct hisi_qm *qm)
+@@ -429,9 +418,7 @@ static int sec_engine_init(struct hisi_qm *qm)
+ 		       qm->io_base + SEC_BD_ERR_CHK_EN_REG3);
+ 
+ 	/* config endian */
+-	reg = readl_relaxed(qm->io_base + SEC_CONTROL_REG);
+-	reg |= sec_get_endian(qm);
+-	writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG);
++	sec_set_endian(qm);
+ 
+ 	return 0;
+ }
+@@ -984,7 +971,8 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ err_alg_unregister:
+-	hisi_qm_alg_unregister(qm, &sec_devices);
++	if (qm->qp_num >= ctx_q_num)
++		hisi_qm_alg_unregister(qm, &sec_devices);
+ err_qm_stop:
+ 	sec_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
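
sec_set_endian() above now derives the BD endianness bits from the kernel configuration at compile time rather than reading them back from a register a VF cannot access. A sketch of the IS_ENABLED() bit-assembly idiom; the BIT(0)/BIT(1) meanings mirror the hunk, the rest is illustrative:

#include <linux/bits.h>
#include <linux/kconfig.h>
#include <linux/types.h>

static u32 example_endian_bits(u32 reg)
{
	reg &= ~(BIT(1) | BIT(0));	/* start from a known state */

	/* IS_ENABLED() folds to 0 or 1 at compile time, so these
	 * branches cost nothing at run time. */
	if (!IS_ENABLED(CONFIG_64BIT))
		reg |= BIT(1);
	if (!IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
		reg |= BIT(0);

	return reg;
}
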
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index d6a7784d29888..f397cc5bf1021 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -170,15 +170,19 @@ static struct dcp *global_sdcp;
+ 
+ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ {
++	int dma_err;
+ 	struct dcp *sdcp = global_sdcp;
+ 	const int chan = actx->chan;
+ 	uint32_t stat;
+ 	unsigned long ret;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+-
+ 	dma_addr_t desc_phys = dma_map_single(sdcp->dev, desc, sizeof(*desc),
+ 					      DMA_TO_DEVICE);
+ 
++	dma_err = dma_mapping_error(sdcp->dev, desc_phys);
++	if (dma_err)
++		return dma_err;
++
+ 	reinit_completion(&sdcp->completion[chan]);
+ 
+ 	/* Clear status register. */
+@@ -216,18 +220,29 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 			   struct skcipher_request *req, int init)
+ {
++	dma_addr_t key_phys, src_phys, dst_phys;
+ 	struct dcp *sdcp = global_sdcp;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ 	struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ 	int ret;
+ 
+-	dma_addr_t key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+-					     2 * AES_KEYSIZE_128,
+-					     DMA_TO_DEVICE);
+-	dma_addr_t src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+-					     DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_addr_t dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
+-					     DCP_BUF_SZ, DMA_FROM_DEVICE);
++	key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
++				  2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, key_phys);
++	if (ret)
++		return ret;
++
++	src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
++				  DCP_BUF_SZ, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, src_phys);
++	if (ret)
++		goto err_src;
++
++	dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
++				  DCP_BUF_SZ, DMA_FROM_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, dst_phys);
++	if (ret)
++		goto err_dst;
+ 
+ 	if (actx->fill % AES_BLOCK_SIZE) {
+ 		dev_err(sdcp->dev, "Invalid block size!\n");
+@@ -265,10 +280,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 	ret = mxs_dcp_start_dma(actx);
+ 
+ aes_done_run:
++	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
++err_dst:
++	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
++err_src:
+ 	dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ 			 DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
+ 
+ 	return ret;
+ }
+@@ -557,6 +574,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	dma_addr_t buf_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_in_buf,
+ 					     DCP_BUF_SZ, DMA_TO_DEVICE);
+ 
++	ret = dma_mapping_error(sdcp->dev, buf_phys);
++	if (ret)
++		return ret;
++
+ 	/* Fill in the DMA descriptor. */
+ 	desc->control0 = MXS_DCP_CONTROL0_DECR_SEMAPHORE |
+ 		    MXS_DCP_CONTROL0_INTERRUPT |
+@@ -589,6 +610,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	if (rctx->fini) {
+ 		digest_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_out_buf,
+ 					     DCP_SHA_PAY_SZ, DMA_FROM_DEVICE);
++		ret = dma_mapping_error(sdcp->dev, digest_phys);
++		if (ret)
++			goto done_run;
++
+ 		desc->control0 |= MXS_DCP_CONTROL0_HASH_TERM;
+ 		desc->payload = digest_phys;
+ 	}
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 0dd4c6b157de9..9b968ac4ee7b6 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1175,9 +1175,9 @@ static int omap_aes_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dd->lock);
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1264,9 +1264,9 @@ static int omap_aes_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
+index bc8631363d725..be77656864e3f 100644
+--- a/drivers/crypto/omap-des.c
++++ b/drivers/crypto/omap-des.c
+@@ -1033,9 +1033,9 @@ static int omap_des_probe(struct platform_device *pdev)
+ 
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize des crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1094,9 +1094,9 @@ static int omap_des_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index dd53ad9987b0d..63beea7cdba5e 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -1736,7 +1736,7 @@ static void omap_sham_done_task(unsigned long data)
+ 		if (test_and_clear_bit(FLAGS_OUTPUT_READY, &dd->flags))
+ 			goto finish;
+ 	} else if (test_bit(FLAGS_DMA_READY, &dd->flags)) {
+-		if (test_and_clear_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
++		if (test_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
+ 			omap_sham_update_dma_stop(dd);
+ 			if (dd->err) {
+ 				err = dd->err;
+@@ -2144,9 +2144,9 @@ static int omap_sham_probe(struct platform_device *pdev)
+ 		(rev & dd->pdata->major_mask) >> dd->pdata->major_shift,
+ 		(rev & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
+ 
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_add_tail(&dd->list, &sham.dev_list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+ 	if (!dd->engine) {
+@@ -2194,9 +2194,9 @@ err_algs:
+ err_engine_start:
+ 	crypto_engine_exit(dd->engine);
+ err_engine:
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ err_pm:
+ 	pm_runtime_disable(dev);
+ 	if (!dd->polling_mode)
+@@ -2215,9 +2215,9 @@ static int omap_sham_remove(struct platform_device *pdev)
+ 	dd = platform_get_drvdata(pdev);
+ 	if (!dd)
+ 		return -ENODEV;
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+ 			crypto_unregister_ahash(
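
All three OMAP crypto drivers switch their device-list locks to the _bh variants because the lists are also reached from tasklet (softirq) context; taking the plain lock in process context can deadlock if a softirq preempts the holder on the same CPU and spins on the same lock. A minimal sketch, with illustrative names:

#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(example_lock);
static LIST_HEAD(example_list);

/* Process context: disable bottom halves while holding the lock so
 * a tasklet on this CPU cannot interrupt us and spin forever. */
static void example_add(struct list_head *entry)
{
	spin_lock_bh(&example_lock);
	list_add_tail(entry, &example_list);
	spin_unlock_bh(&example_lock);
}

/* Tasklet (softirq) context: the plain variant is fine here, since
 * softirqs don't nest on the same CPU. */
static void example_tasklet_fn(unsigned long data)
{
	spin_lock(&example_lock);
	/* ... walk example_list ... */
	spin_unlock(&example_lock);
}
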
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+index 15f6b9bdfb221..ddf42fb326251 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+index d231583428c91..7e202ef925231 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
+index c61476553728d..dd4a811b7e89f 100644
+--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
+@@ -198,8 +198,8 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
+ void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ 
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev);
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev);
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
+ int adf_init_pf_wq(void);
+ void adf_exit_pf_wq(void);
+ int adf_init_vf_wq(void);
+@@ -222,12 +222,12 @@ static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+-static inline int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++static inline int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	return 0;
+ }
+ 
+-static inline void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++static inline void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_init.c b/drivers/crypto/qat/qat_common/adf_init.c
+index 744c40351428d..02864985dbb04 100644
+--- a/drivers/crypto/qat/qat_common/adf_init.c
++++ b/drivers/crypto/qat/qat_common/adf_init.c
+@@ -61,6 +61,7 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	struct service_hndl *service;
+ 	struct list_head *list_itr;
+ 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
++	int ret;
+ 
+ 	if (!hw_data) {
+ 		dev_err(&GET_DEV(accel_dev),
+@@ -127,9 +128,9 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	hw_data->enable_error_correction(accel_dev);
+-	hw_data->enable_vf2pf_comms(accel_dev);
++	ret = hw_data->enable_vf2pf_comms(accel_dev);
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(adf_dev_init);
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
+index e3ad5587be49e..daab02011717d 100644
+--- a/drivers/crypto/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_isr.c
+@@ -15,6 +15,8 @@
+ #include "adf_transport_access_macros.h"
+ #include "adf_transport_internal.h"
+ 
++#define ADF_MAX_NUM_VFS	32
++
+ static int adf_enable_msix(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_accel_pci *pci_dev_info = &accel_dev->accel_pci_dev;
+@@ -72,7 +74,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 		struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 		void __iomem *pmisc_bar_addr = pmisc->virt_addr;
+-		u32 vf_mask;
++		unsigned long vf_mask;
+ 
+ 		/* Get the interrupt sources triggered by VFs */
+ 		vf_mask = ((ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU5) &
+@@ -93,8 +95,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 			 * unless the VF is malicious and is attempting to
+ 			 * flood the host OS with VF2PF interrupts.
+ 			 */
+-			for_each_set_bit(i, (const unsigned long *)&vf_mask,
+-					 (sizeof(vf_mask) * BITS_PER_BYTE)) {
++			for_each_set_bit(i, &vf_mask, ADF_MAX_NUM_VFS) {
+ 				vf_info = accel_dev->pf.vf_info + i;
+ 
+ 				if (!__ratelimit(&vf_info->vf2pf_ratelimit)) {
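
In the adf_isr hunk, vf_mask becomes an unsigned long precisely so for_each_set_bit() can walk it without casts, bounded by the new ADF_MAX_NUM_VFS. A small sketch; the mask value and names are made up:

#include <linux/bitops.h>
#include <linux/printk.h>

#define EXAMPLE_MAX_VFS	32

static void example_handle_vf_irqs(unsigned long vf_mask)
{
	int i;

	/* Visits only the set bits: for 0b1010, i = 1, then i = 3. */
	for_each_set_bit(i, &vf_mask, EXAMPLE_MAX_VFS)
		pr_info("servicing VF %d\n", i);
}
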
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index a1b77bd7a8944..efa4bffb4f601 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -186,7 +186,6 @@ int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(adf_iov_putmsg);
+ 
+ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ {
+@@ -316,6 +315,8 @@ static int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ 	msg |= ADF_PFVF_COMPATIBILITY_VERSION << ADF_VF2PF_COMPAT_VER_REQ_SHIFT;
+ 	BUILD_BUG_ON(ADF_PFVF_COMPATIBILITY_VERSION > 255);
+ 
++	reinit_completion(&accel_dev->vf.iov_msg_completion);
++
+ 	/* Send request from VF to PF */
+ 	ret = adf_iov_putmsg(accel_dev, msg, 0);
+ 	if (ret) {
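
The pf2vf change re-arms the completion with reinit_completion() before each request, so a stale complete() from an earlier timed-out exchange cannot satisfy the next wait. A sketch of the pattern, with an illustrative send hook:

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>
#include <linux/types.h>

static DECLARE_COMPLETION(example_done);

static int example_request(void (*example_send)(u32 msg), u32 msg)
{
	/* Re-arm the completion *before* sending: if a previous
	 * exchange completed after its waiter timed out, that stale
	 * complete() would otherwise let this wait return at once. */
	reinit_completion(&example_done);

	example_send(msg);

	if (!wait_for_completion_timeout(&example_done,
					 msecs_to_jiffies(100)))
		return -ETIMEDOUT;
	return 0;
}
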
+diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+index e85bd62d134a4..3e25fac051b25 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+@@ -5,14 +5,14 @@
+ #include "adf_pf2vf_msg.h"
+ 
+ /**
+- * adf_vf2pf_init() - send init msg to PF
++ * adf_vf2pf_notify_init() - send init msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends an init message from the VF to a PF
+  *
+  * Return: 0 on success, error code otherwise.
+  */
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 		(ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -25,17 +25,17 @@ int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+ 	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_init);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
+ 
+ /**
+- * adf_vf2pf_shutdown() - send shutdown msg to PF
++ * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends a shutdown message from the VF to a PF
+  *
+  * Return: void
+  */
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 	    (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -45,4 +45,4 @@ void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Failed to send Shutdown event to PF\n");
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_shutdown);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 888388acb6bd3..3e4f64d248f9b 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -160,6 +160,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 	struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 	void __iomem *pmisc_bar_addr = pmisc->virt_addr;
++	bool handled = false;
+ 	u32 v_int;
+ 
+ 	/* Read VF INT source CSR to determine the source of VF interrupt */
+@@ -172,7 +173,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 
+ 		/* Schedule tasklet to handle interrupt BH */
+ 		tasklet_hi_schedule(&accel_dev->vf.pf2vf_bh_tasklet);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+ 	/* Check bundle interrupt */
+@@ -184,10 +185,10 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 		csr_ops->write_csr_int_flag_and_col(bank->csr_addr,
+ 						    bank->bank_number, 0);
+ 		tasklet_hi_schedule(&bank->resp_handler);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+-	return IRQ_NONE;
++	return handled ? IRQ_HANDLED : IRQ_NONE;
+ }
+ 
+ static int adf_request_msi_irq(struct adf_accel_dev *accel_dev)
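
adf_isr() above now accumulates a handled flag across both interrupt sources instead of returning at the first hit, since one raised line may carry both causes. The general handler shape, with stand-in source checks:

#include <linux/interrupt.h>

static bool check_source_a(void *dev) { return false; }	/* stand-ins */
static bool check_source_b(void *dev) { return false; }

static irqreturn_t example_isr(int irq, void *dev)
{
	bool handled = false;

	/* Service *every* pending source before returning, rather
	 * than returning IRQ_HANDLED at the first hit: both sources
	 * may be pending behind a single interrupt. */
	if (check_source_a(dev))
		handled = true;
	if (check_source_b(dev))
		handled = true;

	return handled ? IRQ_HANDLED : IRQ_NONE;
}
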
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+index f14fb82ed6dfc..744734caaf7b7 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 6ce0ed2ffaaf1..b4a024cb8b97d 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -33,9 +33,9 @@
+ #define I10NM_GET_DIMMMTR(m, i, j)	\
+ 	readl((m)->mbase + ((m)->hbm_mc ? 0x80c : 0x2080c) + \
+ 	(i) * (m)->chan_mmio_sz + (j) * 4)
+-#define I10NM_GET_MCDDRTCFG(m, i, j)	\
++#define I10NM_GET_MCDDRTCFG(m, i)	\
+ 	readl((m)->mbase + ((m)->hbm_mc ? 0x970 : 0x20970) + \
+-	(i) * (m)->chan_mmio_sz + (j) * 4)
++	(i) * (m)->chan_mmio_sz)
+ #define I10NM_GET_MCMTR(m, i)		\
+ 	readl((m)->mbase + ((m)->hbm_mc ? 0xef8 : 0x20ef8) + \
+ 	(i) * (m)->chan_mmio_sz)
+@@ -321,10 +321,10 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci,
+ 
+ 		ndimms = 0;
+ 		amap = I10NM_GET_AMAP(imc, i);
++		mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i);
+ 		for (j = 0; j < imc->num_dimms; j++) {
+ 			dimm = edac_get_dimm(mci, i, j, 0);
+ 			mtr = I10NM_GET_DIMMMTR(imc, i, j);
+-			mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i, j);
+ 			edac_dbg(1, "dimmmtr 0x%x mcddrtcfg 0x%x (mc%d ch%d dimm%d)\n",
+ 				 mtr, mcddrtcfg, imc->mc, i, j);
+ 
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index 27d56920b4690..67dbf4c312716 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -1246,6 +1246,9 @@ static int __init mce_amd_init(void)
+ 	    c->x86_vendor != X86_VENDOR_HYGON)
+ 		return -ENODEV;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	if (boot_cpu_has(X86_FEATURE_SMCA)) {
+ 		xec_mask = 0x3f;
+ 		goto out;
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 250e016807422..4b8978b254f9a 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -329,12 +329,18 @@ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ 
+ 	fw = platform_get_drvdata(pdev);
+ 	if (!fw)
+-		return NULL;
++		goto err_put_device;
+ 
+ 	if (!kref_get_unless_zero(&fw->consumers))
+-		return NULL;
++		goto err_put_device;
++
++	put_device(&pdev->dev);
+ 
+ 	return fw;
++
++err_put_device:
++	put_device(&pdev->dev);
++	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rpi_firmware_get);
+ 
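
The rpi_firmware_get() fix is a reference-count balance: the platform-device lookup takes a reference on &pdev->dev, and every exit path must now drop it. A condensed sketch of the same balance; the lookup and payload here are illustrative:

#include <linux/of_platform.h>
#include <linux/platform_device.h>

static void *example_get(struct device_node *np)
{
	struct platform_device *pdev = of_find_device_by_node(np);
	void *payload;

	if (!pdev)
		return NULL;

	payload = platform_get_drvdata(pdev);

	/* of_find_device_by_node() returned &pdev->dev with an extra
	 * reference; drop it on success *and* failure, since only the
	 * payload (with its own lifetime rules) escapes. */
	put_device(&pdev->dev);

	return payload;
}
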
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+index b8655ff73a658..cc9c9f8b23b2c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+@@ -160,17 +160,28 @@ static int acp_poweron(struct generic_pm_domain *genpd)
+ 	return 0;
+ }
+ 
+-static struct device *get_mfd_cell_dev(const char *device_name, int r)
++static int acp_genpd_add_device(struct device *dev, void *data)
+ {
+-	char auto_dev_name[25];
+-	struct device *dev;
++	struct generic_pm_domain *gpd = data;
++	int ret;
+ 
+-	snprintf(auto_dev_name, sizeof(auto_dev_name),
+-		 "%s.%d.auto", device_name, r);
+-	dev = bus_find_device_by_name(&platform_bus_type, NULL, auto_dev_name);
+-	dev_info(dev, "device %s added to pm domain\n", auto_dev_name);
++	ret = pm_genpd_add_device(gpd, dev);
++	if (ret)
++		dev_err(dev, "Failed to add dev to genpd %d\n", ret);
+ 
+-	return dev;
++	return ret;
++}
++
++static int acp_genpd_remove_device(struct device *dev, void *data)
++{
++	int ret;
++
++	ret = pm_genpd_remove_device(dev);
++	if (ret)
++		dev_err(dev, "Failed to remove dev from genpd %d\n", ret);
++
++	/* Continue to remove */
++	return 0;
+ }
+ 
+ /**
+@@ -181,11 +192,10 @@ static struct device *get_mfd_cell_dev(const char *device_name, int r)
+  */
+ static int acp_hw_init(void *handle)
+ {
+-	int r, i;
++	int r;
+ 	uint64_t acp_base;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct i2s_platform_data *i2s_pdata = NULL;
+ 
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -341,15 +351,10 @@ static int acp_hw_init(void *handle)
+ 	if (r)
+ 		goto failure;
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
+-		if (r) {
+-			dev_err(dev, "Failed to add dev to genpd\n");
+-			goto failure;
+-		}
+-	}
+-
++	r = device_for_each_child(adev->acp.parent, &adev->acp.acp_genpd->gpd,
++				  acp_genpd_add_device);
++	if (r)
++		goto failure;
+ 
+ 	/* Assert Soft reset of ACP */
+ 	val = cgs_read_register(adev->acp.cgs_device, mmACP_SOFT_RESET);
+@@ -410,10 +415,8 @@ failure:
+  */
+ static int acp_hw_fini(void *handle)
+ {
+-	int i, ret;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+ 	/* return early if no ACP */
+@@ -458,13 +461,8 @@ static int acp_hw_fini(void *handle)
+ 		udelay(100);
+ 	}
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		ret = pm_genpd_remove_device(dev);
+-		/* If removal fails, dont giveup and try rest */
+-		if (ret)
+-			dev_err(dev, "remove dev from genpd failed\n");
+-	}
++	device_for_each_child(adev->acp.parent, NULL,
++			      acp_genpd_remove_device);
+ 
+ 	mfd_remove_devices(adev->acp.parent);
+ 	kfree(adev->acp.acp_res);
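
Rather than reconstructing "%s.%d.auto" child names and looking each one up, the ACP code now iterates the parent's children with device_for_each_child() and callbacks. A brief sketch; example_attach() is made up, the iterator is the real driver-core helper:

#include <linux/device.h>

static int example_attach(struct device *child, void *data)
{
	/* data is whatever the caller passed through, e.g. a PM
	 * domain; returning nonzero aborts the walk and propagates
	 * the value back to the caller. */
	dev_info(child, "attaching to %p\n", data);
	return 0;
}

static int example_attach_all(struct device *parent, void *domain)
{
	return device_for_each_child(parent, domain, example_attach);
}
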
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index e802f9a95f087..415be74df28c7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -55,7 +55,7 @@
+ 
+ #undef __SMU_DUMMY_MAP
+ #define __SMU_DUMMY_MAP(type)	#type
+-static const char* __smu_message_names[] = {
++static const char * const __smu_message_names[] = {
+ 	SMU_MESSAGE_TYPES
+ };
+ 
+@@ -76,55 +76,256 @@ static void smu_cmn_read_arg(struct smu_context *smu,
+ 	*arg = RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82);
+ }
+ 
+-int smu_cmn_wait_for_response(struct smu_context *smu)
++/* Redefine the SMU error codes here.
++ *
++ * Note that these definitions are redundant and should be removed
++ * when the SMU has exported a unified header file containing these
++ * macros, which we can then simply include and use. At the
++ * moment, these error codes are unfortunately defined per-ASIC
++ * by the SMU, yet we're one driver for all ASICs.
++ */
++#define SMU_RESP_NONE           0
++#define SMU_RESP_OK             1
++#define SMU_RESP_CMD_FAIL       0xFF
++#define SMU_RESP_CMD_UNKNOWN    0xFE
++#define SMU_RESP_CMD_BAD_PREREQ 0xFD
++#define SMU_RESP_BUSY_OTHER     0xFC
++#define SMU_RESP_DEBUG_END      0xFB
++
++/**
++ * __smu_cmn_poll_stat -- poll for a status from the SMU
++ * @smu: a pointer to an SMU context
++ *
++ * Returns the status of the SMU, which could be,
++ *    0, the SMU is busy with your previous command;
++ *    1, execution status: success, execution result: success;
++ * 0xFF, execution status: success, execution result: failure;
++ * 0xFE, unknown command;
++ * 0xFD, valid command, but bad (command) prerequisites;
++ * 0xFC, the command was rejected as the SMU is busy;
++ * 0xFB, "SMC_Result_DebugDataDumpEnd".
++ *
++ * Ideally these values would come from a single header file defined
++ * and maintained by the SMU FW team, so that we're impervious to
++ * firmware changes. At the moment they are defined in various header
++ * files, one for each ASIC, yet here we're a single ASIC-agnostic
++ * interface. Switching to such a header can be followed up by a
++ * subsequent patch.
++ */
++static u32 __smu_cmn_poll_stat(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	uint32_t cur_value, i, timeout = adev->usec_timeout * 20;
++	int timeout = adev->usec_timeout * 20;
++	u32 reg;
+ 
+-	for (i = 0; i < timeout; i++) {
+-		cur_value = RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90);
+-		if ((cur_value & MP1_C2PMSG_90__CONTENT_MASK) != 0)
+-			return cur_value;
++	for ( ; timeout > 0; timeout--) {
++		reg = RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90);
++		if ((reg & MP1_C2PMSG_90__CONTENT_MASK) != 0)
++			break;
+ 
+ 		udelay(1);
+ 	}
+ 
+-	/* timeout means wrong logic */
+-	if (i == timeout)
+-		return -ETIME;
+-
+-	return RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90);
++	return reg;
+ }
+ 
+-int smu_cmn_send_msg_without_waiting(struct smu_context *smu,
+-				     uint16_t msg, uint32_t param)
++static void __smu_cmn_reg_print_error(struct smu_context *smu,
++				      u32 reg_c2pmsg_90,
++				      int msg_index,
++				      u32 param,
++				      enum smu_message_type msg)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	int ret;
++	const char *message = smu_get_message_name(smu, msg);
+ 
+-	ret = smu_cmn_wait_for_response(smu);
+-	if (ret != 0x1) {
+-		dev_err(adev->dev, "Msg issuing pre-check failed(0x%x) and "
+-		       "SMU may be not in the right state!\n", ret);
+-		if (ret != -ETIME)
+-			ret = -EIO;
+-		return ret;
++	switch (reg_c2pmsg_90) {
++	case SMU_RESP_NONE:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: I'm not done with your previous command!");
++		break;
++	case SMU_RESP_OK:
++		/* The SMU executed the command. It completed with a
++		 * successful result.
++		 */
++		break;
++	case SMU_RESP_CMD_FAIL:
++		/* The SMU executed the command. It completed with an
++		 * unsuccessful result.
++		 */
++		break;
++	case SMU_RESP_CMD_UNKNOWN:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: unknown command: index:%d param:0x%08X message:%s",
++				    msg_index, param, message);
++		break;
++	case SMU_RESP_CMD_BAD_PREREQ:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: valid command, bad prerequisites: index:%d param:0x%08X message:%s",
++				    msg_index, param, message);
++		break;
++	case SMU_RESP_BUSY_OTHER:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: I'm very busy for your command: index:%d param:0x%08X message:%s",
++				    msg_index, param, message);
++		break;
++	case SMU_RESP_DEBUG_END:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: I'm debugging!");
++		break;
++	default:
++		dev_err_ratelimited(adev->dev,
++				    "SMU: response:0x%08X for index:%d param:0x%08X message:%s?",
++				    reg_c2pmsg_90, msg_index, param, message);
++		break;
+ 	}
++}
++
++static int __smu_cmn_reg2errno(struct smu_context *smu, u32 reg_c2pmsg_90)
++{
++	int res;
++
++	switch (reg_c2pmsg_90) {
++	case SMU_RESP_NONE:
++		/* The SMU is busy--still executing your command.
++		 */
++		res = -ETIME;
++		break;
++	case SMU_RESP_OK:
++		res = 0;
++		break;
++	case SMU_RESP_CMD_FAIL:
++		/* Command completed successfully, but the command
++		 * status was failure.
++		 */
++		res = -EIO;
++		break;
++	case SMU_RESP_CMD_UNKNOWN:
++		/* Unknown command--ignored by the SMU.
++		 */
++		res = -EOPNOTSUPP;
++		break;
++	case SMU_RESP_CMD_BAD_PREREQ:
++		/* Valid command--bad prerequisites.
++		 */
++		res = -EINVAL;
++		break;
++	case SMU_RESP_BUSY_OTHER:
++		/* The SMU is busy with other commands. The client
++		 * should retry in 10 us.
++		 */
++		res = -EBUSY;
++		break;
++	default:
++		/* Unknown or debug response from the SMU.
++		 */
++		res = -EREMOTEIO;
++		break;
++	}
++
++	return res;
++}
++
++static void __smu_cmn_send_msg(struct smu_context *smu,
++			       u16 msg,
++			       u32 param)
++{
++	struct amdgpu_device *adev = smu->adev;
+ 
+ 	WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0);
+ 	WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82, param);
+ 	WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, msg);
++}
+ 
+-	return 0;
++/**
++ * smu_cmn_send_msg_without_waiting -- send the message; don't wait for status
++ * @smu: pointer to an SMU context
++ * @msg_index: message index
++ * @param: message parameter to send to the SMU
++ *
++ * Send a message to the SMU with the parameter passed. Do not wait
++ * for status/result of the message, thus the "without_waiting".
++ *
++ * Return 0 on success, -errno on error if we weren't able to _send_
++ * the message for some reason. See __smu_cmn_reg2errno() for details
++ * of the -errno.
++ */
++int smu_cmn_send_msg_without_waiting(struct smu_context *smu,
++				     uint16_t msg_index,
++				     uint32_t param)
++{
++	u32 reg;
++	int res;
++
++	if (smu->adev->no_hw_access)
++		return 0;
++
++	reg = __smu_cmn_poll_stat(smu);
++	res = __smu_cmn_reg2errno(smu, reg);
++	if (reg == SMU_RESP_NONE ||
++	    reg == SMU_RESP_BUSY_OTHER ||
++	    res == -EREMOTEIO)
++		goto Out;
++	__smu_cmn_send_msg(smu, msg_index, param);
++	res = 0;
++Out:
++	return res;
++}
++
++/**
++ * smu_cmn_wait_for_response -- wait for response from the SMU
++ * @smu: pointer to an SMU context
++ *
++ * Wait for status from the SMU.
++ *
++ * Return 0 on success, -errno on error, indicating the execution
++ * status and result of the message being waited for. See
++ * __smu_cmn_reg2errno() for details of the -errno.
++ */
++int smu_cmn_wait_for_response(struct smu_context *smu)
++{
++	u32 reg;
++
++	reg = __smu_cmn_poll_stat(smu);
++	return __smu_cmn_reg2errno(smu, reg);
+ }
+ 
++/**
++ * smu_cmn_send_smc_msg_with_param -- send a message with parameter
++ * @smu: pointer to an SMU context
++ * @msg: message to send
++ * @param: parameter to send to the SMU
++ * @read_arg: pointer to u32 to return a value from the SMU back
++ *            to the caller
++ *
++ * Send the message @msg with parameter @param to the SMU, wait for
++ * completion of the command, and return back a value from the SMU in
++ * @read_arg pointer.
++ *
++ * Return 0 on success, -errno on error, if we weren't able to send
++ * the message or if the message completed with some kind of
++ * error. See __smu_cmn_reg2errno() for details of the -errno.
++ *
++ * If we weren't able to send the message to the SMU, we also print
++ * the error to the standard log.
++ *
++ * Command completion status is printed only if the -errno is
++ * -EREMOTEIO, indicating that the SMU returned back an
++ * undefined/unknown/unspecified result. All other cases are
++ * well-defined, not printed, but instead given back to the client to
++ * decide what further to do.
++ *
++ * The value in @read_arg is read back regardless, to give the
++ * client more information; on error it would most likely be
++ * @param, but we can't assume that. This also eliminates more
++ * conditionals.
++ */
+ int smu_cmn_send_smc_msg_with_param(struct smu_context *smu,
+ 				    enum smu_message_type msg,
+ 				    uint32_t param,
+ 				    uint32_t *read_arg)
+ {
+-	struct amdgpu_device *adev = smu->adev;
+-	int ret = 0, index = 0;
++	int res, index;
++	u32 reg;
+ 
+ 	if (smu->adev->no_hw_access)
+ 		return 0;
+@@ -136,31 +337,24 @@ int smu_cmn_send_smc_msg_with_param(struct smu_context *smu,
+ 		return index == -EACCES ? 0 : index;
+ 
+ 	mutex_lock(&smu->message_lock);
+-	ret = smu_cmn_send_msg_without_waiting(smu, (uint16_t)index, param);
+-	if (ret)
+-		goto out;
+-
+-	ret = smu_cmn_wait_for_response(smu);
+-	if (ret != 0x1) {
+-		if (ret == -ETIME) {
+-			dev_err(adev->dev, "message: %15s (%d) \tparam: 0x%08x is timeout (no response)\n",
+-				smu_get_message_name(smu, msg), index, param);
+-		} else {
+-			dev_err(adev->dev, "failed send message: %15s (%d) \tparam: 0x%08x response %#x\n",
+-				smu_get_message_name(smu, msg), index, param,
+-				ret);
+-			ret = -EIO;
+-		}
+-		goto out;
++	reg = __smu_cmn_poll_stat(smu);
++	res = __smu_cmn_reg2errno(smu, reg);
++	if (reg == SMU_RESP_NONE ||
++	    reg == SMU_RESP_BUSY_OTHER ||
++	    res == -EREMOTEIO) {
++		__smu_cmn_reg_print_error(smu, reg, index, param, msg);
++		goto Out;
+ 	}
+-
++	__smu_cmn_send_msg(smu, (uint16_t) index, param);
++	reg = __smu_cmn_poll_stat(smu);
++	res = __smu_cmn_reg2errno(smu, reg);
++	if (res == -EREMOTEIO)
++		__smu_cmn_reg_print_error(smu, reg, index, param, msg);
+ 	if (read_arg)
+ 		smu_cmn_read_arg(smu, read_arg);
+-
+-	ret = 0; /* 0 as driver return value */
+-out:
++Out:
+ 	mutex_unlock(&smu->message_lock);
+-	return ret;
++	return res;
+ }
+ 
+ int smu_cmn_send_smc_msg(struct smu_context *smu,
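
__smu_cmn_reg2errno() centralizes the translation from firmware status codes to kernel errnos, which the callers above then just propagate. A standalone sketch of that mapping shape; the status codes mirror the SMU_RESP_* values, the rest is illustrative:

#include <errno.h>
#include <stdio.h>

static int example_status_to_errno(unsigned int status)
{
	switch (status) {
	case 0x00: return -ETIME;	/* still busy with this command */
	case 0x01: return 0;		/* executed, result: success */
	case 0xFF: return -EIO;		/* executed, result: failure */
	case 0xFE: return -EOPNOTSUPP;	/* unknown command */
	case 0xFD: return -EINVAL;	/* bad prerequisites */
	case 0xFC: return -EBUSY;	/* busy with another command */
	default:   return -EREMOTEIO;	/* undefined/debug response */
	}
}

int main(void)
{
	printf("0xFC -> %d\n", example_status_to_errno(0xFC));
	return 0;
}
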
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 9add5f16ff562..16993daa2ae04 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -27,7 +27,8 @@
+ 
+ #if defined(SWSMU_CODE_LAYER_L2) || defined(SWSMU_CODE_LAYER_L3) || defined(SWSMU_CODE_LAYER_L4)
+ int smu_cmn_send_msg_without_waiting(struct smu_context *smu,
+-				     uint16_t msg, uint32_t param);
++				     uint16_t msg_index,
++				     uint32_t param);
+ int smu_cmn_send_smc_msg_with_param(struct smu_context *smu,
+ 				    enum smu_message_type msg,
+ 				    uint32_t param,
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 7149ed40af83c..2f2a09adb4bc8 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -536,6 +536,8 @@ static int it66121_bridge_attach(struct drm_bridge *bridge,
+ 		return -EINVAL;
+ 
+ 	ret = drm_bridge_attach(bridge->encoder, ctx->next_bridge, bridge, flags);
++	if (ret)
++		return ret;
+ 
+ 	ret = regmap_write_bits(ctx->regmap, IT66121_CLK_BANK_REG,
+ 				IT66121_CLK_BANK_PWROFF_RCLK, 0);
+diff --git a/drivers/gpu/drm/drm_of.c b/drivers/gpu/drm/drm_of.c
+index ca04c34e82518..997b8827fed27 100644
+--- a/drivers/gpu/drm/drm_of.c
++++ b/drivers/gpu/drm/drm_of.c
+@@ -315,7 +315,7 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 
+ 		remote_port = of_graph_get_remote_port(endpoint);
+ 		if (!remote_port) {
+-			of_node_put(remote_port);
++			of_node_put(endpoint);
+ 			return -EPIPE;
+ 		}
+ 
+@@ -331,8 +331,10 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 		 * configurations by passing the endpoints explicitly to
+ 		 * drm_of_lvds_get_dual_link_pixel_order().
+ 		 */
+-		if (!current_pt || pixels_type != current_pt)
++		if (!current_pt || pixels_type != current_pt) {
++			of_node_put(endpoint);
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	return pixels_type;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+index cab4d2c370a71..0ed665501ac48 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+@@ -897,13 +897,14 @@ static void g2d_runqueue_worker(struct work_struct *work)
+ 			ret = pm_runtime_resume_and_get(g2d->dev);
+ 			if (ret < 0) {
+ 				dev_err(g2d->dev, "failed to enable G2D device.\n");
+-				return;
++				goto out;
+ 			}
+ 
+ 			g2d_dma_start(g2d, g2d->runqueue_node);
+ 		}
+ 	}
+ 
++out:
+ 	mutex_unlock(&g2d->runqueue_mutex);
+ }
+ 
+diff --git a/drivers/gpu/drm/gma500/oaktrail_lvds.c b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+index 432bdcc57ac9e..a1332878857b2 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_lvds.c
++++ b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+@@ -117,7 +117,7 @@ static void oaktrail_lvds_mode_set(struct drm_encoder *encoder,
+ 			continue;
+ 	}
+ 
+-	if (!connector) {
++	if (list_entry_is_head(connector, &mode_config->connector_list, head)) {
+ 		DRM_ERROR("Couldn't find connector when setting mode");
+ 		gma_power_end(dev);
+ 		return;
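
The oaktrail_lvds fix is the classic list-iterator pitfall: after list_for_each_entry() the cursor is never NULL, so a miss must be detected with list_entry_is_head(). A short sketch with made-up names:

#include <linux/list.h>

struct example_conn {
	struct list_head head;
	int id;
};

static struct example_conn *example_find(struct list_head *conns, int id)
{
	struct example_conn *c;

	list_for_each_entry(c, conns, head)
		if (c->id == id)
			break;

	/* After the loop, c is never NULL: on a miss it points at a
	 * bogus entry computed from the list head itself, so the old
	 * "if (!c)" style check was always false. */
	if (list_entry_is_head(c, conns, head))
		return NULL;	/* not found */

	return c;
}
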
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index f8a74f6cdc4cb..64740ddb983ea 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -345,10 +345,12 @@ static void dpu_hw_ctl_clear_all_blendstages(struct dpu_hw_ctl *ctx)
+ 	int i;
+ 
+ 	for (i = 0; i < ctx->mixer_count; i++) {
+-		DPU_REG_WRITE(c, CTL_LAYER(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT2(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT3(LM_0 + i), 0);
++		enum dpu_lm mixer_id = ctx->mixer_hw_caps[i].id;
++
++		DPU_REG_WRITE(c, CTL_LAYER(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT2(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT3(mixer_id), 0);
+ 	}
+ 
+ 	DPU_REG_WRITE(c, CTL_FETCH_PIPE_ACTIVE, 0);
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 4a5b518288b06..0712752742f4f 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -19,30 +19,12 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ {
+ 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+ 	struct drm_device *dev = mdp4_kms->dev;
+-	uint32_t version, major, minor, dmap_cfg, vg_cfg;
++	u32 dmap_cfg, vg_cfg;
+ 	unsigned long clk;
+ 	int ret = 0;
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
+-	mdp4_enable(mdp4_kms);
+-	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
+-	mdp4_disable(mdp4_kms);
+-
+-	major = FIELD(version, MDP4_VERSION_MAJOR);
+-	minor = FIELD(version, MDP4_VERSION_MINOR);
+-
+-	DBG("found MDP4 version v%d.%d", major, minor);
+-
+-	if (major != 4) {
+-		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
+-				major, minor);
+-		ret = -ENXIO;
+-		goto out;
+-	}
+-
+-	mdp4_kms->rev = minor;
+-
+ 	if (mdp4_kms->rev > 1) {
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER0, 0x0707ffff);
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER1, 0x03073f3f);
+@@ -88,7 +70,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ 	if (mdp4_kms->rev > 1)
+ 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
+ 
+-out:
+ 	pm_runtime_put_sync(dev->dev);
+ 
+ 	return ret;
+@@ -411,6 +392,22 @@ fail:
+ 	return ret;
+ }
+ 
++static void read_mdp_hw_revision(struct mdp4_kms *mdp4_kms,
++				 u32 *major, u32 *minor)
++{
++	struct drm_device *dev = mdp4_kms->dev;
++	u32 version;
++
++	mdp4_enable(mdp4_kms);
++	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
++	mdp4_disable(mdp4_kms);
++
++	*major = FIELD(version, MDP4_VERSION_MAJOR);
++	*minor = FIELD(version, MDP4_VERSION_MINOR);
++
++	DRM_DEV_INFO(dev->dev, "MDP4 version v%d.%d", *major, *minor);
++}
++
+ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+@@ -419,6 +416,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+ 	int irq, ret;
++	u32 major, minor;
+ 
+ 	mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
+ 	if (!mdp4_kms) {
+@@ -479,15 +477,6 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	if (IS_ERR(mdp4_kms->pclk))
+ 		mdp4_kms->pclk = NULL;
+ 
+-	if (mdp4_kms->rev >= 2) {
+-		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
+-		if (IS_ERR(mdp4_kms->lut_clk)) {
+-			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
+-			ret = PTR_ERR(mdp4_kms->lut_clk);
+-			goto fail;
+-		}
+-	}
+-
+ 	mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "bus_clk");
+ 	if (IS_ERR(mdp4_kms->axi_clk)) {
+ 		DRM_DEV_ERROR(dev->dev, "failed to get axi_clk\n");
+@@ -496,8 +485,27 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	}
+ 
+ 	clk_set_rate(mdp4_kms->clk, config->max_clk);
+-	if (mdp4_kms->lut_clk)
++
++	read_mdp_hw_revision(mdp4_kms, &major, &minor);
++
++	if (major != 4) {
++		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
++			      major, minor);
++		ret = -ENXIO;
++		goto fail;
++	}
++
++	mdp4_kms->rev = minor;
++
++	if (mdp4_kms->rev >= 2) {
++		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
++		if (IS_ERR(mdp4_kms->lut_clk)) {
++			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
++			ret = PTR_ERR(mdp4_kms->lut_clk);
++			goto fail;
++		}
+ 		clk_set_rate(mdp4_kms->lut_clk, config->max_clk);
++	}
+ 
+ 	pm_runtime_enable(dev->dev);
+ 	mdp4_kms->rpm_enabled = true;
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 867388a399adf..997fd67f73799 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -55,7 +55,6 @@ enum {
+ 	EV_HPD_INIT_SETUP,
+ 	EV_HPD_PLUG_INT,
+ 	EV_IRQ_HPD_INT,
+-	EV_HPD_REPLUG_INT,
+ 	EV_HPD_UNPLUG_INT,
+ 	EV_USER_NOTIFICATION,
+ 	EV_CONNECT_PENDING_TIMEOUT,
+@@ -1119,9 +1118,6 @@ static int hpd_event_thread(void *data)
+ 		case EV_IRQ_HPD_INT:
+ 			dp_irq_hpd_handle(dp_priv, todo->data);
+ 			break;
+-		case EV_HPD_REPLUG_INT:
+-			/* do nothing */
+-			break;
+ 		case EV_USER_NOTIFICATION:
+ 			dp_display_send_hpd_notification(dp_priv,
+ 						todo->data);
+@@ -1165,10 +1161,8 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ 
+ 	if (hpd_isr_status & 0x0F) {
+ 		/* hpd related interrupts */
+-		if (hpd_isr_status & DP_DP_HPD_PLUG_INT_MASK ||
+-			hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK) {
++		if (hpd_isr_status & DP_DP_HPD_PLUG_INT_MASK)
+ 			dp_add_event(dp, EV_HPD_PLUG_INT, 0, 0);
+-		}
+ 
+ 		if (hpd_isr_status & DP_DP_IRQ_HPD_INT_MASK) {
+ 			/* stop sentinel connect pending checking */
+@@ -1176,8 +1170,10 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ 			dp_add_event(dp, EV_IRQ_HPD_INT, 0, 0);
+ 		}
+ 
+-		if (hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK)
+-			dp_add_event(dp, EV_HPD_REPLUG_INT, 0, 0);
++		if (hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK) {
++			dp_add_event(dp, EV_HPD_UNPLUG_INT, 0, 0);
++			dp_add_event(dp, EV_HPD_PLUG_INT, 0, 3);
++		}
+ 
+ 		if (hpd_isr_status & DP_DP_HPD_UNPLUG_INT_MASK)
+ 			dp_add_event(dp, EV_HPD_UNPLUG_INT, 0, 0);
+@@ -1286,7 +1282,7 @@ static int dp_pm_resume(struct device *dev)
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct msm_dp *dp_display = platform_get_drvdata(pdev);
+ 	struct dp_display_private *dp;
+-	u32 status;
++	int sink_count = 0;
+ 
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+@@ -1300,14 +1296,25 @@ static int dp_pm_resume(struct device *dev)
+ 
+ 	dp_catalog_ctrl_hpd_config(dp->catalog);
+ 
+-	status = dp_catalog_link_is_connected(dp->catalog);
++	/*
++	 * set sink to normal operation mode -- D0
++	 * before dpcd read
++	 */
++	dp_link_psm_config(dp->link, &dp->panel->link_info, false);
++
++	if (dp_catalog_link_is_connected(dp->catalog)) {
++		sink_count = drm_dp_read_sink_count(dp->aux);
++		if (sink_count < 0)
++			sink_count = 0;
++	}
+ 
++	dp->link->sink_count = sink_count;
+ 	/*
+ 	 * can not declared display is connected unless
+ 	 * HDMI cable is plugged in and sink_count of
+ 	 * dongle become 1
+ 	 */
+-	if (status && dp->link->sink_count)
++	if (dp->link->sink_count)
+ 		dp->dp_display.is_connected = true;
+ 	else
+ 		dp->dp_display.is_connected = false;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 75afc12a7b25a..29d11f1cb79b0 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -26,8 +26,10 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 	}
+ 
+ 	phy_pdev = of_find_device_by_node(phy_node);
+-	if (phy_pdev)
++	if (phy_pdev) {
+ 		msm_dsi->phy = platform_get_drvdata(phy_pdev);
++		msm_dsi->phy_dev = &phy_pdev->dev;
++	}
+ 
+ 	of_node_put(phy_node);
+ 
+@@ -36,8 +38,6 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
+-	msm_dsi->phy_dev = get_device(&phy_pdev->dev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 9b8fa2ad0d840..729ab68d02034 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -539,6 +539,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 		if (IS_ERR(priv->event_thread[i].worker)) {
+ 			ret = PTR_ERR(priv->event_thread[i].worker);
+ 			DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
++			ret = PTR_ERR(priv->event_thread[i].worker);
+ 			goto err_msm_uninit;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 6da93551e2e5f..c277d3f61a5ef 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -51,6 +51,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0xff,
+ 		.hs_wdth_shift	= 24,
+ 		.has_overlay	= false,
++		.has_ctrl2	= false,
+ 	},
+ 	[MXSFB_V4] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -59,6 +60,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= false,
++		.has_ctrl2	= true,
+ 	},
+ 	[MXSFB_V6] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -67,6 +69,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= true,
++		.has_ctrl2	= true,
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.h b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+index 399d23e91ed10..7c720e226fdfd 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+@@ -22,6 +22,7 @@ struct mxsfb_devdata {
+ 	unsigned int	hs_wdth_mask;
+ 	unsigned int	hs_wdth_shift;
+ 	bool		has_overlay;
++	bool		has_ctrl2;
+ };
+ 
+ struct mxsfb_drm_private {
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+index 300e7bab0f431..54f905ac75c07 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+@@ -107,6 +107,14 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 		clk_prepare_enable(mxsfb->clk_disp_axi);
+ 	clk_prepare_enable(mxsfb->clk);
+ 
++	/* Increase number of outstanding requests on all supported IPs */
++	if (mxsfb->devdata->has_ctrl2) {
++		reg = readl(mxsfb->base + LCDC_V4_CTRL2);
++		reg &= ~CTRL2_SET_OUTSTANDING_REQS_MASK;
++		reg |= CTRL2_SET_OUTSTANDING_REQS_16;
++		writel(reg, mxsfb->base + LCDC_V4_CTRL2);
++	}
++
+ 	/* If it was disabled, re-enable the mode again */
+ 	writel(CTRL_DOTCLK_MODE, mxsfb->base + LCDC_CTRL + REG_SET);
+ 
+@@ -115,6 +123,35 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 	reg |= VDCTRL4_SYNC_SIGNALS_ON;
+ 	writel(reg, mxsfb->base + LCDC_VDCTRL4);
+ 
++	/*
++	 * Enable recovery on underflow.
++	 *
++	 * There is some sort of corner case behavior of the controller,
++	 * which could rarely be triggered at least on i.MX6SX connected
++	 * to 800x480 DPI panel and i.MX8MM connected to DPI->DSI->LVDS
++	 * bridged 1920x1080 panel (and likely on other setups too), where
++	 * the image on the panel shifts to the right and wraps around.
++	 * This happens either when the controller is enabled on boot or
++	 * even later during run time. The condition does not correct
++	 * itself automatically, i.e. the display image remains shifted.
++	 *
++	 * It seems this problem is known and is due to sporadic underflows
++	 * of the LCDIF FIFO. While the LCDIF IP does have underflow/overflow
++	 * IRQs, neither of the IRQs trigger and neither IRQ status bit is
++	 * asserted when this condition occurs.
++	 *
++	 * All known revisions of the LCDIF IP have CTRL1 RECOVER_ON_UNDERFLOW
++	 * bit, which is described in the reference manual since i.MX23 as
++	 * "
++	 *   Set this bit to enable the LCDIF block to recover in the next
++	 *   field/frame if there was an underflow in the current field/frame.
++	 * "
++	 * Enable this bit to mitigate the sporadic underflows.
++	 */
++	reg = readl(mxsfb->base + LCDC_CTRL1);
++	reg |= CTRL1_RECOVER_ON_UNDERFLOW;
++	writel(reg, mxsfb->base + LCDC_CTRL1);
++
+ 	writel(CTRL_RUN, mxsfb->base + LCDC_CTRL + REG_SET);
+ }
+ 
+@@ -206,6 +243,9 @@ static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb)
+ 
+ 	/* Clear the FIFOs */
+ 	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
++	readl(mxsfb->base + LCDC_CTRL1);
++	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++	readl(mxsfb->base + LCDC_CTRL1);
+ 
+ 	if (mxsfb->devdata->has_overlay)
+ 		writel(0, mxsfb->base + LCDC_AS_CTRL);
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_regs.h b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+index 55d28a27f9124..694fea13e893e 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_regs.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+@@ -15,6 +15,7 @@
+ #define LCDC_CTRL			0x00
+ #define LCDC_CTRL1			0x10
+ #define LCDC_V3_TRANSFER_COUNT		0x20
++#define LCDC_V4_CTRL2			0x20
+ #define LCDC_V4_TRANSFER_COUNT		0x30
+ #define LCDC_V4_CUR_BUF			0x40
+ #define LCDC_V4_NEXT_BUF		0x50
+@@ -54,12 +55,20 @@
+ #define CTRL_DF24			BIT(1)
+ #define CTRL_RUN			BIT(0)
+ 
++#define CTRL1_RECOVER_ON_UNDERFLOW	BIT(24)
+ #define CTRL1_FIFO_CLEAR		BIT(21)
+ #define CTRL1_SET_BYTE_PACKAGING(x)	(((x) & 0xf) << 16)
+ #define CTRL1_GET_BYTE_PACKAGING(x)	(((x) >> 16) & 0xf)
+ #define CTRL1_CUR_FRAME_DONE_IRQ_EN	BIT(13)
+ #define CTRL1_CUR_FRAME_DONE_IRQ	BIT(9)
+ 
++#define CTRL2_SET_OUTSTANDING_REQS_1	0
++#define CTRL2_SET_OUTSTANDING_REQS_2	(0x1 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_4	(0x2 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_8	(0x3 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_16	(0x4 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_MASK	(0x7 << 21)
++
+ #define TRANSFER_COUNT_SET_VCOUNT(x)	(((x) & 0xffff) << 16)
+ #define TRANSFER_COUNT_GET_VCOUNT(x)	(((x) >> 16) & 0xffff)
+ #define TRANSFER_COUNT_SET_HCOUNT(x)	((x) & 0xffff)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
+index 125ed973feaad..a2a09c51eed7b 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.c
++++ b/drivers/gpu/drm/panfrost/panfrost_device.c
+@@ -54,7 +54,8 @@ static int panfrost_clk_init(struct panfrost_device *pfdev)
+ 	if (IS_ERR(pfdev->bus_clock)) {
+ 		dev_err(pfdev->dev, "get bus_clock failed %ld\n",
+ 			PTR_ERR(pfdev->bus_clock));
+-		return PTR_ERR(pfdev->bus_clock);
++		err = PTR_ERR(pfdev->bus_clock);
++		goto disable_clock;
+ 	}
+ 
+ 	if (pfdev->bus_clock) {
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index bfbff90588cbf..c22551c2facb1 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -556,8 +556,6 @@ static int rcar_du_remove(struct platform_device *pdev)
+ 
+ 	drm_kms_helper_poll_fini(ddev);
+ 
+-	drm_dev_put(ddev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hv/hv_snapshot.c b/drivers/hv/hv_snapshot.c
+index 2267bd4c34725..6018b9d1b1fbe 100644
+--- a/drivers/hv/hv_snapshot.c
++++ b/drivers/hv/hv_snapshot.c
+@@ -375,6 +375,7 @@ hv_vss_init(struct hv_util_service *srv)
+ 	}
+ 	recv_buffer = srv->recv_buffer;
+ 	vss_transaction.recv_channel = srv->channel;
++	vss_transaction.recv_channel->max_pkt_size = HV_HYP_PAGE_SIZE * 2;
+ 
+ 	/*
+ 	 * When this driver loads, the user level daemon that
+diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
+index d712c61c1f5e9..0241ed84b692f 100644
+--- a/drivers/hwmon/Makefile
++++ b/drivers/hwmon/Makefile
+@@ -45,7 +45,6 @@ obj-$(CONFIG_SENSORS_ADT7462)	+= adt7462.o
+ obj-$(CONFIG_SENSORS_ADT7470)	+= adt7470.o
+ obj-$(CONFIG_SENSORS_ADT7475)	+= adt7475.o
+ obj-$(CONFIG_SENSORS_AHT10)	+= aht10.o
+-obj-$(CONFIG_SENSORS_AMD_ENERGY) += amd_energy.o
+ obj-$(CONFIG_SENSORS_APPLESMC)	+= applesmc.o
+ obj-$(CONFIG_SENSORS_ARM_SCMI)	+= scmi-hwmon.o
+ obj-$(CONFIG_SENSORS_ARM_SCPI)	+= scpi-hwmon.o
+diff --git a/drivers/hwmon/pmbus/bpa-rs600.c b/drivers/hwmon/pmbus/bpa-rs600.c
+index 2be69fedfa361..be76efe67d83f 100644
+--- a/drivers/hwmon/pmbus/bpa-rs600.c
++++ b/drivers/hwmon/pmbus/bpa-rs600.c
+@@ -12,15 +12,6 @@
+ #include <linux/pmbus.h>
+ #include "pmbus.h"
+ 
+-#define BPARS600_MFR_VIN_MIN	0xa0
+-#define BPARS600_MFR_VIN_MAX	0xa1
+-#define BPARS600_MFR_IIN_MAX	0xa2
+-#define BPARS600_MFR_PIN_MAX	0xa3
+-#define BPARS600_MFR_VOUT_MIN	0xa4
+-#define BPARS600_MFR_VOUT_MAX	0xa5
+-#define BPARS600_MFR_IOUT_MAX	0xa6
+-#define BPARS600_MFR_POUT_MAX	0xa7
+-
+ static int bpa_rs600_read_byte_data(struct i2c_client *client, int page, int reg)
+ {
+ 	int ret;
+@@ -81,29 +72,13 @@ static int bpa_rs600_read_word_data(struct i2c_client *client, int page, int pha
+ 
+ 	switch (reg) {
+ 	case PMBUS_VIN_UV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VIN_MIN);
+-		break;
+ 	case PMBUS_VIN_OV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VIN_MAX);
+-		break;
+ 	case PMBUS_VOUT_UV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VOUT_MIN);
+-		break;
+ 	case PMBUS_VOUT_OV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VOUT_MAX);
+-		break;
+ 	case PMBUS_IIN_OC_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_IIN_MAX);
+-		break;
+ 	case PMBUS_IOUT_OC_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_IOUT_MAX);
+-		break;
+ 	case PMBUS_PIN_OP_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_PIN_MAX);
+-		break;
+ 	case PMBUS_POUT_OP_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_POUT_MAX);
+-		break;
+ 	case PMBUS_VIN_UV_FAULT_LIMIT:
+ 	case PMBUS_VIN_OV_FAULT_LIMIT:
+ 	case PMBUS_VOUT_UV_FAULT_LIMIT:
+diff --git a/drivers/i2c/busses/i2c-highlander.c b/drivers/i2c/busses/i2c-highlander.c
+index 803dad70e2a71..a2add128d0843 100644
+--- a/drivers/i2c/busses/i2c-highlander.c
++++ b/drivers/i2c/busses/i2c-highlander.c
+@@ -379,7 +379,7 @@ static int highlander_i2c_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dev);
+ 
+ 	dev->irq = platform_get_irq(pdev, 0);
+-	if (iic_force_poll)
++	if (dev->irq < 0 || iic_force_poll)
+ 		dev->irq = 0;
+ 
+ 	if (dev->irq) {
+diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
+index aa00ba8bcb70f..61ae58f570475 100644
+--- a/drivers/i2c/busses/i2c-hix5hd2.c
++++ b/drivers/i2c/busses/i2c-hix5hd2.c
+@@ -413,7 +413,7 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->regs);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+diff --git a/drivers/i2c/busses/i2c-iop3xx.c b/drivers/i2c/busses/i2c-iop3xx.c
+index cfecaf18ccbb7..4a6ff54d87fe8 100644
+--- a/drivers/i2c/busses/i2c-iop3xx.c
++++ b/drivers/i2c/busses/i2c-iop3xx.c
+@@ -469,16 +469,14 @@ iop3xx_i2c_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		ret = -ENXIO;
++		ret = irq;
+ 		goto unmap;
+ 	}
+ 	ret = request_irq(irq, iop3xx_i2c_irq_handler, 0,
+ 				pdev->name, adapter_data);
+ 
+-	if (ret) {
+-		ret = -EIO;
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ 	memcpy(new_adapter->name, pdev->name, strlen(pdev->name));
+ 	new_adapter->owner = THIS_MODULE;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 4ca716e091495..477480d1de6bd 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -1211,7 +1211,7 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(i2c->pdmabase);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	init_completion(&i2c->msg_complete);
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 4d82761e1585e..b49a1b170bb2f 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -1137,7 +1137,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
+ 	 */
+ 	if (!(i2c->quirks & QUIRK_POLL)) {
+ 		i2c->irq = ret = platform_get_irq(pdev, 0);
+-		if (ret <= 0) {
++		if (ret < 0) {
+ 			dev_err(&pdev->dev, "cannot find IRQ\n");
+ 			clk_unprepare(i2c->clk);
+ 			return ret;
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 31be1811d5e66..e4026c5416b15 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -578,7 +578,7 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ 
+ 	i2c->irq = platform_get_irq(pdev, 0);
+ 	if (i2c->irq < 0)
+-		return -ENODEV;
++		return i2c->irq;
+ 
+ 	ret = devm_request_irq(&pdev->dev, i2c->irq, synquacer_i2c_isr,
+ 			       0, dev_name(&pdev->dev), i2c);
+diff --git a/drivers/i2c/busses/i2c-xlp9xx.c b/drivers/i2c/busses/i2c-xlp9xx.c
+index f2241cedf5d3f..6d24dc3855229 100644
+--- a/drivers/i2c/busses/i2c-xlp9xx.c
++++ b/drivers/i2c/busses/i2c-xlp9xx.c
+@@ -517,7 +517,7 @@ static int xlp9xx_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->base);
+ 
+ 	priv->irq = platform_get_irq(pdev, 0);
+-	if (priv->irq <= 0)
++	if (priv->irq < 0)
+ 		return priv->irq;
+ 	/* SMBAlert irq */
+ 	priv->alert_data.irq = platform_get_irq(pdev, 1);
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 3f1c5a4f158bf..19713cdd7b789 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1024,7 +1024,7 @@ static void *mlx5_ib_alloc_xlt(size_t *nents, size_t ent_size, gfp_t gfp_mask)
+ 
+ 	if (size > MLX5_SPARE_UMR_CHUNK) {
+ 		size = MLX5_SPARE_UMR_CHUNK;
+-		*nents = get_order(size) / ent_size;
++		*nents = size / ent_size;
+ 		res = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN,
+ 					       get_order(size));
+ 		if (res)
+diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
+index b8c06bd8659e9..6fc145aacaf02 100644
+--- a/drivers/irqchip/irq-apple-aic.c
++++ b/drivers/irqchip/irq-apple-aic.c
+@@ -226,7 +226,7 @@ static void aic_irq_eoi(struct irq_data *d)
+ 	 * Reading the interrupt reason automatically acknowledges and masks
+ 	 * the IRQ, so we just unmask it here if needed.
+ 	 */
+-	if (!irqd_irq_disabled(d) && !irqd_irq_masked(d))
++	if (!irqd_irq_masked(d))
+ 		aic_irq_unmask(d);
+ }
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index e0f4debe64e13..3e61210da04be 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -100,6 +100,27 @@ EXPORT_SYMBOL(gic_pmr_sync);
+ DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
+ EXPORT_SYMBOL(gic_nonsecure_priorities);
+ 
++/*
++ * When the Non-secure world has access to group 0 interrupts (as a
++ * consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
++ * return the Distributor's view of the interrupt priority.
++ *
++ * When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
++ * written by software is moved to the Non-secure range by the Distributor.
++ *
++ * If both are true (which is when gic_nonsecure_priorities gets enabled),
++ * we need to shift down the priority programmed by software to match it
++ * against the value returned by ICC_RPR_EL1.
++ */
++#define GICD_INT_RPR_PRI(priority)					\
++	({								\
++		u32 __priority = (priority);				\
++		if (static_branch_unlikely(&gic_nonsecure_priorities))	\
++			__priority = 0x80 | (__priority >> 1);		\
++									\
++		__priority;						\
++	})
++
+ /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
+ static refcount_t *ppi_nmi_refs;
+ 
+@@ -687,7 +708,7 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 		return;
+ 
+ 	if (gic_supports_nmi() &&
+-	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
++	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
+ 		gic_handle_nmi(irqnr, regs);
+ 		return;
+ 	}
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index f790ca6d78aa4..a4eb8a2181c7f 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -92,18 +92,22 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	case IRQ_TYPE_EDGE_RISING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -113,11 +117,24 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	return ret;
+ }
+ 
++static void pch_pic_ack_irq(struct irq_data *d)
++{
++	unsigned int reg;
++	struct pch_pic *priv = irq_data_get_irq_chip_data(d);
++
++	reg = readl(priv->base + PCH_PIC_EDGE + PIC_REG_IDX(d->hwirq) * 4);
++	if (reg & BIT(PIC_REG_BIT(d->hwirq))) {
++		writel(BIT(PIC_REG_BIT(d->hwirq)),
++			priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4);
++	}
++	irq_chip_ack_parent(d);
++}
++
+ static struct irq_chip pch_pic_irq_chip = {
+ 	.name			= "PCH PIC",
+ 	.irq_mask		= pch_pic_mask_irq,
+ 	.irq_unmask		= pch_pic_unmask_irq,
+-	.irq_ack		= irq_chip_ack_parent,
++	.irq_ack		= pch_pic_ack_irq,
+ 	.irq_set_affinity	= irq_chip_set_affinity_parent,
+ 	.irq_set_type		= pch_pic_set_type,
+ };
+diff --git a/drivers/leds/blink/leds-lgm-sso.c b/drivers/leds/blink/leds-lgm-sso.c
+index 7eb2f44f16be5..aa14f0ebe7a02 100644
+--- a/drivers/leds/blink/leds-lgm-sso.c
++++ b/drivers/leds/blink/leds-lgm-sso.c
+@@ -631,8 +631,10 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 
+ 	fwnode_for_each_child_node(fw_ssoled, fwnode_child) {
+ 		led = devm_kzalloc(dev, sizeof(*led), GFP_KERNEL);
+-		if (!led)
+-			return -ENOMEM;
++		if (!led) {
++			ret = -ENOMEM;
++			goto __dt_err;
++		}
+ 
+ 		INIT_LIST_HEAD(&led->list);
+ 		led->priv = priv;
+@@ -642,7 +644,7 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 							      fwnode_child,
+ 							      GPIOD_ASIS, NULL);
+ 		if (IS_ERR(led->gpiod)) {
+-			dev_err(dev, "led: get gpio fail!\n");
++			ret = dev_err_probe(dev, PTR_ERR(led->gpiod), "led: get gpio fail!\n");
+ 			goto __dt_err;
+ 		}
+ 
+@@ -662,8 +664,11 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 			desc->panic_indicator = 1;
+ 
+ 		ret = fwnode_property_read_u32(fwnode_child, "reg", &prop);
+-		if (ret != 0 || prop >= SSO_LED_MAX_NUM) {
++		if (ret)
++			goto __dt_err;
++		if (prop >= SSO_LED_MAX_NUM) {
+ 			dev_err(dev, "invalid LED pin:%u\n", prop);
++			ret = -EINVAL;
+ 			goto __dt_err;
+ 		}
+ 		desc->pin = prop;
+@@ -699,21 +704,22 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 				desc->brightness = LED_FULL;
+ 		}
+ 
+-		if (sso_create_led(priv, led, fwnode_child))
++		ret = sso_create_led(priv, led, fwnode_child);
++		if (ret)
+ 			goto __dt_err;
+ 	}
+-	fwnode_handle_put(fw_ssoled);
+ 
+ 	return 0;
++
+ __dt_err:
+-	fwnode_handle_put(fw_ssoled);
++	fwnode_handle_put(fwnode_child);
+ 	/* unregister leds */
+ 	list_for_each(p, &priv->led_list) {
+ 		led = list_entry(p, struct sso_led, list);
+ 		sso_led_shutdown(led);
+ 	}
+ 
+-	return -EINVAL;
++	return ret;
+ }
+ 
+ static int sso_led_dt_parse(struct sso_led_priv *priv)
+@@ -731,6 +737,7 @@ static int sso_led_dt_parse(struct sso_led_priv *priv)
+ 	fw_ssoled = fwnode_get_named_child_node(fwnode, "ssoled");
+ 	if (fw_ssoled) {
+ 		ret = __sso_led_dt_parse(priv, fw_ssoled);
++		fwnode_handle_put(fw_ssoled);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/drivers/leds/flash/leds-rt8515.c b/drivers/leds/flash/leds-rt8515.c
+index 590bfa180d104..44904fdee3cc0 100644
+--- a/drivers/leds/flash/leds-rt8515.c
++++ b/drivers/leds/flash/leds-rt8515.c
+@@ -343,8 +343,9 @@ static int rt8515_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_led_classdev_flash_register_ext(dev, fled, &init_data);
+ 	if (ret) {
+-		dev_err(dev, "can't register LED %s\n", led->name);
++		fwnode_handle_put(child);
+ 		mutex_destroy(&rt->lock);
++		dev_err(dev, "can't register LED %s\n", led->name);
+ 		return ret;
+ 	}
+ 
+@@ -362,6 +363,7 @@ static int rt8515_probe(struct platform_device *pdev)
+ 		 */
+ 	}
+ 
++	fwnode_handle_put(child);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/leds/leds-is31fl32xx.c b/drivers/leds/leds-is31fl32xx.c
+index 3b55af9a8c585..22c092a4394ab 100644
+--- a/drivers/leds/leds-is31fl32xx.c
++++ b/drivers/leds/leds-is31fl32xx.c
+@@ -386,6 +386,7 @@ static int is31fl32xx_parse_dt(struct device *dev,
+ 			dev_err(dev,
+ 				"Node %pOF 'reg' conflicts with another LED\n",
+ 				child);
++			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/leds/leds-lt3593.c b/drivers/leds/leds-lt3593.c
+index 3bb52d3165d90..d0160fde0f94c 100644
+--- a/drivers/leds/leds-lt3593.c
++++ b/drivers/leds/leds-lt3593.c
+@@ -97,10 +97,9 @@ static int lt3593_led_probe(struct platform_device *pdev)
+ 	init_data.default_label = ":";
+ 
+ 	ret = devm_led_classdev_register_ext(dev, &led_data->cdev, &init_data);
+-	if (ret < 0) {
+-		fwnode_handle_put(child);
++	fwnode_handle_put(child);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	platform_set_drvdata(pdev, led_data);
+ 
+diff --git a/drivers/leds/trigger/ledtrig-audio.c b/drivers/leds/trigger/ledtrig-audio.c
+index f76621e88482d..c6b437e6369b8 100644
+--- a/drivers/leds/trigger/ledtrig-audio.c
++++ b/drivers/leds/trigger/ledtrig-audio.c
+@@ -6,10 +6,33 @@
+ #include <linux/kernel.h>
+ #include <linux/leds.h>
+ #include <linux/module.h>
++#include "../leds.h"
+ 
+-static struct led_trigger *ledtrig_audio[NUM_AUDIO_LEDS];
+ static enum led_brightness audio_state[NUM_AUDIO_LEDS];
+ 
++static int ledtrig_audio_mute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MUTE]);
++	return 0;
++}
++
++static int ledtrig_audio_micmute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MICMUTE]);
++	return 0;
++}
++
++static struct led_trigger ledtrig_audio[NUM_AUDIO_LEDS] = {
++	[LED_AUDIO_MUTE] = {
++		.name     = "audio-mute",
++		.activate = ledtrig_audio_mute_activate,
++	},
++	[LED_AUDIO_MICMUTE] = {
++		.name     = "audio-micmute",
++		.activate = ledtrig_audio_micmute_activate,
++	},
++};
++
+ enum led_brightness ledtrig_audio_get(enum led_audio type)
+ {
+ 	return audio_state[type];
+@@ -19,24 +42,22 @@ EXPORT_SYMBOL_GPL(ledtrig_audio_get);
+ void ledtrig_audio_set(enum led_audio type, enum led_brightness state)
+ {
+ 	audio_state[type] = state;
+-	led_trigger_event(ledtrig_audio[type], state);
++	led_trigger_event(&ledtrig_audio[type], state);
+ }
+ EXPORT_SYMBOL_GPL(ledtrig_audio_set);
+ 
+ static int __init ledtrig_audio_init(void)
+ {
+-	led_trigger_register_simple("audio-mute",
+-				    &ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_register_simple("audio-micmute",
+-				    &ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ 	return 0;
+ }
+ module_init(ledtrig_audio_init);
+ 
+ static void __exit ledtrig_audio_exit(void)
+ {
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ }
+ module_exit(ledtrig_audio_exit);
+ 
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 185246a0d8551..d0f08e946453c 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -931,20 +931,20 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 	n = BITS_TO_LONGS(d->nr_stripes) * sizeof(unsigned long);
+ 	d->full_dirty_stripes = kvzalloc(n, GFP_KERNEL);
+ 	if (!d->full_dirty_stripes)
+-		return -ENOMEM;
++		goto out_free_stripe_sectors_dirty;
+ 
+ 	idx = ida_simple_get(&bcache_device_idx, 0,
+ 				BCACHE_DEVICE_IDX_MAX, GFP_KERNEL);
+ 	if (idx < 0)
+-		return idx;
++		goto out_free_full_dirty_stripes;
+ 
+ 	if (bioset_init(&d->bio_split, 4, offsetof(struct bbio, bio),
+ 			BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER))
+-		goto err;
++		goto out_ida_remove;
+ 
+ 	d->disk = blk_alloc_disk(NUMA_NO_NODE);
+ 	if (!d->disk)
+-		goto err;
++		goto out_bioset_exit;
+ 
+ 	set_capacity(d->disk, sectors);
+ 	snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx);
+@@ -987,8 +987,14 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 
+ 	return 0;
+ 
+-err:
++out_bioset_exit:
++	bioset_exit(&d->bio_split);
++out_ida_remove:
+ 	ida_simple_remove(&bcache_device_idx, idx);
++out_free_full_dirty_stripes:
++	kvfree(d->full_dirty_stripes);
++out_free_stripe_sectors_dirty:
++	kvfree(d->stripe_sectors_dirty);
+ 	return -ENOMEM;
+ 
+ }
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 3c44c4bb40fc5..19598bd38939d 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1329,6 +1329,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	struct raid1_plug_cb *plug = NULL;
+ 	int first_clone;
+ 	int max_sectors;
++	bool write_behind = false;
+ 
+ 	if (mddev_is_clustered(mddev) &&
+ 	     md_cluster_ops->area_resyncing(mddev, WRITE,
+@@ -1381,6 +1382,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	max_sectors = r1_bio->sectors;
+ 	for (i = 0;  i < disks; i++) {
+ 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
++
++		/*
++		 * The write-behind io is only attempted on drives marked as
++		 * write-mostly, which means we could allocate write behind
++		 * bio later.
++		 */
++		if (rdev && test_bit(WriteMostly, &rdev->flags))
++			write_behind = true;
++
+ 		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+ 			atomic_inc(&rdev->nr_pending);
+ 			blocked_rdev = rdev;
+@@ -1454,6 +1464,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 		goto retry_write;
+ 	}
+ 
++	/*
++	 * When using a bitmap, we may call alloc_behind_master_bio below.
++	 * alloc_behind_master_bio allocates a copy of the data payload a page
++	 * at a time and thus needs a new bio that can fit the whole payload
++	 * of this bio in page-sized chunks.
++	 */
++	if (write_behind && bitmap)
++		max_sectors = min_t(int, max_sectors,
++				    BIO_MAX_VECS * (PAGE_SIZE >> 9));
+ 	if (max_sectors < bio_sectors(bio)) {
+ 		struct bio *split = bio_split(bio, max_sectors,
+ 					      GFP_NOIO, &conf->bio_split);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 07119d7e0fdf9..aa2636582841e 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1712,6 +1712,11 @@ retry_discard:
+ 	} else
+ 		r10_bio->master_bio = (struct bio *)first_r10bio;
+ 
++	/*
++	 * First select target devices under the rcu lock and
++	 * increment the refcount on their rdev. Record them by
++	 * setting bios[x] to bio.
++	 */
+ 	rcu_read_lock();
+ 	for (disk = 0; disk < geo->raid_disks; disk++) {
+ 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+@@ -1743,9 +1748,6 @@ retry_discard:
+ 	for (disk = 0; disk < geo->raid_disks; disk++) {
+ 		sector_t dev_start, dev_end;
+ 		struct bio *mbio, *rbio = NULL;
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[disk].replacement);
+ 
+ 		/*
+ 		 * Now start to calculate the start and end address for each disk.
+@@ -1775,9 +1777,12 @@ retry_discard:
+ 
+ 		/*
+ 		 * It only handles discard bio which size is >= stripe size, so
+-		 * dev_end > dev_start all the time
++		 * dev_end > dev_start all the time.
++		 * We don't need the rcu lock to get rdev here; we already
++		 * incremented rdev->nr_pending in the first loop.
+ 		 */
+ 		if (r10_bio->devs[disk].bio) {
++			struct md_rdev *rdev = conf->mirrors[disk].rdev;
+ 			mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+ 			mbio->bi_end_io = raid10_end_discard_request;
+ 			mbio->bi_private = r10_bio;
+@@ -1790,6 +1795,7 @@ retry_discard:
+ 			bio_endio(mbio);
+ 		}
+ 		if (r10_bio->devs[disk].repl_bio) {
++			struct md_rdev *rrdev = conf->mirrors[disk].replacement;
+ 			rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+ 			rbio->bi_end_io = raid10_end_discard_request;
+ 			rbio->bi_private = r10_bio;
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 91e6db847bb5a..3a191e257fad0 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -2233,6 +2233,7 @@ static int tda1997x_core_init(struct v4l2_subdev *sd)
+ 	/* get initial HDMI status */
+ 	state->hdmi_status = io_read(sd, REG_HDMI_FLAGS);
+ 
++	io_write(sd, REG_EDID_ENABLE, EDID_ENABLE_A_EN | EDID_ENABLE_B_EN);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/atmel/atmel-sama5d2-isc.c b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+index 925aa80a139b2..b66f1d174e9d7 100644
+--- a/drivers/media/platform/atmel/atmel-sama5d2-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+@@ -255,6 +255,23 @@ static void isc_sama5d2_config_rlp(struct isc_device *isc)
+ 	struct regmap *regmap = isc->regmap;
+ 	u32 rlp_mode = isc->config.rlp_cfg_mode;
+ 
++	/*
++	 * In sama5d2, the YUV planar modes and the YUYV modes are treated
++	 * in the same way in the RLP register.
++	 * Normally, YYCC mode should be Luma(n) - Color B(n) - Color R (n)
++	 * and YCYC should be Luma(n + 1) - Color B (n) - Luma (n) - Color R (n)
++	 * but in sama5d2, the YCYC mode does not exist, and YYCC must be
++	 * selected for both planar and interleaved modes, as in fact
++	 * both modes are supported.
++	 *
++	 * Thus, if the YCYC mode is selected, replace it with the
++	 * sama5d2-compliant mode which is YYCC.
++	 */
++	if ((rlp_mode & ISC_RLP_CFG_MODE_YCYC) == ISC_RLP_CFG_MODE_YCYC) {
++		rlp_mode &= ~ISC_RLP_CFG_MODE_MASK;
++		rlp_mode |= ISC_RLP_CFG_MODE_YYCC;
++	}
++
+ 	regmap_update_bits(regmap, ISC_RLP_CFG + isc->offsets.rlp,
+ 			   ISC_RLP_CFG_MODE_MASK, rlp_mode);
+ }
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 2f42808c43a4b..c484c008ab027 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -2053,17 +2053,25 @@ static int __coda_start_decoding(struct coda_ctx *ctx)
+ 	u32 src_fourcc, dst_fourcc;
+ 	int ret;
+ 
++	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
++	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++	src_fourcc = q_data_src->fourcc;
++	dst_fourcc = q_data_dst->fourcc;
++
+ 	if (!ctx->initialized) {
+ 		ret = __coda_decoder_seq_init(ctx);
+ 		if (ret < 0)
+ 			return ret;
++	} else {
++		ctx->frame_mem_ctrl &= ~(CODA_FRAME_CHROMA_INTERLEAVE | (0x3 << 9) |
++					 CODA9_FRAME_TILED2LINEAR);
++		if (dst_fourcc == V4L2_PIX_FMT_NV12 || dst_fourcc == V4L2_PIX_FMT_YUYV)
++			ctx->frame_mem_ctrl |= CODA_FRAME_CHROMA_INTERLEAVE;
++		if (ctx->tiled_map_type == GDI_TILED_FRAME_MB_RASTER_MAP)
++			ctx->frame_mem_ctrl |= (0x3 << 9) |
++				((ctx->use_vdoa) ? 0 : CODA9_FRAME_TILED2LINEAR);
+ 	}
+ 
+-	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+-	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+-	src_fourcc = q_data_src->fourcc;
+-	dst_fourcc = q_data_dst->fourcc;
+-
+ 	coda_write(dev, ctx->parabuf.paddr, CODA_REG_BIT_PARA_BUF_ADDR);
+ 
+ 	ret = coda_alloc_framebuffers(ctx, q_data_dst, src_fourcc);
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 53025c8c75312..20f59c59ff8a2 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -2037,8 +2037,10 @@ static int isp_subdev_notifier_complete(struct v4l2_async_notifier *async)
+ 	mutex_lock(&isp->media_dev.graph_mutex);
+ 
+ 	ret = media_entity_enum_init(&isp->crashed, &isp->media_dev);
+-	if (ret)
++	if (ret) {
++		mutex_unlock(&isp->media_dev.graph_mutex);
+ 		return ret;
++	}
+ 
+ 	list_for_each_entry(sd, &v4l2_dev->subdevs, list) {
+ 		if (sd->notifier != &isp->notifier)
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 1fe6d463dc993..8012f5c7bf344 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -1137,6 +1137,9 @@ int venus_helper_set_format_constraints(struct venus_inst *inst)
+ 	if (!IS_V6(inst->core))
+ 		return 0;
+ 
++	if (inst->opb_fmt == HFI_COLOR_FORMAT_NV12_UBWC)
++		return 0;
++
+ 	pconstraint.buffer_type = HFI_BUFFER_OUTPUT2;
+ 	pconstraint.num_planes = 2;
+ 	pconstraint.plane_format[0].stride_multiples = 128;
+diff --git a/drivers/media/platform/qcom/venus/hfi_msgs.c b/drivers/media/platform/qcom/venus/hfi_msgs.c
+index d9fde66f6fa8c..9a2bdb002edcc 100644
+--- a/drivers/media/platform/qcom/venus/hfi_msgs.c
++++ b/drivers/media/platform/qcom/venus/hfi_msgs.c
+@@ -261,7 +261,7 @@ sys_get_prop_image_version(struct device *dev,
+ 
+ 	smem_tbl_ptr = qcom_smem_get(QCOM_SMEM_HOST_ANY,
+ 		SMEM_IMG_VER_TBL, &smem_blk_sz);
+-	if (smem_tbl_ptr && smem_blk_sz >= SMEM_IMG_OFFSET_VENUS + VER_STR_SZ)
++	if (!IS_ERR(smem_tbl_ptr) && smem_blk_sz >= SMEM_IMG_OFFSET_VENUS + VER_STR_SZ)
+ 		memcpy(smem_tbl_ptr + SMEM_IMG_OFFSET_VENUS,
+ 		       img_ver, VER_STR_SZ);
+ }
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index 8dd49d4f124cb..1d62e38065d62 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -183,6 +183,8 @@ venc_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
+ 		else
+ 			return NULL;
+ 		fmt = find_format(inst, pixmp->pixelformat, f->type);
++		if (!fmt)
++			return NULL;
+ 	}
+ 
+ 	pixmp->width = clamp(pixmp->width, frame_width_min(inst),
+diff --git a/drivers/media/platform/rcar-vin/rcar-v4l2.c b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+index cca15a10c0b34..0d141155f0e3e 100644
+--- a/drivers/media/platform/rcar-vin/rcar-v4l2.c
++++ b/drivers/media/platform/rcar-vin/rcar-v4l2.c
+@@ -253,8 +253,8 @@ static int rvin_try_format(struct rvin_dev *vin, u32 which,
+ 	int ret;
+ 
+ 	sd_state = v4l2_subdev_alloc_state(sd);
+-	if (sd_state == NULL)
+-		return -ENOMEM;
++	if (IS_ERR(sd_state))
++		return PTR_ERR(sd_state);
+ 
+ 	if (!rvin_format_from_pixel(vin, pix->pixelformat))
+ 		pix->pixelformat = RVIN_DEFAULT_FORMAT;
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index bf3fd71ec3aff..6759091b15e09 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -863,12 +863,12 @@ static int rga_probe(struct platform_device *pdev)
+ 	if (IS_ERR(rga->m2m_dev)) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to init mem2mem device\n");
+ 		ret = PTR_ERR(rga->m2m_dev);
+-		goto unreg_video_dev;
++		goto rel_vdev;
+ 	}
+ 
+ 	ret = pm_runtime_resume_and_get(rga->dev);
+ 	if (ret < 0)
+-		goto unreg_video_dev;
++		goto rel_vdev;
+ 
+ 	rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ 	rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -882,11 +882,23 @@ static int rga_probe(struct platform_device *pdev)
+ 	rga->cmdbuf_virt = dma_alloc_attrs(rga->dev, RGA_CMDBUF_SIZE,
+ 					   &rga->cmdbuf_phy, GFP_KERNEL,
+ 					   DMA_ATTR_WRITE_COMBINE);
++	if (!rga->cmdbuf_virt) {
++		ret = -ENOMEM;
++		goto rel_vdev;
++	}
+ 
+ 	rga->src_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (!rga->src_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_dma;
++	}
+ 	rga->dst_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (!rga->dst_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_src_pages;
++	}
+ 
+ 	def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
+ 	def_frame.size = def_frame.stride * def_frame.height;
+@@ -894,7 +906,7 @@ static int rga_probe(struct platform_device *pdev)
+ 	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ 	if (ret) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to register video device\n");
+-		goto rel_vdev;
++		goto free_dst_pages;
+ 	}
+ 
+ 	v4l2_info(&rga->v4l2_dev, "Registered %s as /dev/%s\n",
+@@ -902,10 +914,15 @@ static int rga_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++free_dst_pages:
++	free_pages((unsigned long)rga->dst_mmu_pages, 3);
++free_src_pages:
++	free_pages((unsigned long)rga->src_mmu_pages, 3);
++free_dma:
++	dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
++		       rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
+ rel_vdev:
+ 	video_device_release(vfd);
+-unreg_video_dev:
+-	video_unregister_device(rga->vfd);
+ unreg_v4l2_dev:
+ 	v4l2_device_unregister(&rga->v4l2_dev);
+ err_put_clk:
+diff --git a/drivers/media/platform/vsp1/vsp1_entity.c b/drivers/media/platform/vsp1/vsp1_entity.c
+index 6f51e5c755432..823c15facd1b4 100644
+--- a/drivers/media/platform/vsp1/vsp1_entity.c
++++ b/drivers/media/platform/vsp1/vsp1_entity.c
+@@ -676,9 +676,9 @@ int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity,
+ 	 * rectangles.
+ 	 */
+ 	entity->config = v4l2_subdev_alloc_state(&entity->subdev);
+-	if (entity->config == NULL) {
++	if (IS_ERR(entity->config)) {
+ 		media_entity_cleanup(&entity->subdev.entity);
+-		return -ENOMEM;
++		return PTR_ERR(entity->config);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/spi/cxd2880-spi.c b/drivers/media/spi/cxd2880-spi.c
+index e5094fff04c5a..b91a1e845b972 100644
+--- a/drivers/media/spi/cxd2880-spi.c
++++ b/drivers/media/spi/cxd2880-spi.c
+@@ -524,13 +524,13 @@ cxd2880_spi_probe(struct spi_device *spi)
+ 	if (IS_ERR(dvb_spi->vcc_supply)) {
+ 		if (PTR_ERR(dvb_spi->vcc_supply) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+-			goto fail_adapter;
++			goto fail_regulator;
+ 		}
+ 		dvb_spi->vcc_supply = NULL;
+ 	} else {
+ 		ret = regulator_enable(dvb_spi->vcc_supply);
+ 		if (ret)
+-			goto fail_adapter;
++			goto fail_regulator;
+ 	}
+ 
+ 	dvb_spi->spi = spi;
+@@ -618,6 +618,9 @@ fail_frontend:
+ fail_attach:
+ 	dvb_unregister_adapter(&dvb_spi->adapter);
+ fail_adapter:
++	if (dvb_spi->vcc_supply)
++		regulator_disable(dvb_spi->vcc_supply);
++fail_regulator:
+ 	kfree(dvb_spi);
+ 	return ret;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+index 2e07106f46803..bc4b2abdde1a4 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+@@ -17,7 +17,8 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	if (d->props.i2c_algo == NULL) {
+ 		err("no i2c algorithm specified");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	strscpy(d->i2c_adap.name, d->desc->name, sizeof(d->i2c_adap.name));
+@@ -27,11 +28,15 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	i2c_set_adapdata(&d->i2c_adap, d);
+ 
+-	if ((ret = i2c_add_adapter(&d->i2c_adap)) < 0)
++	ret = i2c_add_adapter(&d->i2c_adap);
++	if (ret < 0) {
+ 		err("could not add i2c adapter");
++		goto err;
++	}
+ 
+ 	d->state |= DVB_USB_STATE_I2C;
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 28e1fd64dd3c2..61439c8f33cab 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -194,8 +194,8 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
+ 
+ err_adapter_init:
+ 	dvb_usb_adapter_exit(d);
+-err_i2c_init:
+ 	dvb_usb_i2c_exit(d);
++err_i2c_init:
+ 	if (d->priv && d->props.priv_destroy)
+ 		d->props.priv_destroy(d);
+ err_priv_init:
+diff --git a/drivers/media/usb/dvb-usb/nova-t-usb2.c b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+index e7b290552b663..9c0eb0d40822e 100644
+--- a/drivers/media/usb/dvb-usb/nova-t-usb2.c
++++ b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+@@ -130,7 +130,7 @@ ret:
+ 
+ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ {
+-	int i;
++	int i, ret;
+ 	u8 b;
+ 
+ 	mac[0] = 0x00;
+@@ -139,7 +139,9 @@ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ 
+ 	/* this is a complete guess, but works for my box */
+ 	for (i = 136; i < 139; i++) {
+-		dibusb_read_eeprom_byte(d,i, &b);
++		ret = dibusb_read_eeprom_byte(d, i, &b);
++		if (ret)
++			return ret;
+ 
+ 		mac[5 - (i - 136)] = b;
+ 	}
+diff --git a/drivers/media/usb/dvb-usb/vp702x.c b/drivers/media/usb/dvb-usb/vp702x.c
+index bf54747e2e01a..a1d9e4801a2ba 100644
+--- a/drivers/media/usb/dvb-usb/vp702x.c
++++ b/drivers/media/usb/dvb-usb/vp702x.c
+@@ -291,16 +291,22 @@ static int vp702x_rc_query(struct dvb_usb_device *d, u32 *event, int *state)
+ static int vp702x_read_mac_addr(struct dvb_usb_device *d,u8 mac[6])
+ {
+ 	u8 i, *buf;
++	int ret;
+ 	struct vp702x_device_state *st = d->priv;
+ 
+ 	mutex_lock(&st->buf_mutex);
+ 	buf = st->buf;
+-	for (i = 6; i < 12; i++)
+-		vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1, &buf[i - 6], 1);
++	for (i = 6; i < 12; i++) {
++		ret = vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1,
++				       &buf[i - 6], 1);
++		if (ret < 0)
++			goto err;
++	}
+ 
+ 	memcpy(mac, buf, 6);
++err:
+ 	mutex_unlock(&st->buf_mutex);
+-	return 0;
++	return ret;
+ }
+ 
+ static int vp702x_frontend_attach(struct dvb_usb_adapter *adap)
+diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
+index 59529cbf9cd0b..0b6d77c3bec86 100644
+--- a/drivers/media/usb/em28xx/em28xx-input.c
++++ b/drivers/media/usb/em28xx/em28xx-input.c
+@@ -842,7 +842,6 @@ error:
+ 	kfree(ir);
+ ref_put:
+ 	em28xx_shutdown_buttons(dev);
+-	kref_put(&dev->ref, em28xx_free_device);
+ 	return err;
+ }
+ 
+diff --git a/drivers/media/usb/go7007/go7007-driver.c b/drivers/media/usb/go7007/go7007-driver.c
+index f1767be9d8685..6650eab913d81 100644
+--- a/drivers/media/usb/go7007/go7007-driver.c
++++ b/drivers/media/usb/go7007/go7007-driver.c
+@@ -691,49 +691,23 @@ struct go7007 *go7007_alloc(const struct go7007_board_info *board,
+ 						struct device *dev)
+ {
+ 	struct go7007 *go;
+-	int i;
+ 
+ 	go = kzalloc(sizeof(struct go7007), GFP_KERNEL);
+ 	if (go == NULL)
+ 		return NULL;
+ 	go->dev = dev;
+ 	go->board_info = board;
+-	go->board_id = 0;
+ 	go->tuner_type = -1;
+-	go->channel_number = 0;
+-	go->name[0] = 0;
+ 	mutex_init(&go->hw_lock);
+ 	init_waitqueue_head(&go->frame_waitq);
+ 	spin_lock_init(&go->spinlock);
+ 	go->status = STATUS_INIT;
+-	memset(&go->i2c_adapter, 0, sizeof(go->i2c_adapter));
+-	go->i2c_adapter_online = 0;
+-	go->interrupt_available = 0;
+ 	init_waitqueue_head(&go->interrupt_waitq);
+-	go->input = 0;
+ 	go7007_update_board(go);
+-	go->encoder_h_halve = 0;
+-	go->encoder_v_halve = 0;
+-	go->encoder_subsample = 0;
+ 	go->format = V4L2_PIX_FMT_MJPEG;
+ 	go->bitrate = 1500000;
+ 	go->fps_scale = 1;
+-	go->pali = 0;
+ 	go->aspect_ratio = GO7007_RATIO_1_1;
+-	go->gop_size = 0;
+-	go->ipb = 0;
+-	go->closed_gop = 0;
+-	go->repeat_seqhead = 0;
+-	go->seq_header_enable = 0;
+-	go->gop_header_enable = 0;
+-	go->dvd_mode = 0;
+-	go->interlace_coding = 0;
+-	for (i = 0; i < 4; ++i)
+-		go->modet[i].enable = 0;
+-	for (i = 0; i < 1624; ++i)
+-		go->modet_map[i] = 0;
+-	go->audio_deliver = NULL;
+-	go->audio_enabled = 0;
+ 
+ 	return go;
+ }
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index dbf0455d5d50d..eeb85981e02b6 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1134,7 +1134,7 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ 
+ 	ep = usb->usbdev->ep_in[4];
+ 	if (!ep)
+-		return -ENODEV;
++		goto allocfail;
+ 
+ 	/* Allocate the URB and buffer for receiving incoming interrupts */
+ 	usb->intr_urb = usb_alloc_urb(0, GFP_KERNEL);
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index 9dda87c6b54a9..016cb0b150fc7 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -82,7 +82,7 @@ static struct crashpoint crashpoints[] = {
+ 	CRASHPOINT("FS_DEVRW",		 "ll_rw_block"),
+ 	CRASHPOINT("MEM_SWAPOUT",	 "shrink_inactive_list"),
+ 	CRASHPOINT("TIMERADD",		 "hrtimer_start"),
+-	CRASHPOINT("SCSI_DISPATCH_CMD",	 "scsi_dispatch_cmd"),
++	CRASHPOINT("SCSI_QUEUE_RQ",	 "scsi_queue_rq"),
+ 	CRASHPOINT("IDE_CORE_CP",	 "generic_ide_ioctl"),
+ #endif
+ };
+diff --git a/drivers/misc/pvpanic/pvpanic.c b/drivers/misc/pvpanic/pvpanic.c
+index 02b807c788c9f..bb7aa63685388 100644
+--- a/drivers/misc/pvpanic/pvpanic.c
++++ b/drivers/misc/pvpanic/pvpanic.c
+@@ -85,6 +85,8 @@ int devm_pvpanic_probe(struct device *dev, struct pvpanic_instance *pi)
+ 	list_add(&pi->list, &pvpanic_list);
+ 	spin_unlock(&pvpanic_lock);
+ 
++	dev_set_drvdata(dev, pi);
++
+ 	return devm_add_action_or_reset(dev, pvpanic_remove, pi);
+ }
+ EXPORT_SYMBOL_GPL(devm_pvpanic_probe);
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index c3229d8c7041c..33cb70aa02aa8 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -782,6 +782,7 @@ static int dw_mci_edmac_start_dma(struct dw_mci *host,
+ 	int ret = 0;
+ 
+ 	/* Set external dma config: burst size, burst width */
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.dst_addr = host->phy_regs + fifo_offset;
+ 	cfg.src_addr = cfg.dst_addr;
+ 	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index bde2988875797..6c9d38132f74c 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -628,6 +628,7 @@ static int moxart_probe(struct platform_device *pdev)
+ 			 host->dma_chan_tx, host->dma_chan_rx);
+ 		host->have_dma = true;
+ 
++		memset(&cfg, 0, sizeof(cfg));
+ 		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index aba6e10b86054..fff6c39a343e9 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1222,6 +1222,7 @@ static int sdhci_external_dma_setup(struct sdhci_host *host,
+ 	if (!host->mapbase)
+ 		return -EINVAL;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.dst_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index b23e3488695ba..bd1417a66cbf2 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -2016,15 +2016,6 @@ int b53_br_flags(struct dsa_switch *ds, int port,
+ }
+ EXPORT_SYMBOL(b53_br_flags);
+ 
+-int b53_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-		    struct netlink_ext_ack *extack)
+-{
+-	b53_port_set_mcast_flood(ds->priv, port, mrouter);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(b53_set_mrouter);
+-
+ static bool b53_possible_cpu_port(struct dsa_switch *ds, int port)
+ {
+ 	/* Broadcom switches will accept enabling Broadcom tags on the
+@@ -2268,7 +2259,6 @@ static const struct dsa_switch_ops b53_switch_ops = {
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+ 	.port_bridge_flags	= b53_br_flags,
+-	.port_set_mrouter	= b53_set_mrouter,
+ 	.port_stp_state_set	= b53_br_set_stp_state,
+ 	.port_fast_age		= b53_br_fast_age,
+ 	.port_vlan_filtering	= b53_vlan_filtering,
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 82700a5714c10..9bf8319342b0b 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -328,8 +328,6 @@ int b53_br_flags_pre(struct dsa_switch *ds, int port,
+ int b53_br_flags(struct dsa_switch *ds, int port,
+ 		 struct switchdev_brport_flags flags,
+ 		 struct netlink_ext_ack *extack);
+-int b53_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-		    struct netlink_ext_ack *extack);
+ int b53_setup_devlink_resources(struct dsa_switch *ds);
+ void b53_port_event(struct dsa_switch *ds, int port);
+ void b53_phylink_validate(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 3b018fcf44124..6ce9ec1283e05 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1199,7 +1199,6 @@ static const struct dsa_switch_ops bcm_sf2_ops = {
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+ 	.port_bridge_flags	= b53_br_flags,
+ 	.port_stp_state_set	= b53_br_set_stp_state,
+-	.port_set_mrouter	= b53_set_mrouter,
+ 	.port_fast_age		= b53_br_fast_age,
+ 	.port_vlan_filtering	= b53_vlan_filtering,
+ 	.port_vlan_add		= b53_vlan_add,
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 05bc46634b369..0cea1572f8260 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1185,18 +1185,6 @@ mt7530_port_bridge_flags(struct dsa_switch *ds, int port,
+ 	return 0;
+ }
+ 
+-static int
+-mt7530_port_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-			struct netlink_ext_ack *extack)
+-{
+-	struct mt7530_priv *priv = ds->priv;
+-
+-	mt7530_rmw(priv, MT7530_MFC, UNM_FFP(BIT(port)),
+-		   mrouter ? UNM_FFP(BIT(port)) : 0);
+-
+-	return 0;
+-}
+-
+ static int
+ mt7530_port_bridge_join(struct dsa_switch *ds, int port,
+ 			struct net_device *bridge)
+@@ -3058,7 +3046,6 @@ static const struct dsa_switch_ops mt7530_switch_ops = {
+ 	.port_stp_state_set	= mt7530_stp_state_set,
+ 	.port_pre_bridge_flags	= mt7530_port_pre_bridge_flags,
+ 	.port_bridge_flags	= mt7530_port_bridge_flags,
+-	.port_set_mrouter	= mt7530_port_set_mrouter,
+ 	.port_bridge_join	= mt7530_port_bridge_join,
+ 	.port_bridge_leave	= mt7530_port_bridge_leave,
+ 	.port_fdb_add		= mt7530_port_fdb_add,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 272b0535d9461..111a6d5985da6 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -5781,23 +5781,6 @@ out:
+ 	return err;
+ }
+ 
+-static int mv88e6xxx_port_set_mrouter(struct dsa_switch *ds, int port,
+-				      bool mrouter,
+-				      struct netlink_ext_ack *extack)
+-{
+-	struct mv88e6xxx_chip *chip = ds->priv;
+-	int err;
+-
+-	if (!chip->info->ops->port_set_mcast_flood)
+-		return -EOPNOTSUPP;
+-
+-	mv88e6xxx_reg_lock(chip);
+-	err = chip->info->ops->port_set_mcast_flood(chip, port, mrouter);
+-	mv88e6xxx_reg_unlock(chip);
+-
+-	return err;
+-}
+-
+ static bool mv88e6xxx_lag_can_offload(struct dsa_switch *ds,
+ 				      struct net_device *lag,
+ 				      struct netdev_lag_upper_info *info)
+@@ -6099,7 +6082,6 @@ static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
+ 	.port_bridge_leave	= mv88e6xxx_port_bridge_leave,
+ 	.port_pre_bridge_flags	= mv88e6xxx_port_pre_bridge_flags,
+ 	.port_bridge_flags	= mv88e6xxx_port_bridge_flags,
+-	.port_set_mrouter	= mv88e6xxx_port_set_mrouter,
+ 	.port_stp_state_set	= mv88e6xxx_port_stp_state_set,
+ 	.port_fast_age		= mv88e6xxx_port_fast_age,
+ 	.port_vlan_filtering	= mv88e6xxx_port_vlan_filtering,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 59253846e8858..f26d037356191 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -417,6 +417,9 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	pci_restore_state(pdev);
+ 
+ 	if (deep) {
++		/* Reinitialize Nic/Vecs objects */
++		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+ 			goto err_exit;
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 5bb56b4545415..f089d33dd48e0 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -322,7 +322,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+ 
+ 	// Check if next command will overflow the buffer.
+-	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++	    (tail & priv->adminq_mask)) {
+ 		int err;
+ 
+ 		// Flush existing commands to make room.
+@@ -332,7 +333,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 
+ 		// Retry.
+ 		tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+-		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++		    (tail & priv->adminq_mask)) {
+ 			// This should never happen. We just flushed the
+ 			// command queue so there should be enough space.
+ 			return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index eff0a30790dd7..472f56b360b8c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1160,12 +1160,12 @@ static int i40e_quiesce_vf_pci(struct i40e_vf *vf)
+ }
+ 
+ /**
+- * i40e_getnum_vf_vsi_vlan_filters
++ * __i40e_getnum_vf_vsi_vlan_filters
+  * @vsi: pointer to the vsi
+  *
+  * called to get the number of VLANs offloaded on this VF
+  **/
+-static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++static int __i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	u16 num_vlans = 0, bkt;
+@@ -1178,6 +1178,23 @@ static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ 	return num_vlans;
+ }
+ 
++/**
++ * i40e_getnum_vf_vsi_vlan_filters
++ * @vsi: pointer to the vsi
++ *
++ * wrapper for __i40e_getnum_vf_vsi_vlan_filters() with spinlock held
++ **/
++static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++{
++	int num_vlans;
++
++	spin_lock_bh(&vsi->mac_filter_hash_lock);
++	num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
++	spin_unlock_bh(&vsi->mac_filter_hash_lock);
++
++	return num_vlans;
++}
++
+ /**
+  * i40e_get_vlan_list_sync
+  * @vsi: pointer to the VSI
+@@ -1195,7 +1212,7 @@ static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+ 	int bkt;
+ 
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+-	*num_vlans = i40e_getnum_vf_vsi_vlan_filters(vsi);
++	*num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
+ 	*vlan_list = kcalloc(*num_vlans, sizeof(**vlan_list), GFP_ATOMIC);
+ 	if (!(*vlan_list))
+ 		goto err;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index fe2ded775f259..a8bd512d5b450 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5122,6 +5122,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	struct ice_hw *hw = &pf->hw;
+ 	struct sockaddr *addr = pi;
+ 	enum ice_status status;
++	u8 old_mac[ETH_ALEN];
+ 	u8 flags = 0;
+ 	int err = 0;
+ 	u8 *mac;
+@@ -5144,8 +5145,13 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	}
+ 
+ 	netif_addr_lock_bh(netdev);
++	ether_addr_copy(old_mac, netdev->dev_addr);
++	/* change the netdev's MAC address */
++	memcpy(netdev->dev_addr, mac, netdev->addr_len);
++	netif_addr_unlock_bh(netdev);
++
+ 	/* Clean up old MAC filter. Not an error if old filter doesn't exist */
+-	status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI);
++	status = ice_fltr_remove_mac(vsi, old_mac, ICE_FWD_TO_VSI);
+ 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
+ 		err = -EADDRNOTAVAIL;
+ 		goto err_update_filters;
+@@ -5168,13 +5174,12 @@ err_update_filters:
+ 	if (err) {
+ 		netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
+ 			   mac);
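++		/* Restore the old MAC address since the filter update failed */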
++		netif_addr_lock_bh(netdev);
++		ether_addr_copy(netdev->dev_addr, old_mac);
+ 		netif_addr_unlock_bh(netdev);
+ 		return err;
+ 	}
+ 
+-	/* change the netdev's MAC address */
+-	memcpy(netdev->dev_addr, mac, netdev->addr_len);
+-	netif_addr_unlock_bh(netdev);
+ 	netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
+ 		   netdev->dev_addr);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 9e3ddb9b8b516..234bc68e79f96 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -22,7 +22,7 @@ static void ice_set_tx_tstamp(struct ice_pf *pf, bool on)
+ 		return;
+ 
+ 	/* Set the timestamp enable flag for all the Tx rings */
+-	ice_for_each_rxq(vsi, i) {
++	ice_for_each_txq(vsi, i) {
+ 		if (!vsi->tx_rings[i])
+ 			continue;
+ 		vsi->tx_rings[i]->ptp_tx = on;
+@@ -688,6 +688,41 @@ err:
+ 	return -EFAULT;
+ }
+ 
++/**
++ * ice_ptp_disable_all_clkout - Disable all currently configured outputs
++ * @pf: pointer to the PF structure
++ *
++ * Disable all currently configured clock outputs. This is necessary before
++ * certain changes to the PTP hardware clock. Use ice_ptp_enable_all_clkout to
++ * re-enable the clocks afterwards.
++ */
++static void ice_ptp_disable_all_clkout(struct ice_pf *pf)
++{
++	uint i;
++
++	for (i = 0; i < pf->ptp.info.n_per_out; i++)
++		if (pf->ptp.perout_channels[i].ena)
++			ice_ptp_cfg_clkout(pf, i, NULL, false);
++}
++
++/**
++ * ice_ptp_enable_all_clkout - Enable all configured periodic clock outputs
++ * @pf: pointer to the PF structure
++ *
++ * Enable all currently configured clock outputs. Use this after
++ * ice_ptp_disable_all_clkout to reconfigure the output signals according to
++ * their configuration.
++ */
++static void ice_ptp_enable_all_clkout(struct ice_pf *pf)
++{
++	uint i;
++
++	for (i = 0; i < pf->ptp.info.n_per_out; i++)
++		if (pf->ptp.perout_channels[i].ena)
++			ice_ptp_cfg_clkout(pf, i, &pf->ptp.perout_channels[i],
++					   false);
++}
++
+ /**
+  * ice_ptp_gpio_enable_e810 - Enable/disable ancillary features of PHC
+  * @info: the driver's PTP info structure
+@@ -783,12 +818,17 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ 		goto exit;
+ 	}
+ 
++	/* Disable periodic outputs */
++	ice_ptp_disable_all_clkout(pf);
++
+ 	err = ice_ptp_write_init(pf, &ts64);
+ 	ice_ptp_unlock(hw);
+ 
+ 	if (!err)
+ 		ice_ptp_update_cached_phctime(pf);
+ 
++	/* Reenable periodic outputs */
++	ice_ptp_enable_all_clkout(pf);
+ exit:
+ 	if (err) {
+ 		dev_err(ice_pf_to_dev(pf), "PTP failed to set time %d\n", err);
+@@ -842,8 +882,14 @@ static int ice_ptp_adjtime(struct ptp_clock_info *info, s64 delta)
+ 		return -EBUSY;
+ 	}
+ 
++	/* Disable periodic outputs */
++	ice_ptp_disable_all_clkout(pf);
++
+ 	err = ice_ptp_write_adj(pf, delta);
+ 
++	/* Reenable periodic outputs */
++	ice_ptp_enable_all_clkout(pf);
++
+ 	ice_ptp_unlock(hw);
+ 
+ 	if (err) {
+@@ -1278,6 +1324,8 @@ ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
+ {
+ 	u8 idx;
+ 
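++	/* Hold tx->lock so the tracker state cannot change while it is flushed */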
++	spin_lock(&tx->lock);
++
+ 	for (idx = 0; idx < tx->len; idx++) {
+ 		u8 phy_idx = idx + tx->quad_offset;
+ 
+@@ -1290,6 +1338,8 @@ ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
+ 			tx->tstamps[idx].skb = NULL;
+ 		}
+ 	}
++
++	spin_unlock(&tx->lock);
+ }
+ 
+ /**
+@@ -1550,6 +1600,9 @@ void ice_ptp_release(struct ice_pf *pf)
+ 	if (!pf->ptp.clock)
+ 		return;
+ 
++	/* Disable periodic outputs */
++	ice_ptp_disable_all_clkout(pf);
++
+ 	ice_clear_ptp_clock_index(pf);
+ 	ptp_clock_unregister(pf->ptp.clock);
+ 	pf->ptp.clock = NULL;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+index 47f5ed006a932..e0b43aad203c1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+@@ -195,8 +195,6 @@ enum nix_scheduler {
+ #define NIX_CHAN_LBK_CHX(a, b)		(0 + 0x100 * (a) + (b))
+ #define NIX_CHAN_SDP_CH_START		(0x700ull)
+ 
+-#define SDP_CHANNELS			256
+-
+ /* The mask is to extract lower 10-bits of channel number
+  * which CPT will pass to X2P.
+  */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+index 8d48b64485c69..dbe9149a215e8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+@@ -82,10 +82,10 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+ 		dev_err(rvu->dev, "%s LMTLINE iova translation failed err:%llx\n", __func__, val);
+ 		return -EIO;
+ 	}
+-	/* PA[51:12] = RVU_AF_SMMU_TLN_FLIT1[60:21]
++	/* PA[51:12] = RVU_AF_SMMU_TLN_FLIT0[57:18]
+ 	 * PA[11:0] = IOVA[11:0]
+ 	 */
+-	pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT1) >> 21;
++	pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT0) >> 18;
+ 	pa &= GENMASK_ULL(39, 0);
+ 	*lmt_addr = (pa << 12) | (iova  & 0xFFF);
+ 
+@@ -212,9 +212,10 @@ void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc)
+ 
+ int rvu_set_channels_base(struct rvu *rvu)
+ {
++	u16 nr_lbk_chans, nr_sdp_chans, nr_cgx_chans, nr_cpt_chans;
++	u16 sdp_chan_base, cgx_chan_base, cpt_chan_base;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+-	u16 cpt_chan_base;
+-	u64 nix_const;
++	u64 nix_const, nix_const1;
+ 	int blkaddr;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0);
+@@ -222,6 +223,7 @@ int rvu_set_channels_base(struct rvu *rvu)
+ 		return blkaddr;
+ 
+ 	nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
++	nix_const1 = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+ 
+ 	hw->cgx = (nix_const >> 12) & 0xFULL;
+ 	hw->lmac_per_cgx = (nix_const >> 8) & 0xFULL;
+@@ -244,14 +246,24 @@ int rvu_set_channels_base(struct rvu *rvu)
+ 	 * channels such that all channel numbers are contiguous
+ 	 * leaving no holes. This way the new CPT channels can be
+ 	 * accommodated. The order of channel numbers assigned is
+-	 * LBK, SDP, CGX and CPT.
++	 * LBK, SDP, CGX and CPT. Also the base channel number
++	 * of a block must be a multiple of the number of
++	 * channels of the block.
+ 	 */
+-	hw->sdp_chan_base = hw->lbk_chan_base + hw->lbk_links *
+-				((nix_const >> 16) & 0xFFULL);
+-	hw->cgx_chan_base = hw->sdp_chan_base + hw->sdp_links * SDP_CHANNELS;
++	nr_lbk_chans = (nix_const >> 16) & 0xFFULL;
++	nr_sdp_chans = nix_const1 & 0xFFFULL;
++	nr_cgx_chans = nix_const & 0xFFULL;
++	nr_cpt_chans = (nix_const >> 32) & 0xFFFULL;
+ 
+-	cpt_chan_base = hw->cgx_chan_base + hw->cgx_links *
+-				(nix_const & 0xFFULL);
++	sdp_chan_base = hw->lbk_chan_base + hw->lbk_links * nr_lbk_chans;
++	/* Round up the base channel to a multiple of the channel count */
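++	/* ALIGN() rounds up, e.g. ALIGN(300, 256) = 512 */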
++	hw->sdp_chan_base = ALIGN(sdp_chan_base, nr_sdp_chans);
++
++	cgx_chan_base = hw->sdp_chan_base + hw->sdp_links * nr_sdp_chans;
++	hw->cgx_chan_base = ALIGN(cgx_chan_base, nr_cgx_chans);
++
++	cpt_chan_base = hw->cgx_chan_base + hw->cgx_links * nr_cgx_chans;
++	hw->cpt_chan_base = ALIGN(cpt_chan_base, nr_cpt_chans);
+ 
+ 	/* Out of 4096 channels start CPT from 2048 so
+ 	 * that MSB for CPT channels is always set
+@@ -355,6 +367,7 @@ err_put:
+ 
+ static void __rvu_nix_set_channels(struct rvu *rvu, int blkaddr)
+ {
++	u64 nix_const1 = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+ 	u64 nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+ 	u16 cgx_chans, lbk_chans, sdp_chans, cpt_chans;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+@@ -364,7 +377,7 @@ static void __rvu_nix_set_channels(struct rvu *rvu, int blkaddr)
+ 
+ 	cgx_chans = nix_const & 0xFFULL;
+ 	lbk_chans = (nix_const >> 16) & 0xFFULL;
+-	sdp_chans = SDP_CHANNELS;
++	sdp_chans = nix_const1 & 0xFFFULL;
+ 	cpt_chans = (nix_const >> 32) & 0xFFFULL;
+ 
+ 	start = hw->cgx_chan_base;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 4bfbbdf387709..c32195073e8a5 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -25,7 +25,7 @@ static int nix_update_mce_rule(struct rvu *rvu, u16 pcifunc,
+ 			       int type, bool add);
+ static int nix_setup_ipolicers(struct rvu *rvu,
+ 			       struct nix_hw *nix_hw, int blkaddr);
+-static void nix_ipolicer_freemem(struct nix_hw *nix_hw);
++static void nix_ipolicer_freemem(struct rvu *rvu, struct nix_hw *nix_hw);
+ static int nix_verify_bandprof(struct nix_cn10k_aq_enq_req *req,
+ 			       struct nix_hw *nix_hw, u16 pcifunc);
+ static int nix_free_all_bandprof(struct rvu *rvu, u16 pcifunc);
+@@ -3849,7 +3849,7 @@ static void rvu_nix_block_freemem(struct rvu *rvu, int blkaddr,
+ 			kfree(txsch->schq.bmap);
+ 		}
+ 
+-		nix_ipolicer_freemem(nix_hw);
++		nix_ipolicer_freemem(rvu, nix_hw);
+ 
+ 		vlan = &nix_hw->txvlan;
+ 		kfree(vlan->rsrc.bmap);
+@@ -4225,11 +4225,14 @@ static int nix_setup_ipolicers(struct rvu *rvu,
+ 	return 0;
+ }
+ 
+-static void nix_ipolicer_freemem(struct nix_hw *nix_hw)
++static void nix_ipolicer_freemem(struct rvu *rvu, struct nix_hw *nix_hw)
+ {
+ 	struct nix_ipolicer *ipolicer;
+ 	int layer;
+ 
++	if (!rvu->hw->cap.ipolicer)
++		return;
++
+ 	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+ 		ipolicer = &nix_hw->ipolicer[layer];
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 52b255426c22a..26a792407c40a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -23,7 +23,7 @@
+ #define RSVD_MCAM_ENTRIES_PER_NIXLF	1 /* Ucast for LFs */
+ 
+ #define NPC_PARSE_RESULT_DMAC_OFFSET	8
+-#define NPC_HW_TSTAMP_OFFSET		8
++#define NPC_HW_TSTAMP_OFFSET		8ULL
+ #define NPC_KEX_CHAN_MASK		0xFFFULL
+ #define NPC_KEX_PF_FUNC_MASK		0xFFFFULL
+ 
+@@ -938,7 +938,7 @@ void rvu_npc_enable_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
+ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 				     int blkaddr, u16 pcifunc, u64 rx_action)
+ {
+-	int actindex, index, bank;
++	int actindex, index, bank, entry;
+ 	bool enable;
+ 
+ 	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+@@ -949,7 +949,7 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 		if (mcam->entry2target_pffunc[index] == pcifunc) {
+ 			bank = npc_get_bank(mcam, index);
+ 			actindex = index;
+-			index &= (mcam->banksize - 1);
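++			/* Use a local 'entry' so the loop cursor 'index' is preserved */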
++			entry = index & (mcam->banksize - 1);
+ 
+ 			/* read vf flow entry enable status */
+ 			enable = is_mcam_entry_enabled(rvu, mcam, blkaddr,
+@@ -959,7 +959,7 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 					      false);
+ 			/* update 'action' */
+ 			rvu_write64(rvu, blkaddr,
+-				    NPC_AF_MCAMEX_BANKX_ACTION(index, bank),
++				    NPC_AF_MCAMEX_BANKX_ACTION(entry, bank),
+ 				    rx_action);
+ 			if (enable)
+ 				npc_enable_mcam_entry(rvu, mcam, blkaddr,
+@@ -2030,14 +2030,15 @@ int rvu_npc_init(struct rvu *rvu)
+ 
+ 	/* Enable below for Rx pkts.
+ 	 * - Outer IPv4 header checksum validation.
+-	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2M].
++	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2B].
++	 * - Detect outer L2 multicast address and set NPC_RESULT_S[L2M].
+ 	 * - Inner IPv4 header checksum validation.
+ 	 * - Set non zero checksum error code value
+ 	 */
+ 	rvu_write64(rvu, blkaddr, NPC_AF_PCK_CFG,
+ 		    rvu_read64(rvu, blkaddr, NPC_AF_PCK_CFG) |
+-		    BIT_ULL(32) | BIT_ULL(24) | BIT_ULL(6) |
+-		    BIT_ULL(2) | BIT_ULL(1));
++		    ((u64)NPC_EC_OIP4_CSUM << 32) | (NPC_EC_IIP4_CSUM << 24) |
++		    BIT_ULL(7) | BIT_ULL(6) | BIT_ULL(2) | BIT_ULL(1));
+ 
+ 	rvu_npc_setup_interfaces(rvu, blkaddr);
+ 
+@@ -2166,7 +2167,7 @@ static void npc_unmap_mcam_entry_and_cntr(struct rvu *rvu,
+ 					  int blkaddr, u16 entry, u16 cntr)
+ {
+ 	u16 index = entry & (mcam->banksize - 1);
+-	u16 bank = npc_get_bank(mcam, entry);
++	u32 bank = npc_get_bank(mcam, entry);
+ 
+ 	/* Remove mapping and reduce counter's refcnt */
+ 	mcam->entry2cntr_map[entry] = NPC_MCAM_INVALID_MAP;
+@@ -2788,8 +2789,8 @@ int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu *rvu,
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+ 	u16 pcifunc = req->hdr.pcifunc;
+ 	u16 old_entry, new_entry;
++	int blkaddr, rc = 0;
+ 	u16 index, cntr;
+-	int blkaddr, rc;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+ 	if (blkaddr < 0)
+@@ -2990,10 +2991,11 @@ int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu,
+ 		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+ 		if (index >= mcam->bmap_entries)
+ 			break;
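++		/* Advance the cursor unconditionally so a non-matching entry is not rescanned forever */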
++		entry = index + 1;
++
+ 		if (mcam->entry2cntr_map[index] != req->cntr)
+ 			continue;
+ 
+-		entry = index + 1;
+ 		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+ 					      index, req->cntr);
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+index 8b01ef6e2c997..4215841c9f86e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+@@ -53,7 +53,7 @@
+ #define RVU_AF_SMMU_TXN_REQ		    (0x6008)
+ #define RVU_AF_SMMU_ADDR_RSP_STS	    (0x6010)
+ #define RVU_AF_SMMU_ADDR_TLN		    (0x6018)
+-#define RVU_AF_SMMU_TLN_FLIT1		    (0x6030)
++#define RVU_AF_SMMU_TLN_FLIT0		    (0x6020)
+ 
+ /* Admin function's privileged PF/VF registers */
+ #define RVU_PRIV_CONST                      (0x8000000)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 70fcc1fd962fc..94dfd64f526fa 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -208,7 +208,8 @@ int otx2_set_mac_address(struct net_device *netdev, void *p)
+ 	if (!otx2_hw_set_mac_addr(pfvf, addr->sa_data)) {
+ 		memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ 		/* update dmac field in vlan offload rule */
+-		if (pfvf->flags & OTX2_FLAG_RX_VLAN_SUPPORT)
++		if (netif_running(netdev) &&
++		    pfvf->flags & OTX2_FLAG_RX_VLAN_SUPPORT)
+ 			otx2_install_rxvlan_offload_flow(pfvf);
+ 		/* update dmac address in ntuple and DMAC filter list */
+ 		if (pfvf->flags & OTX2_FLAG_DMACFLTR_SUPPORT)
+@@ -268,6 +269,7 @@ unlock:
+ int otx2_set_flowkey_cfg(struct otx2_nic *pfvf)
+ {
+ 	struct otx2_rss_info *rss = &pfvf->hw.rss_info;
++	struct nix_rss_flowkey_cfg_rsp *rsp;
+ 	struct nix_rss_flowkey_cfg *req;
+ 	int err;
+ 
+@@ -282,6 +284,18 @@ int otx2_set_flowkey_cfg(struct otx2_nic *pfvf)
+ 	req->group = DEFAULT_RSS_CONTEXT_GROUP;
+ 
+ 	err = otx2_sync_mbox_msg(&pfvf->mbox);
++	if (err)
++		goto fail;
++
++	rsp = (struct nix_rss_flowkey_cfg_rsp *)
++			otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++	if (IS_ERR(rsp)) {
++		err = PTR_ERR(rsp);
++		goto fail;
++	}
++
++	pfvf->hw.flowkey_alg_idx = rsp->alg_idx;
++fail:
+ 	mutex_unlock(&pfvf->mbox.lock);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 8fd58cd07f50b..8c602d27108a7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -196,6 +196,9 @@ struct otx2_hw {
+ 	u8			lso_udpv4_idx;
+ 	u8			lso_udpv6_idx;
+ 
++	/* RSS */
++	u8			flowkey_alg_idx;
++
+ 	/* MSI-X */
+ 	u8			cint_cnt; /* CQ interrupt count */
+ 	u16			npa_msixoff; /* Offset of NPA vectors */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 4d9de525802d0..fdd27c4fea86d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -858,6 +858,7 @@ static int otx2_add_flow_msg(struct otx2_nic *pfvf, struct otx2_flow *flow)
+ 		if (flow->flow_spec.flow_type & FLOW_RSS) {
+ 			req->op = NIX_RX_ACTIONOP_RSS;
+ 			req->index = flow->rss_ctx_id;
++			req->flow_key_alg = pfvf->hw.flowkey_alg_idx;
+ 		} else {
+ 			req->op = NIX_RX_ACTIONOP_UCAST;
+ 			req->index = ethtool_get_flow_spec_ring(ring_cookie);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 972b202b9884d..32d5c623fdfaf 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -485,8 +485,8 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
+ 				   match.key->vlan_priority << 13;
+ 
+ 			vlan_tci_mask = match.mask->vlan_id |
+-					match.key->vlan_dei << 12 |
+-					match.key->vlan_priority << 13;
++					match.mask->vlan_dei << 12 |
++					match.mask->vlan_priority << 13;
+ 
+ 			flow_spec->vlan_tci = htons(vlan_tci);
+ 			flow_mask->vlan_tci = htons(vlan_tci_mask);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index def2156e50eeb..20bb372662541 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -397,7 +397,7 @@ int mlx5_register_device(struct mlx5_core_dev *dev)
+ void mlx5_unregister_device(struct mlx5_core_dev *dev)
+ {
+ 	mutex_lock(&mlx5_intf_mutex);
+-	dev->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
++	dev->priv.flags = MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
+ 	mlx5_rescan_drivers_locked(dev);
+ 	mutex_unlock(&mlx5_intf_mutex);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index d791d351b489d..be6b75bd10f1e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -670,6 +670,7 @@ params_reg_err:
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
+ 	mlx5_devlink_traps_unregister(devlink);
++	devlink_params_unpublish(devlink);
+ 	devlink_params_unregister(devlink, mlx5_devlink_params,
+ 				  ARRAY_SIZE(mlx5_devlink_params));
+ 	devlink_unregister(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index 1d5ce07b83f45..43b092f5565af 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -248,18 +248,12 @@ struct ttc_params {
+ 
+ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv, struct ttc_params *ttc_params);
+ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params);
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params);
+ 
+ int mlx5e_create_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+ 			   struct mlx5e_ttc_table *ttc);
+ void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv,
+ 			     struct mlx5e_ttc_table *ttc);
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc);
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc);
+-
+ void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft);
+ int mlx5e_ttc_fwd_dest(struct mlx5e_priv *priv, enum mlx5e_traffic_types type,
+ 		       struct mlx5_flow_destination *new_dest);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+index 5efe3278b0f64..1fd8baf198296 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+@@ -733,8 +733,8 @@ static void mlx5e_reset_qdisc(struct net_device *dev, u16 qid)
+ 	spin_unlock_bh(qdisc_lock(qdisc));
+ }
+ 
+-int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+-		       u16 *new_qid, struct netlink_ext_ack *extack)
++int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 *classid,
++		       struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5e_qos_node *node;
+ 	struct netdev_queue *txq;
+@@ -742,11 +742,9 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	bool opened;
+ 	int err;
+ 
+-	qos_dbg(priv->mdev, "TC_HTB_LEAF_DEL classid %04x\n", classid);
+-
+-	*old_qid = *new_qid = 0;
++	qos_dbg(priv->mdev, "TC_HTB_LEAF_DEL classid %04x\n", *classid);
+ 
+-	node = mlx5e_sw_node_find(priv, classid);
++	node = mlx5e_sw_node_find(priv, *classid);
+ 	if (!node)
+ 		return -ENOENT;
+ 
+@@ -764,7 +762,7 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	err = mlx5_qos_destroy_node(priv->mdev, node->hw_id);
+ 	if (err) /* Not fatal. */
+ 		qos_warn(priv->mdev, "Failed to destroy leaf node %u (class %04x), err = %d\n",
+-			 node->hw_id, classid, err);
++			 node->hw_id, *classid, err);
+ 
+ 	mlx5e_sw_node_delete(priv, node);
+ 
+@@ -826,8 +824,7 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	if (opened)
+ 		mlx5e_reactivate_qos_sq(priv, moved_qid, txq);
+ 
+-	*old_qid = mlx5e_qid_from_qos(&priv->channels, moved_qid);
+-	*new_qid = mlx5e_qid_from_qos(&priv->channels, qid);
++	*classid = node->classid;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
+index 5af7991fcd194..757682b7c0e04 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
+@@ -34,8 +34,8 @@ int mlx5e_htb_leaf_alloc_queue(struct mlx5e_priv *priv, u16 classid,
+ 			       struct netlink_ext_ack *extack);
+ int mlx5e_htb_leaf_to_inner(struct mlx5e_priv *priv, u16 classid, u16 child_classid,
+ 			    u64 rate, u64 ceil, struct netlink_ext_ack *extack);
+-int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+-		       u16 *new_qid, struct netlink_ext_ack *extack);
++int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 *classid,
++		       struct netlink_ext_ack *extack);
+ int mlx5e_htb_leaf_del_last(struct mlx5e_priv *priv, u16 classid, bool force,
+ 			    struct netlink_ext_ack *extack);
+ int mlx5e_htb_node_modify(struct mlx5e_priv *priv, u16 classid, u64 rate, u64 ceil,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 2e846b7412806..1c44c6c345f5d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -147,7 +147,7 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
+ 	mlx5e_rep_queue_neigh_stats_work(priv);
+ 
+ 	list_for_each_entry(flow, flow_list, tmp_list) {
+-		if (!mlx5e_is_offloaded_flow(flow))
++		if (!mlx5e_is_offloaded_flow(flow) || !flow_flag_test(flow, SLOW))
+ 			continue;
+ 		attr = flow->attr;
+ 		esw_attr = attr->esw_attr;
+@@ -188,7 +188,7 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	list_for_each_entry(flow, flow_list, tmp_list) {
+-		if (!mlx5e_is_offloaded_flow(flow))
++		if (!mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, SLOW))
+ 			continue;
+ 		attr = flow->attr;
+ 		esw_attr = attr->esw_attr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 0b75fab41ae8f..6464ac3f294e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -1324,7 +1324,7 @@ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv,
+ 	ttc_params->inner_ttc = &priv->fs.inner_ttc;
+ }
+ 
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
++static void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
+ {
+ 	struct mlx5_flow_table_attr *ft_attr = &ttc_params->ft_attr;
+ 
+@@ -1343,8 +1343,8 @@ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params)
+ 	ft_attr->prio = MLX5E_NIC_PRIO;
+ }
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc)
++static int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
++					struct mlx5e_ttc_table *ttc)
+ {
+ 	struct mlx5e_flow_table *ft = &ttc->ft;
+ 	int err;
+@@ -1374,8 +1374,8 @@ err:
+ 	return err;
+ }
+ 
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc)
++static void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
++					  struct mlx5e_ttc_table *ttc)
+ {
+ 	if (!mlx5e_tunnel_inner_ft_supported(priv->mdev))
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 24f919ef9b8e4..2d53eaf3b9241 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2567,6 +2567,14 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
+ 		err = mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in);
+ 		if (err)
+ 			goto free_in;
++
++		/* Skip inner TIRs whose resources were not allocated */
++		if (!priv->inner_indir_tir[0].tirn)
++			continue;
++
++		err = mlx5_core_modify_tir(mdev, priv->inner_indir_tir[tt].tirn, in);
++		if (err)
++			goto free_in;
+ 	}
+ 
+ 	for (ix = 0; ix < priv->max_nch; ix++) {
+@@ -3445,8 +3453,7 @@ static int mlx5e_setup_tc_htb(struct mlx5e_priv *priv, struct tc_htb_qopt_offloa
+ 		return mlx5e_htb_leaf_to_inner(priv, htb->parent_classid, htb->classid,
+ 					       htb->rate, htb->ceil, htb->extack);
+ 	case TC_HTB_LEAF_DEL:
+-		return mlx5e_htb_leaf_del(priv, htb->classid, &htb->moved_qid, &htb->qid,
+-					  htb->extack);
++		return mlx5e_htb_leaf_del(priv, &htb->classid, htb->extack);
+ 	case TC_HTB_LEAF_DEL_LAST:
+ 	case TC_HTB_LEAF_DEL_LAST_FORCE:
+ 		return mlx5e_htb_leaf_del_last(priv, htb->classid,
+@@ -4812,7 +4819,14 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_TX;
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_RX;
+ 
++	/* Tunneled LRO is not supported in the driver, and the same RQs are
++	 * shared between inner and outer TIRs, so the driver can't disable LRO
++	 * for inner TIRs while having it enabled for outer TIRs. Due to this,
++	 * block LRO altogether if the firmware declares tunneled LRO support.
++	 */
+ 	if (!!MLX5_CAP_ETH(mdev, lro_cap) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_vxlan) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_gre) &&
+ 	    mlx5e_check_fragmented_striding_rq_cap(mdev))
+ 		netdev->vlan_features    |= NETIF_F_LRO;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index d273758255c3a..6eba574c5a364 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1338,6 +1338,7 @@ bool mlx5e_tc_is_vf_tunnel(struct net_device *out_dev, struct net_device *route_
+ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *route_dev, u16 *vport)
+ {
+ 	struct mlx5e_priv *out_priv, *route_priv;
++	struct mlx5_devcom *devcom = NULL;
+ 	struct mlx5_core_dev *route_mdev;
+ 	struct mlx5_eswitch *esw;
+ 	u16 vhca_id;
+@@ -1349,7 +1350,24 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
+ 	route_mdev = route_priv->mdev;
+ 
+ 	vhca_id = MLX5_CAP_GEN(route_mdev, vhca_id);
++	if (mlx5_lag_is_active(out_priv->mdev)) {
++		/* In the LAG case we may get devices from different eswitch
++		 * instances. If we fail to get the vport num, it most likely
++		 * means we are on the wrong eswitch.
++		 */
++		err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
++		if (err != -ENOENT)
++			return err;
++
++		devcom = out_priv->mdev->priv.devcom;
++		esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
++		if (!esw)
++			return -ENODEV;
++	}
++
+ 	err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
++	if (devcom)
++		mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
+index 3da7becc1069f..425c91814b34f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
+@@ -364,6 +364,7 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
+ 	dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+ 	dest.vport.num = e->vport;
+ 	dest.vport.vhca_id = MLX5_CAP_GEN(esw->dev, vhca_id);
++	dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID;
+ 	e->fwd_rule = mlx5_add_flow_rules(e->ft, spec, &flow_act, &dest, 1);
+ 	if (IS_ERR(e->fwd_rule)) {
+ 		mlx5_destroy_flow_group(e->fwd_grp);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 3bb71a1860042..fc945945ae33e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -3091,8 +3091,11 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
+ 
+ 	switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
+ 	case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
+-		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE)
++		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE) {
++			err = 0;
+ 			goto out;
++		}
++
+ 		fallthrough;
+ 	case MLX5_CAP_INLINE_MODE_L2:
+ 		NL_SET_ERR_MSG_MOD(extack, "Inline mode can't be set");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 7d7ed025db0da..620d638e1e8ff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -331,17 +331,6 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	}
+ 
+ 	mlx5e_set_ttc_basic_params(priv, &ttc_params);
+-	mlx5e_set_inner_ttc_ft_params(&ttc_params);
+-	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+-		ttc_params.indir_tirn[tt] = priv->inner_indir_tir[tt].tirn;
+-
+-	err = mlx5e_create_inner_ttc_table(priv, &ttc_params, &priv->fs.inner_ttc);
+-	if (err) {
+-		netdev_err(priv->netdev, "Failed to create inner ttc table, err=%d\n",
+-			   err);
+-		goto err_destroy_arfs_tables;
+-	}
+-
+ 	mlx5e_set_ttc_ft_params(&ttc_params);
+ 	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+ 		ttc_params.indir_tirn[tt] = priv->indir_tir[tt].tirn;
+@@ -350,13 +339,11 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	if (err) {
+ 		netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
+ 			   err);
+-		goto err_destroy_inner_ttc_table;
++		goto err_destroy_arfs_tables;
+ 	}
+ 
+ 	return 0;
+ 
+-err_destroy_inner_ttc_table:
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ err_destroy_arfs_tables:
+ 	mlx5e_arfs_destroy_tables(priv);
+ 
+@@ -366,7 +353,6 @@ err_destroy_arfs_tables:
+ static void mlx5i_destroy_flow_steering(struct mlx5e_priv *priv)
+ {
+ 	mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ 	mlx5e_arfs_destroy_tables(priv);
+ }
+ 
+@@ -392,7 +378,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
+ 	if (err)
+ 		goto err_destroy_indirect_rqts;
+ 
+-	err = mlx5e_create_indirect_tirs(priv, true);
++	err = mlx5e_create_indirect_tirs(priv, false);
+ 	if (err)
+ 		goto err_destroy_direct_rqts;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 5c043c5cc4035..40ef60f562b42 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -277,6 +277,7 @@ static int mlx5_deactivate_lag(struct mlx5_lag *ldev)
+ 	int err;
+ 
+ 	ldev->flags &= ~MLX5_LAG_MODE_FLAGS;
++	mlx5_lag_mp_reset(ldev);
+ 
+ 	MLX5_SET(destroy_lag_in, in, opcode, MLX5_CMD_OP_DESTROY_LAG);
+ 	err = mlx5_cmd_exec_in(dev0, destroy_lag, in);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+index c4bf8b679541e..516bfc2bd797b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+@@ -302,6 +302,14 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
+ 	return NOTIFY_DONE;
+ }
+ 
++void mlx5_lag_mp_reset(struct mlx5_lag *ldev)
++{
++	/* Clear mfi, as it might become stale when a route delete event
++	 * has been missed, see mlx5_lag_fib_route_event().
++	 */
++	ldev->lag_mp.mfi = NULL;
++}
++
+ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
+ {
+ 	struct lag_mp *mp = &ldev->lag_mp;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
+index 258ac7b2964e8..729c839397a89 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
+@@ -21,11 +21,13 @@ struct lag_mp {
+ 
+ #ifdef CONFIG_MLX5_ESWITCH
+ 
++void mlx5_lag_mp_reset(struct mlx5_lag *ldev);
+ int mlx5_lag_mp_init(struct mlx5_lag *ldev);
+ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev);
+ 
+ #else /* CONFIG_MLX5_ESWITCH */
+ 
++static inline void mlx5_lag_mp_reset(struct mlx5_lag *ldev) {}
+ static inline int mlx5_lag_mp_init(struct mlx5_lag *ldev) { return 0; }
+ static inline void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev) {}
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+index b41301a5b0df8..cd520e4c5522f 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+@@ -91,20 +91,20 @@ int ionic_devlink_register(struct ionic *ionic)
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	devlink_port_attrs_set(&ionic->dl_port, &attrs);
+ 	err = devlink_port_register(dl, &ionic->dl_port, 0);
+-	if (err)
++	if (err) {
+ 		dev_err(ionic->dev, "devlink_port_register failed: %d\n", err);
+-	else
+-		devlink_port_type_eth_set(&ionic->dl_port,
+-					  ionic->lif->netdev);
++		devlink_unregister(dl);
++		return err;
++	}
+ 
+-	return err;
++	devlink_port_type_eth_set(&ionic->dl_port, ionic->lif->netdev);
++	return 0;
+ }
+ 
+ void ionic_devlink_unregister(struct ionic *ionic)
+ {
+ 	struct devlink *dl = priv_to_devlink(ionic);
+ 
+-	if (ionic->dl_port.registered)
+-		devlink_port_unregister(&ionic->dl_port);
++	devlink_port_unregister(&ionic->dl_port);
+ 	devlink_unregister(dl);
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index b64c254e00ba1..8427fe1b8fd1c 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -434,7 +434,7 @@ qcaspi_receive(struct qcaspi *qca)
+ 				skb_put(qca->rx_skb, retcode);
+ 				qca->rx_skb->protocol = eth_type_trans(
+ 					qca->rx_skb, qca->rx_skb->dev);
+-				qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
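++				/* The hardware does not verify checksums; let the stack do it */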
++				skb_checksum_none_assert(qca->rx_skb);
+ 				netif_rx_ni(qca->rx_skb);
+ 				qca->rx_skb = netdev_alloc_skb_ip_align(net_dev,
+ 					net_dev->mtu + VLAN_ETH_HLEN);
+diff --git a/drivers/net/ethernet/qualcomm/qca_uart.c b/drivers/net/ethernet/qualcomm/qca_uart.c
+index bcdeca7b33664..ce3f7ce31adc1 100644
+--- a/drivers/net/ethernet/qualcomm/qca_uart.c
++++ b/drivers/net/ethernet/qualcomm/qca_uart.c
+@@ -107,7 +107,7 @@ qca_tty_receive(struct serdev_device *serdev, const unsigned char *data,
+ 			skb_put(qca->rx_skb, retcode);
+ 			qca->rx_skb->protocol = eth_type_trans(
+ 						qca->rx_skb, qca->rx_skb->dev);
+-			qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
++			skb_checksum_none_assert(qca->rx_skb);
+ 			netif_rx_ni(qca->rx_skb);
+ 			qca->rx_skb = netdev_alloc_skb_ip_align(netdev,
+ 								netdev->mtu +
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+index e632702675787..f83db62938dd1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -172,11 +172,12 @@ int dwmac4_dma_interrupt(void __iomem *ioaddr,
+ 		x->rx_normal_irq_n++;
+ 		ret |= handle_rx;
+ 	}
+-	if (likely(intr_status & (DMA_CHAN_STATUS_TI |
+-		DMA_CHAN_STATUS_TBU))) {
++	if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+ 		x->tx_normal_irq_n++;
+ 		ret |= handle_tx;
+ 	}
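++	/* TBU still needs Tx cleanup but is not counted as a normal Tx interrupt */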
++	if (unlikely(intr_status & DMA_CHAN_STATUS_TBU))
++		ret |= handle_tx;
+ 	if (unlikely(intr_status & DMA_CHAN_STATUS_ERI))
+ 		x->rx_early_irq++;
+ 
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 67a08cbba859d..e967cd1ade36b 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -518,6 +518,10 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common,
+ 	}
+ 
+ 	napi_enable(&common->napi_rx);
++	if (common->rx_irq_disabled) {
++		common->rx_irq_disabled = false;
++		enable_irq(common->rx_chns.irq);
++	}
+ 
+ 	dev_dbg(common->dev, "cpsw_nuss started\n");
+ 	return 0;
+@@ -871,8 +875,12 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
+ 
+ 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
+ 
+-	if (num_rx < budget && napi_complete_done(napi_rx, num_rx))
+-		enable_irq(common->rx_chns.irq);
++	if (num_rx < budget && napi_complete_done(napi_rx, num_rx)) {
++		if (common->rx_irq_disabled) {
++			common->rx_irq_disabled = false;
++			enable_irq(common->rx_chns.irq);
++		}
++	}
+ 
+ 	return num_rx;
+ }
+@@ -1090,6 +1098,7 @@ static irqreturn_t am65_cpsw_nuss_rx_irq(int irq, void *dev_id)
+ {
+ 	struct am65_cpsw_common *common = dev_id;
+ 
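++	/* Record the disable so NAPI and open re-enable the IRQ exactly once */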
++	common->rx_irq_disabled = true;
+ 	disable_irq_nosync(irq);
+ 	napi_schedule(&common->napi_rx);
+ 
+@@ -2388,21 +2397,6 @@ static const struct devlink_param am65_cpsw_devlink_params[] = {
+ 			     am65_cpsw_dl_switch_mode_set, NULL),
+ };
+ 
+-static void am65_cpsw_unregister_devlink_ports(struct am65_cpsw_common *common)
+-{
+-	struct devlink_port *dl_port;
+-	struct am65_cpsw_port *port;
+-	int i;
+-
+-	for (i = 1; i <= common->port_num; i++) {
+-		port = am65_common_get_port(common, i);
+-		dl_port = &port->devlink_port;
+-
+-		if (dl_port->registered)
+-			devlink_port_unregister(dl_port);
+-	}
+-}
+-
+ static int am65_cpsw_nuss_register_devlink(struct am65_cpsw_common *common)
+ {
+ 	struct devlink_port_attrs attrs = {};
+@@ -2464,7 +2458,12 @@ static int am65_cpsw_nuss_register_devlink(struct am65_cpsw_common *common)
+ 	return ret;
+ 
+ dl_port_unreg:
+-	am65_cpsw_unregister_devlink_ports(common);
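++	/* Unwind only the ports registered before the failure */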
++	for (i = i - 1; i >= 1; i--) {
++		port = am65_common_get_port(common, i);
++		dl_port = &port->devlink_port;
++
++		devlink_port_unregister(dl_port);
++	}
+ dl_unreg:
+ 	devlink_unregister(common->devlink);
+ dl_free:
+@@ -2475,6 +2474,17 @@ dl_free:
+ 
+ static void am65_cpsw_unregister_devlink(struct am65_cpsw_common *common)
+ {
++	struct devlink_port *dl_port;
++	struct am65_cpsw_port *port;
++	int i;
++
++	for (i = 1; i <= common->port_num; i++) {
++		port = am65_common_get_port(common, i);
++		dl_port = &port->devlink_port;
++
++		devlink_port_unregister(dl_port);
++	}
++
+ 	if (!AM65_CPSW_IS_CPSW2G(common) &&
+ 	    IS_ENABLED(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV)) {
+ 		devlink_params_unpublish(common->devlink);
+@@ -2482,7 +2492,6 @@ static void am65_cpsw_unregister_devlink(struct am65_cpsw_common *common)
+ 					  ARRAY_SIZE(am65_cpsw_devlink_params));
+ 	}
+ 
+-	am65_cpsw_unregister_devlink_ports(common);
+ 	devlink_unregister(common->devlink);
+ 	devlink_free(common->devlink);
+ }
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+index 5d93e346f05eb..048ed10143c17 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+@@ -126,6 +126,8 @@ struct am65_cpsw_common {
+ 	struct am65_cpsw_rx_chn	rx_chns;
+ 	struct napi_struct	napi_rx;
+ 
++	bool			rx_irq_disabled;
++
+ 	u32			nuss_ver;
+ 	u32			cpsw_ver;
+ 	unsigned long		bus_freq;
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 53a433442803a..f4d758f8a1ee1 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -987,11 +987,19 @@ static int mv3310_get_number_of_ports(struct phy_device *phydev)
+ 
+ static int mv3310_match_phy_device(struct phy_device *phydev)
+ {
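++	/* The 3310 and 3340 share this PMAPMD ID; the port count tells them apart */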
++	if ((phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] &
++	     MARVELL_PHY_ID_MASK) != MARVELL_PHY_ID_88X3310)
++		return 0;
++
+ 	return mv3310_get_number_of_ports(phydev) == 1;
+ }
+ 
+ static int mv3340_match_phy_device(struct phy_device *phydev)
+ {
++	if ((phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] &
++	     MARVELL_PHY_ID_MASK) != MARVELL_PHY_ID_88X3310)
++		return 0;
++
+ 	return mv3310_get_number_of_ports(phydev) == 4;
+ }
+ 
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index dc87e8caf954a..53c3c680c0832 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -1220,6 +1220,7 @@ static const struct driver_info ax88772b_info = {
+ 	.unbind = ax88772_unbind,
+ 	.status = asix_status,
+ 	.reset = ax88772_reset,
++	.stop = ax88772_stop,
+ 	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
+ 	         FLAG_MULTI_PACKET,
+ 	.rx_fixup = asix_rx_fixup_common,
+diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
+index b137e7f343979..bd1ef63349978 100644
+--- a/drivers/net/wireless/ath/ath6kl/wmi.c
++++ b/drivers/net/wireless/ath/ath6kl/wmi.c
+@@ -2504,8 +2504,10 @@ static int ath6kl_wmi_sync_point(struct wmi *wmi, u8 if_idx)
+ 		goto free_data_skb;
+ 
+ 	for (index = 0; index < num_pri_streams; index++) {
+-		if (WARN_ON(!data_sync_bufs[index].skb))
++		if (WARN_ON(!data_sync_bufs[index].skb)) {
++			ret = -ENOMEM;
+ 			goto free_data_skb;
++		}
+ 
+ 		ep_id = ath6kl_ac2_endpoint_id(wmi->parent_dev,
+ 					       data_sync_bufs[index].
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index c49dd0c36ae43..bbd72c2db0886 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -2075,7 +2075,7 @@ cleanup:
+ 
+ 	err = brcmf_pcie_probe(pdev, NULL);
+ 	if (err)
+-		brcmf_err(bus, "probe after resume failed, err=%d\n", err);
++		__brcmf_err(NULL, __func__, "probe after resume failed, err=%d\n", err);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 34933f133a0ae..66f8d949c1e69 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -264,7 +264,7 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 		goto out_free;
+ 	}
+ 
+-	enabled = !!wifi_pkg->package.elements[0].integer.value;
++	enabled = !!wifi_pkg->package.elements[1].integer.value;
+ 
+ 	if (!enabled) {
+ 		*block_list_size = -1;
+@@ -273,15 +273,15 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 		goto out_free;
+ 	}
+ 
+-	if (wifi_pkg->package.elements[1].type != ACPI_TYPE_INTEGER ||
+-	    wifi_pkg->package.elements[1].integer.value >
++	if (wifi_pkg->package.elements[2].type != ACPI_TYPE_INTEGER ||
++	    wifi_pkg->package.elements[2].integer.value >
+ 	    APCI_WTAS_BLACK_LIST_MAX) {
+ 		IWL_DEBUG_RADIO(fwrt, "TAS invalid array size %llu\n",
+ 				wifi_pkg->package.elements[2].integer.value);
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+-	*block_list_size = wifi_pkg->package.elements[1].integer.value;
++	*block_list_size = wifi_pkg->package.elements[2].integer.value;
+ 
+ 	IWL_DEBUG_RADIO(fwrt, "TAS array size %d\n", *block_list_size);
+ 	if (*block_list_size > APCI_WTAS_BLACK_LIST_MAX) {
+@@ -294,15 +294,15 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 	for (i = 0; i < *block_list_size; i++) {
+ 		u32 country;
+ 
+-		if (wifi_pkg->package.elements[2 + i].type !=
++		if (wifi_pkg->package.elements[3 + i].type !=
+ 		    ACPI_TYPE_INTEGER) {
+ 			IWL_DEBUG_RADIO(fwrt,
+-					"TAS invalid array elem %d\n", 2 + i);
++					"TAS invalid array elem %d\n", 3 + i);
+ 			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+-		country = wifi_pkg->package.elements[2 + i].integer.value;
++		country = wifi_pkg->package.elements[3 + i].integer.value;
+ 		block_list_array[i] = cpu_to_le32(country);
+ 		IWL_DEBUG_RADIO(fwrt, "TAS block list country %d\n", country);
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 0b8a0cd3b652d..6f49950a5f6d1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -558,6 +558,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0xA0F0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x6074, iwl_ax201_cfg_quz_hr, NULL),
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index 99b21a2c83861..f4a26f16f00f4 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -1038,8 +1038,10 @@ static int rsi_load_9116_firmware(struct rsi_hw *adapter)
+ 	}
+ 
+ 	ta_firmware = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
+-	if (!ta_firmware)
++	if (!ta_firmware) {
++		status = -ENOMEM;
+ 		goto fail_release_fw;
++	}
+ 	fw_p = ta_firmware;
+ 	instructions_sz = fw_entry->size;
+ 	rsi_dbg(INFO_ZONE, "FW Length = %d bytes\n", instructions_sz);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 3fbe2a3c14550..416976f098882 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -816,6 +816,7 @@ static int rsi_probe(struct usb_interface *pfunction,
+ 	} else {
+ 		rsi_dbg(ERR_ZONE, "%s: Unsupported RSI device id 0x%x\n",
+ 			__func__, id->idProduct);
++		status = -ENODEV;
+ 		goto err1;
+ 	}
+ 
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 7f6b3a9915014..3bd9cbc80246f 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -735,13 +735,13 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->ctrl.queue_count = nr_io_queues + 1;
+-	if (ctrl->ctrl.queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
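++	/* Set queue_count only after nr_io_queues is known to be non-zero */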
++	ctrl->ctrl.queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->ctrl.device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 8cb15ee5b249e..18bd68b82d78f 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1769,13 +1769,13 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->queue_count = nr_io_queues + 1;
+-	if (ctrl->queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
++	ctrl->queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
+index 7d0f3523fdab2..8ef564c3b32c8 100644
+--- a/drivers/nvme/target/fabrics-cmd.c
++++ b/drivers/nvme/target/fabrics-cmd.c
+@@ -120,6 +120,7 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+ 	if (!sqsize) {
+ 		pr_warn("queue size zero!\n");
+ 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
++		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(sqsize);
+ 		ret = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ 		goto err;
+ 	}
+@@ -260,11 +261,11 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 	}
+ 
+ 	status = nvmet_install_queue(ctrl, req);
+-	if (status) {
+-		/* pass back cntlid that had the issue of installing queue */
+-		req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
++	if (status)
+ 		goto out_ctrl_put;
+-	}
++
++	/* pass back cntlid for successful completion */
++	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
+ 
+ 	pr_debug("adding queue %d to ctrl %d.\n", qid, ctrl->cntlid);
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index aacf575c15cff..3f353572588df 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2495,7 +2495,14 @@ static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable
+ 	if (enable) {
+ 		int error;
+ 
+-		if (pci_pme_capable(dev, state))
++		/*
++		 * Enable PME signaling if the device can signal PME from
++		 * D3cold regardless of whether or not it can signal PME from
++		 * the current target state, because that will allow it to
++		 * signal PME when the hierarchy above it goes into D3cold and
++		 * the device itself ends up in D3cold as a result of that.
++		 */
++		if (pci_pme_capable(dev, state) || pci_pme_capable(dev, PCI_D3cold))
+ 			pci_pme_active(dev, true);
+ 		else
+ 			ret = 1;
+@@ -2599,16 +2606,20 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup)
+ 	if (dev->current_state == PCI_D3cold)
+ 		target_state = PCI_D3cold;
+ 
+-	if (wakeup) {
++	if (wakeup && dev->pme_support) {
++		pci_power_t state = target_state;
++
+ 		/*
+ 		 * Find the deepest state from which the device can generate
+ 		 * PME#.
+ 		 */
+-		if (dev->pme_support) {
+-			while (target_state
+-			      && !(dev->pme_support & (1 << target_state)))
+-				target_state--;
+-		}
++		while (state && !(dev->pme_support & (1 << state)))
++			state--;
++
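++		/* Bit 0 of pme_support corresponds to PME# from D0 */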
++		if (state)
++			return state;
++		else if (dev->pme_support & 1)
++			return PCI_D0;
+ 	}
+ 
+ 	return target_state;
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 2ba2d8d6b8e63..d1bcc52e67c35 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -147,7 +147,7 @@ static int fuel_gauge_reg_readb(struct axp288_fg_info *info, int reg)
+ 	}
+ 
+ 	if (ret < 0) {
+-		dev_err(&info->pdev->dev, "axp288 reg read err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error reading reg 0x%02x err: %d\n", reg, ret);
+ 		return ret;
+ 	}
+ 
+@@ -161,7 +161,7 @@ static int fuel_gauge_reg_writeb(struct axp288_fg_info *info, int reg, u8 val)
+ 	ret = regmap_write(info->regmap, reg, (unsigned int)val);
+ 
+ 	if (ret < 0)
+-		dev_err(&info->pdev->dev, "axp288 reg write err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error writing reg 0x%02x err: %d\n", reg, ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
+index d110597746b0a..091868e9e9e82 100644
+--- a/drivers/power/supply/cw2015_battery.c
++++ b/drivers/power/supply/cw2015_battery.c
+@@ -679,7 +679,9 @@ static int cw_bat_probe(struct i2c_client *client)
+ 						    &cw2015_bat_desc,
+ 						    &psy_cfg);
+ 	if (IS_ERR(cw_bat->rk_bat)) {
+-		dev_err(cw_bat->dev, "Failed to register power supply\n");
++		/* A deferred probe will be retried; dev_err_probe() handles that case */
++		dev_err_probe(&client->dev, PTR_ERR(cw_bat->rk_bat),
++			"Failed to register power supply\n");
+ 		return PTR_ERR(cw_bat->rk_bat);
+ 	}
+ 
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index ce2041b30a066..215e77d3b6d93 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -748,7 +748,7 @@ static inline void max17042_override_por_values(struct max17042_chip *chip)
+ 	struct max17042_config_data *config = chip->pdata->config_data;
+ 
+ 	max17042_override_por(map, MAX17042_TGAIN, config->tgain);
+-	max17042_override_por(map, MAx17042_TOFF, config->toff);
++	max17042_override_por(map, MAX17042_TOFF, config->toff);
+ 	max17042_override_por(map, MAX17042_CGAIN, config->cgain);
+ 	max17042_override_por(map, MAX17042_COFF, config->coff);
+ 
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index df240420f2de0..9d8c2fadd4d03 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -55,6 +55,7 @@
+ #define CFG_PIN_EN_CTRL_ACTIVE_LOW		0x60
+ #define CFG_PIN_EN_APSD_IRQ			BIT(1)
+ #define CFG_PIN_EN_CHARGER_ERROR		BIT(2)
++#define CFG_PIN_EN_CTRL				BIT(4)
+ #define CFG_THERM				0x07
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_MASK	0x03
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_SHIFT	0
+@@ -724,6 +725,15 @@ static int smb347_hw_init(struct smb347_charger *smb)
+ 	if (ret < 0)
+ 		goto fail;
+ 
++	/* Activate pin control, making it writable. */
++	switch (smb->enable_control) {
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_LOW:
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_HIGH:
++		ret = regmap_set_bits(smb->regmap, CFG_PIN, CFG_PIN_EN_CTRL);
++		if (ret < 0)
++			goto fail;
++	}
++
+ 	/*
+ 	 * Make the charging functionality controllable by a write to the
+ 	 * command register unless pin control is specified in the platform
+diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
+index 1d5b0a1b86f78..06cbe60c990f9 100644
+--- a/drivers/regulator/tps65910-regulator.c
++++ b/drivers/regulator/tps65910-regulator.c
+@@ -1211,12 +1211,10 @@ static int tps65910_probe(struct platform_device *pdev)
+ 
+ 		rdev = devm_regulator_register(&pdev->dev, &pmic->desc[i],
+ 					       &config);
+-		if (IS_ERR(rdev)) {
+-			dev_err(tps65910->dev,
+-				"failed to register %s regulator\n",
+-				pdev->name);
+-			return PTR_ERR(rdev);
+-		}
++		if (IS_ERR(rdev))
++			return dev_err_probe(tps65910->dev, PTR_ERR(rdev),
++					     "failed to register %s regulator\n",
++					     pdev->name);
+ 
+ 		/* Save regulator for cleanup */
+ 		pmic->rdev[i] = rdev;
+diff --git a/drivers/regulator/vctrl-regulator.c b/drivers/regulator/vctrl-regulator.c
+index cbadb1c996790..d2a37978fc3a8 100644
+--- a/drivers/regulator/vctrl-regulator.c
++++ b/drivers/regulator/vctrl-regulator.c
+@@ -37,7 +37,6 @@ struct vctrl_voltage_table {
+ struct vctrl_data {
+ 	struct regulator_dev *rdev;
+ 	struct regulator_desc desc;
+-	struct regulator *ctrl_reg;
+ 	bool enabled;
+ 	unsigned int min_slew_down_rate;
+ 	unsigned int ovp_threshold;
+@@ -82,7 +81,12 @@ static int vctrl_calc_output_voltage(struct vctrl_data *vctrl, int ctrl_uV)
+ static int vctrl_get_voltage(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++	int ctrl_uV;
++
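++	/* The control supply may not be bound yet; request a deferred probe */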
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
+ 
+ 	return vctrl_calc_output_voltage(vctrl, ctrl_uV);
+ }
+@@ -92,14 +96,19 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 			     unsigned int *selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+-	int orig_ctrl_uV = regulator_get_voltage_rdev(ctrl_reg->rdev);
+-	int uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++	int orig_ctrl_uV;
++	int uV;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	orig_ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
++	uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++
+ 	if (req_min_uV >= uV || !vctrl->ovp_threshold)
+ 		/* voltage rising or no OVP */
+-		return regulator_set_voltage_rdev(ctrl_reg->rdev,
++		return regulator_set_voltage_rdev(rdev->supply->rdev,
+ 			vctrl_calc_ctrl_voltage(vctrl, req_min_uV),
+ 			vctrl_calc_ctrl_voltage(vctrl, req_max_uV),
+ 			PM_SUSPEND_ON);
+@@ -117,7 +126,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 		next_uV = max_t(int, req_min_uV, uV - max_drop_uV);
+ 		next_ctrl_uV = vctrl_calc_ctrl_voltage(vctrl, next_uV);
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    next_ctrl_uV,
+ 					    next_ctrl_uV,
+ 					    PM_SUSPEND_ON);
+@@ -134,7 +143,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 
+ err:
+ 	/* Try to go back to original voltage */
+-	regulator_set_voltage_rdev(ctrl_reg->rdev, orig_ctrl_uV, orig_ctrl_uV,
++	regulator_set_voltage_rdev(rdev->supply->rdev, orig_ctrl_uV, orig_ctrl_uV,
+ 				   PM_SUSPEND_ON);
+ 
+ 	return ret;
+@@ -151,16 +160,18 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 				 unsigned int selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	unsigned int orig_sel = vctrl->sel;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
+ 	if (selector >= rdev->desc->n_voltages)
+ 		return -EINVAL;
+ 
+ 	if (selector >= vctrl->sel || !vctrl->ovp_threshold) {
+ 		/* voltage rising or no OVP */
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -179,7 +190,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 		else
+ 			next_sel = vctrl->vtable[vctrl->sel].ovp_min_sel;
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -202,7 +213,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ err:
+ 	if (vctrl->sel != orig_sel) {
+ 		/* Try to go back to original voltage */
+-		if (!regulator_set_voltage_rdev(ctrl_reg->rdev,
++		if (!regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   PM_SUSPEND_ON))
+@@ -234,10 +245,6 @@ static int vctrl_parse_dt(struct platform_device *pdev,
+ 	u32 pval;
+ 	u32 vrange_ctrl[2];
+ 
+-	vctrl->ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
+-	if (IS_ERR(vctrl->ctrl_reg))
+-		return PTR_ERR(vctrl->ctrl_reg);
+-
+ 	ret = of_property_read_u32(np, "ovp-threshold-percent", &pval);
+ 	if (!ret) {
+ 		vctrl->ovp_threshold = pval;
+@@ -315,11 +322,11 @@ static int vctrl_cmp_ctrl_uV(const void *a, const void *b)
+ 	return at->ctrl - bt->ctrl;
+ }
+ 
+-static int vctrl_init_vtable(struct platform_device *pdev)
++static int vctrl_init_vtable(struct platform_device *pdev,
++			     struct regulator *ctrl_reg)
+ {
+ 	struct vctrl_data *vctrl = platform_get_drvdata(pdev);
+ 	struct regulator_desc *rdesc = &vctrl->desc;
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	struct vctrl_voltage_range *vrange_ctrl = &vctrl->vrange.ctrl;
+ 	int n_voltages;
+ 	int ctrl_uV;
+@@ -395,23 +402,19 @@ static int vctrl_init_vtable(struct platform_device *pdev)
+ static int vctrl_enable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_enable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = true;
++	vctrl->enabled = true;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_disable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_disable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = false;
++	vctrl->enabled = false;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_is_enabled(struct regulator_dev *rdev)
+@@ -447,6 +450,7 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	struct regulator_desc *rdesc;
+ 	struct regulator_config cfg = { };
+ 	struct vctrl_voltage_range *vrange_ctrl;
++	struct regulator *ctrl_reg;
+ 	int ctrl_uV;
+ 	int ret;
+ 
+@@ -461,15 +465,20 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
++	if (IS_ERR(ctrl_reg))
++		return PTR_ERR(ctrl_reg);
++
+ 	vrange_ctrl = &vctrl->vrange.ctrl;
+ 
+ 	rdesc = &vctrl->desc;
+ 	rdesc->name = "vctrl";
+ 	rdesc->type = REGULATOR_VOLTAGE;
+ 	rdesc->owner = THIS_MODULE;
++	rdesc->supply_name = "ctrl";
+ 
+-	if ((regulator_get_linear_step(vctrl->ctrl_reg) == 1) ||
+-	    (regulator_count_voltages(vctrl->ctrl_reg) == -EINVAL)) {
++	if ((regulator_get_linear_step(ctrl_reg) == 1) ||
++	    (regulator_count_voltages(ctrl_reg) == -EINVAL)) {
+ 		rdesc->continuous_voltage_range = true;
+ 		rdesc->ops = &vctrl_ops_cont;
+ 	} else {
+@@ -486,11 +495,12 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	cfg.init_data = init_data;
+ 
+ 	if (!rdesc->continuous_voltage_range) {
+-		ret = vctrl_init_vtable(pdev);
++		ret = vctrl_init_vtable(pdev, ctrl_reg);
+ 		if (ret)
+ 			return ret;
+ 
+-		ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++		/* Use locked consumer API when not in regulator framework */
++		ctrl_uV = regulator_get_voltage(ctrl_reg);
+ 		if (ctrl_uV < 0) {
+ 			dev_err(&pdev->dev, "failed to get control voltage\n");
+ 			return ctrl_uV;
+@@ -513,6 +523,9 @@ static int vctrl_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* Drop ctrl-supply here in favor of regulator core managed supply */
++	devm_regulator_put(ctrl_reg);
++
+ 	vctrl->rdev = devm_regulator_register(&pdev->dev, rdesc, &cfg);
+ 	if (IS_ERR(vctrl->rdev)) {
+ 		ret = PTR_ERR(vctrl->rdev);
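
The vctrl rework above stops caching its own struct regulator for the
control supply and instead hands it to the regulator core through
rdesc->supply_name = "ctrl"; until the core has resolved that supply,
the ops return -EPROBE_DEFER rather than chase a NULL pointer. A minimal
user-space sketch of that guard pattern, assuming nothing beyond
standard C (the sentinel value and all names are illustrative):

#include <assert.h>
#include <stddef.h>

#define EPROBE_DEFER_ERR (-517)	/* illustrative stand-in for -EPROBE_DEFER */

struct supply {
	int microvolts;
};

/* Bail out with the "try again later" sentinel while the dependency is
 * unresolved, instead of dereferencing a NULL pointer. */
static int get_voltage(const struct supply *supply)
{
	if (!supply)
		return EPROBE_DEFER_ERR;
	return supply->microvolts;
}

int main(void)
{
	struct supply s = { 1800000 };

	assert(get_voltage(NULL) == EPROBE_DEFER_ERR);
	assert(get_voltage(&s) == 1800000);
	return 0;
}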
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index a974943c27dac..9fcdb8d81eee6 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -430,9 +430,26 @@ static ssize_t pimpampom_show(struct device *dev,
+ }
+ static DEVICE_ATTR_RO(pimpampom);
+ 
++static ssize_t dev_busid_show(struct device *dev,
++			      struct device_attribute *attr,
++			      char *buf)
++{
++	struct subchannel *sch = to_subchannel(dev);
++	struct pmcw *pmcw = &sch->schib.pmcw;
++
++	if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
++	     pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++		return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
++				  pmcw->dev);
++	else
++		return sysfs_emit(buf, "none\n");
++}
++static DEVICE_ATTR_RO(dev_busid);
++
+ static struct attribute *io_subchannel_type_attrs[] = {
+ 	&dev_attr_chpids.attr,
+ 	&dev_attr_pimpampom.attr,
++	&dev_attr_dev_busid.attr,
+ 	NULL,
+ };
+ ATTRIBUTE_GROUPS(io_subchannel_type);
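
The new dev_busid attribute reports the subchannel's bus ID as
0.<ssid>.<devno> for valid I/O and message subchannels, and "none"
otherwise. A small stand-alone sketch of the same "0.%x.%04x"
formatting, with snprintf() standing in for sysfs_emit() (function and
variable names are illustrative, not from the patch):

#include <stdio.h>

/* Format a channel bus ID as "0.<ssid>.<4-digit devno>", mirroring the
 * format string used by dev_busid_show() above. */
static int format_busid(char *buf, size_t len, unsigned ssid, unsigned devno)
{
	return snprintf(buf, len, "0.%x.%04x\n", ssid, devno);
}

int main(void)
{
	char buf[16];

	format_busid(buf, sizeof(buf), 0, 0x1a2b);
	fputs(buf, stdout);	/* prints "0.0.1a2b" */
	return 0;
}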
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 8d3a1d84a7575..9c4f3c3889345 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -127,22 +127,13 @@ static struct bus_type ap_bus_type;
+ /* Adapter interrupt definitions */
+ static void ap_interrupt_handler(struct airq_struct *airq, bool floating);
+ 
+-static int ap_airq_flag;
++static bool ap_irq_flag;
+ 
+ static struct airq_struct ap_airq = {
+ 	.handler = ap_interrupt_handler,
+ 	.isc = AP_ISC,
+ };
+ 
+-/**
+- * ap_using_interrupts() - Returns non-zero if interrupt support is
+- * available.
+- */
+-static inline int ap_using_interrupts(void)
+-{
+-	return ap_airq_flag;
+-}
+-
+ /**
+  * ap_airq_ptr() - Get the address of the adapter interrupt indicator
+  *
+@@ -152,7 +143,7 @@ static inline int ap_using_interrupts(void)
+  */
+ void *ap_airq_ptr(void)
+ {
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		return ap_airq.lsi_ptr;
+ 	return NULL;
+ }
+@@ -396,7 +387,7 @@ void ap_wait(enum ap_sm_wait wait)
+ 	switch (wait) {
+ 	case AP_SM_WAIT_AGAIN:
+ 	case AP_SM_WAIT_INTERRUPT:
+-		if (ap_using_interrupts())
++		if (ap_irq_flag)
+ 			break;
+ 		if (ap_poll_kthread) {
+ 			wake_up(&ap_poll_wait);
+@@ -471,7 +462,7 @@ static void ap_tasklet_fn(unsigned long dummy)
+ 	 * be received. Doing this at the beginning of the tasklet is therefore
+ 	 * important so that no requests on any AP get lost.
+ 	 */
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		xchg(ap_airq.lsi_ptr, 0);
+ 
+ 	spin_lock_bh(&ap_queues_lock);
+@@ -541,7 +532,7 @@ static int ap_poll_thread_start(void)
+ {
+ 	int rc;
+ 
+-	if (ap_using_interrupts() || ap_poll_kthread)
++	if (ap_irq_flag || ap_poll_kthread)
+ 		return 0;
+ 	mutex_lock(&ap_poll_thread_mutex);
+ 	ap_poll_kthread = kthread_run(ap_poll_thread, NULL, "appoll");
+@@ -1187,7 +1178,7 @@ static BUS_ATTR_RO(ap_adapter_mask);
+ static ssize_t ap_interrupts_show(struct bus_type *bus, char *buf)
+ {
+ 	return scnprintf(buf, PAGE_SIZE, "%d\n",
+-			 ap_using_interrupts() ? 1 : 0);
++			 ap_irq_flag ? 1 : 0);
+ }
+ 
+ static BUS_ATTR_RO(ap_interrupts);
+@@ -1912,7 +1903,7 @@ static int __init ap_module_init(void)
+ 	/* enable interrupts if available */
+ 	if (ap_interrupts_available()) {
+ 		rc = register_adapter_interrupt(&ap_airq);
+-		ap_airq_flag = (rc == 0);
++		ap_irq_flag = (rc == 0);
+ 	}
+ 
+ 	/* Create /sys/bus/ap. */
+@@ -1956,7 +1947,7 @@ out_work:
+ out_bus:
+ 	bus_unregister(&ap_bus_type);
+ out:
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		unregister_adapter_interrupt(&ap_airq);
+ 	kfree(ap_qci_info);
+ 	return rc;
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 8f18abdbbc2ba..6dd5e8f0380ce 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -80,12 +80,6 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr)
+ #define AP_FUNC_EP11  5
+ #define AP_FUNC_APXA  6
+ 
+-/*
+- * AP interrupt states
+- */
+-#define AP_INTR_DISABLED	0	/* AP interrupt disabled */
+-#define AP_INTR_ENABLED		1	/* AP interrupt enabled */
+-
+ /*
+  * AP queue state machine states
+  */
+@@ -112,7 +106,7 @@ enum ap_sm_event {
+  * AP queue state wait behaviour
+  */
+ enum ap_sm_wait {
+-	AP_SM_WAIT_AGAIN,	/* retry immediately */
++	AP_SM_WAIT_AGAIN = 0,	/* retry immediately */
+ 	AP_SM_WAIT_TIMEOUT,	/* wait for timeout */
+ 	AP_SM_WAIT_INTERRUPT,	/* wait for thin interrupt (if available) */
+ 	AP_SM_WAIT_NONE,	/* no wait */
+@@ -186,7 +180,7 @@ struct ap_queue {
+ 	enum ap_dev_state dev_state;	/* queue device state */
+ 	bool config;			/* configured state */
+ 	ap_qid_t qid;			/* AP queue id. */
+-	int interrupt;			/* indicate if interrupts are enabled */
++	bool interrupt;			/* indicate if interrupts are enabled */
+ 	int queue_count;		/* # messages currently on AP queue. */
+ 	int pendingq_count;		/* # requests on pendingq list. */
+ 	int requestq_count;		/* # requests on requestq list. */
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 669f96fddad61..d70c4d3d0907f 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -19,7 +19,7 @@
+ static void __ap_flush_queue(struct ap_queue *aq);
+ 
+ /**
+- * ap_queue_enable_interruption(): Enable interruption on an AP queue.
++ * ap_queue_enable_irq(): Enable interrupt support on this AP queue.
+  * @qid: The AP queue number
+  * @ind: the notification indicator byte
+  *
+@@ -27,7 +27,7 @@ static void __ap_flush_queue(struct ap_queue *aq);
+  * value it waits a while and tests the AP queue if interrupts
+  * have been switched on using ap_test_queue().
+  */
+-static int ap_queue_enable_interruption(struct ap_queue *aq, void *ind)
++static int ap_queue_enable_irq(struct ap_queue *aq, void *ind)
+ {
+ 	struct ap_queue_status status;
+ 	struct ap_qirq_ctrl qirqctrl = { 0 };
+@@ -218,7 +218,8 @@ static enum ap_sm_wait ap_sm_read(struct ap_queue *aq)
+ 		return AP_SM_WAIT_NONE;
+ 	case AP_RESPONSE_NO_PENDING_REPLY:
+ 		if (aq->queue_count > 0)
+-			return AP_SM_WAIT_INTERRUPT;
++			return aq->interrupt ?
++				AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 		aq->sm_state = AP_SM_STATE_IDLE;
+ 		return AP_SM_WAIT_NONE;
+ 	default:
+@@ -272,7 +273,8 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
+ 		fallthrough;
+ 	case AP_RESPONSE_Q_FULL:
+ 		aq->sm_state = AP_SM_STATE_QUEUE_FULL;
+-		return AP_SM_WAIT_INTERRUPT;
++		return aq->interrupt ?
++			AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+ 		return AP_SM_WAIT_TIMEOUT;
+@@ -322,7 +324,7 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
+ 	case AP_RESPONSE_NORMAL:
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+-		aq->interrupt = AP_INTR_DISABLED;
++		aq->interrupt = false;
+ 		return AP_SM_WAIT_TIMEOUT;
+ 	default:
+ 		aq->dev_state = AP_DEV_STATE_ERROR;
+@@ -355,7 +357,7 @@ static enum ap_sm_wait ap_sm_reset_wait(struct ap_queue *aq)
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+ 		lsi_ptr = ap_airq_ptr();
+-		if (lsi_ptr && ap_queue_enable_interruption(aq, lsi_ptr) == 0)
++		if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0)
+ 			aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
+ 		else
+ 			aq->sm_state = (aq->queue_count > 0) ?
+@@ -396,7 +398,7 @@ static enum ap_sm_wait ap_sm_setirq_wait(struct ap_queue *aq)
+ 
+ 	if (status.irq_enabled == 1) {
+ 		/* Irqs are now enabled */
+-		aq->interrupt = AP_INTR_ENABLED;
++		aq->interrupt = true;
+ 		aq->sm_state = (aq->queue_count > 0) ?
+ 			AP_SM_STATE_WORKING : AP_SM_STATE_IDLE;
+ 	}
+@@ -586,7 +588,7 @@ static ssize_t interrupt_show(struct device *dev,
+ 	spin_lock_bh(&aq->lock);
+ 	if (aq->sm_state == AP_SM_STATE_SETIRQ_WAIT)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Enable Interrupt pending.\n");
+-	else if (aq->interrupt == AP_INTR_ENABLED)
++	else if (aq->interrupt)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts enabled.\n");
+ 	else
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts disabled.\n");
+@@ -767,7 +769,7 @@ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type)
+ 	aq->ap_dev.device.type = &ap_queue_type;
+ 	aq->ap_dev.device_type = device_type;
+ 	aq->qid = qid;
+-	aq->interrupt = AP_INTR_DISABLED;
++	aq->interrupt = false;
+ 	spin_lock_init(&aq->lock);
+ 	INIT_LIST_HEAD(&aq->pendingq);
+ 	INIT_LIST_HEAD(&aq->requestq);
+diff --git a/drivers/s390/crypto/zcrypt_ccamisc.c b/drivers/s390/crypto/zcrypt_ccamisc.c
+index bc34bedf9db8b..6a3c2b4609652 100644
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1724,10 +1724,10 @@ static int fetch_cca_info(u16 cardnr, u16 domain, struct cca_info *ci)
+ 	rlen = vlen = PAGE_SIZE/2;
+ 	rc = cca_query_crypto_facility(cardnr, domain, "STATICSB",
+ 				       rarray, &rlen, varray, &vlen);
+-	if (rc == 0 && rlen >= 10*8 && vlen >= 240) {
+-		ci->new_apka_mk_state = (char) rarray[7*8];
+-		ci->cur_apka_mk_state = (char) rarray[8*8];
+-		ci->old_apka_mk_state = (char) rarray[9*8];
++	if (rc == 0 && rlen >= 13*8 && vlen >= 240) {
++		ci->new_apka_mk_state = (char) rarray[10*8];
++		ci->cur_apka_mk_state = (char) rarray[11*8];
++		ci->old_apka_mk_state = (char) rarray[12*8];
+ 		if (ci->old_apka_mk_state == '2')
+ 			memcpy(&ci->old_apka_mkvp, varray + 208, 8);
+ 		if (ci->cur_apka_mk_state == '2')
+diff --git a/drivers/soc/mediatek/mt8183-mmsys.h b/drivers/soc/mediatek/mt8183-mmsys.h
+index 579dfc8dc8fc9..9dee485807c94 100644
+--- a/drivers/soc/mediatek/mt8183-mmsys.h
++++ b/drivers/soc/mediatek/mt8183-mmsys.h
+@@ -28,25 +28,32 @@
+ static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = {
+ 	{
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
+-		MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L
++		MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L,
++		MT8183_OVL0_MOUT_EN_OVL0_2L
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
+-		MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
++		MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0,
++		MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L1, DDP_COMPONENT_RDMA1,
+-		MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1
++		MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1,
++		MT8183_OVL1_2L_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
+-		MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0
++		MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0,
++		MT8183_DITHER0_MOUT_IN_DSI0
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
+-		MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L
++		MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L,
++		MT8183_DISP_PATH0_SEL_IN_OVL0_2L
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1
++		MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1,
++		MT8183_DPI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+-		MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0
++		MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0,
++		MT8183_RDMA0_SOUT_COLOR0
+ 	}
+ };
+ 
+diff --git a/drivers/soc/mediatek/mtk-mmsys.c b/drivers/soc/mediatek/mtk-mmsys.c
+index 080660ef11bfa..0f949896fd064 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.c
++++ b/drivers/soc/mediatek/mtk-mmsys.c
+@@ -68,7 +68,9 @@ void mtk_mmsys_ddp_connect(struct device *dev,
+ 
+ 	for (i = 0; i < mmsys->data->num_routes; i++)
+ 		if (cur == routes[i].from_comp && next == routes[i].to_comp) {
+-			reg = readl_relaxed(mmsys->regs + routes[i].addr) | routes[i].val;
++			reg = readl_relaxed(mmsys->regs + routes[i].addr);
++			reg &= ~routes[i].mask;
++			reg |= routes[i].val;
+ 			writel_relaxed(reg, mmsys->regs + routes[i].addr);
+ 		}
+ }
+@@ -85,7 +87,8 @@ void mtk_mmsys_ddp_disconnect(struct device *dev,
+ 
+ 	for (i = 0; i < mmsys->data->num_routes; i++)
+ 		if (cur == routes[i].from_comp && next == routes[i].to_comp) {
+-			reg = readl_relaxed(mmsys->regs + routes[i].addr) & ~routes[i].val;
++			reg = readl_relaxed(mmsys->regs + routes[i].addr);
++			reg &= ~routes[i].mask;
+ 			writel_relaxed(reg, mmsys->regs + routes[i].addr);
+ 		}
+ }
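
The mtk-mmsys fix above is a classic read-modify-write correction:
connecting a route used to OR the new value into the register, which
could leave stale bits behind when the field had previously held a
different route, so each route now carries a mask that is cleared
before the value is set. A compilable sketch of the pattern, with a
plain variable standing in for the MMIO register accessed via
readl_relaxed()/writel_relaxed():

#include <assert.h>
#include <stdint.h>

/* Clear the full field (mask) before setting the new value, so stale
 * bits from an earlier route cannot linger. */
static void route_connect(uint32_t *reg, uint32_t mask, uint32_t val)
{
	uint32_t v = *reg;

	v &= ~mask;	/* drop whatever route was programmed before */
	v |= val;	/* program the new route */
	*reg = v;
}

int main(void)
{
	uint32_t reg = 0x5;	/* field previously held route 0x5 */

	route_connect(&reg, 0x7, 0x2);
	assert(reg == 0x2);	/* OR alone would have produced 0x7 */
	return 0;
}

OR-ing alone only works when the field starts out zero; clearing the
mask first makes reconnecting a different route idempotent.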
+diff --git a/drivers/soc/mediatek/mtk-mmsys.h b/drivers/soc/mediatek/mtk-mmsys.h
+index a760a34e6eca8..5f3e2bf0c40bc 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.h
++++ b/drivers/soc/mediatek/mtk-mmsys.h
+@@ -35,41 +35,54 @@
+ #define RDMA0_SOUT_DSI1				0x1
+ #define RDMA0_SOUT_DSI2				0x4
+ #define RDMA0_SOUT_DSI3				0x5
++#define RDMA0_SOUT_MASK				0x7
+ #define RDMA1_SOUT_DPI0				0x2
+ #define RDMA1_SOUT_DPI1				0x3
+ #define RDMA1_SOUT_DSI1				0x1
+ #define RDMA1_SOUT_DSI2				0x4
+ #define RDMA1_SOUT_DSI3				0x5
++#define RDMA1_SOUT_MASK				0x7
+ #define RDMA2_SOUT_DPI0				0x2
+ #define RDMA2_SOUT_DPI1				0x3
+ #define RDMA2_SOUT_DSI1				0x1
+ #define RDMA2_SOUT_DSI2				0x4
+ #define RDMA2_SOUT_DSI3				0x5
++#define RDMA2_SOUT_MASK				0x7
+ #define DPI0_SEL_IN_RDMA1			0x1
+ #define DPI0_SEL_IN_RDMA2			0x3
++#define DPI0_SEL_IN_MASK			0x3
+ #define DPI1_SEL_IN_RDMA1			(0x1 << 8)
+ #define DPI1_SEL_IN_RDMA2			(0x3 << 8)
++#define DPI1_SEL_IN_MASK			(0x3 << 8)
+ #define DSI0_SEL_IN_RDMA1			0x1
+ #define DSI0_SEL_IN_RDMA2			0x4
++#define DSI0_SEL_IN_MASK			0x7
+ #define DSI1_SEL_IN_RDMA1			0x1
+ #define DSI1_SEL_IN_RDMA2			0x4
++#define DSI1_SEL_IN_MASK			0x7
+ #define DSI2_SEL_IN_RDMA1			(0x1 << 16)
+ #define DSI2_SEL_IN_RDMA2			(0x4 << 16)
++#define DSI2_SEL_IN_MASK			(0x7 << 16)
+ #define DSI3_SEL_IN_RDMA1			(0x1 << 16)
+ #define DSI3_SEL_IN_RDMA2			(0x4 << 16)
++#define DSI3_SEL_IN_MASK			(0x7 << 16)
+ #define COLOR1_SEL_IN_OVL1			0x1
+ 
+ #define OVL_MOUT_EN_RDMA			0x1
+ #define BLS_TO_DSI_RDMA1_TO_DPI1		0x8
+ #define BLS_TO_DPI_RDMA1_TO_DSI			0x2
++#define BLS_RDMA1_DSI_DPI_MASK			0xf
+ #define DSI_SEL_IN_BLS				0x0
+ #define DPI_SEL_IN_BLS				0x0
++#define DPI_SEL_IN_MASK				0x1
+ #define DSI_SEL_IN_RDMA				0x1
++#define DSI_SEL_IN_MASK				0x1
+ 
+ struct mtk_mmsys_routes {
+ 	u32 from_comp;
+ 	u32 to_comp;
+ 	u32 addr;
++	u32 mask;
+ 	u32 val;
+ };
+ 
+@@ -91,124 +104,164 @@ struct mtk_mmsys_driver_data {
+ static const struct mtk_mmsys_routes mmsys_default_routing_table[] = {
+ 	{
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DSI_RDMA1_TO_DPI1
++		DISP_REG_CONFIG_OUT_SEL, BLS_RDMA1_DSI_DPI_MASK,
++		BLS_TO_DSI_RDMA1_TO_DPI1
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_BLS
++		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK,
++		DSI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DPI_RDMA1_TO_DSI
++		DISP_REG_CONFIG_OUT_SEL, BLS_RDMA1_DSI_DPI_MASK,
++		BLS_TO_DPI_RDMA1_TO_DSI
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_RDMA
++		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK,
++		DSI_SEL_IN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_BLS
++		DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_MASK,
++		DPI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_GAMMA, DDP_COMPONENT_RDMA1,
+-		DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1
++		DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1,
++		GAMMA_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OD0, DDP_COMPONENT_RDMA0,
+-		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0
++		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0,
++		OD_MOUT_EN_RDMA0
+ 	}, {
+ 		DDP_COMPONENT_OD1, DDP_COMPONENT_RDMA1,
+-		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1
++		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1,
++		OD1_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+-		DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0
++		DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0,
++		OVL0_MOUT_EN_COLOR0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+-		DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0
++		DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0,
++		COLOR0_SEL_IN_OVL0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+-		DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA
++		DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA,
++		OVL_MOUT_EN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+-		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1
++		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1,
++		OVL1_MOUT_EN_COLOR1
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+-		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1
++		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1,
++		COLOR1_SEL_IN_OVL1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
++		DPI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
++		DPI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
++		DSI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
++		DSI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
++		DSI2_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
++		DSI3_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
++		DPI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
++		DPI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
++		DSI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
++		DSI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
++		DSI2_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
++		DSI3_SEL_IN_RDMA2
+ 	}
+ };
+ 
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index 2daa17ba54a3f..fa209b479ab35 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -403,12 +403,11 @@ static int rpmhpd_power_on(struct generic_pm_domain *domain)
+ static int rpmhpd_power_off(struct generic_pm_domain *domain)
+ {
+ 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+-	int ret = 0;
++	int ret;
+ 
+ 	mutex_lock(&rpmhpd_lock);
+ 
+-	ret = rpmhpd_aggregate_corner(pd, pd->level[0]);
+-
++	ret = rpmhpd_aggregate_corner(pd, 0);
+ 	if (!ret)
+ 		pd->enabled = false;
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 1d3d5e3ec2b07..6e9a9cd28b178 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -109,7 +109,7 @@ struct smsm_entry {
+ 	DECLARE_BITMAP(irq_enabled, 32);
+ 	DECLARE_BITMAP(irq_rising, 32);
+ 	DECLARE_BITMAP(irq_falling, 32);
+-	u32 last_value;
++	unsigned long last_value;
+ 
+ 	u32 *remote_state;
+ 	u32 *subscription;
+@@ -204,8 +204,7 @@ static irqreturn_t smsm_intr(int irq, void *data)
+ 	u32 val;
+ 
+ 	val = readl(entry->remote_state);
+-	changed = val ^ entry->last_value;
+-	entry->last_value = val;
++	changed = val ^ xchg(&entry->last_value, val);
+ 
+ 	for_each_set_bit(i, entry->irq_enabled, 32) {
+ 		if (!(changed & BIT(i)))
+@@ -264,6 +263,12 @@ static void smsm_unmask_irq(struct irq_data *irqd)
+ 	struct qcom_smsm *smsm = entry->smsm;
+ 	u32 val;
+ 
++	/* Make sure our last cached state is up-to-date */
++	if (readl(entry->remote_state) & BIT(irq))
++		set_bit(irq, &entry->last_value);
++	else
++		clear_bit(irq, &entry->last_value);
++
+ 	set_bit(irq, entry->irq_enabled);
+ 
+ 	if (entry->subscription) {
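
Two things change in smsm: the interrupt handler now computes the
changed bits and refreshes the cached state in a single atomic
exchange, and unmasking resynchronizes the cached bit with the live
register so a stale cache cannot swallow or spuriously deliver the
first event. The exchange idiom in portable C11 (state narrowed to 32
bits for the sketch):

#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Compute which bits changed while atomically updating the cached copy,
 * mirroring "changed = val ^ xchg(&entry->last_value, val)". */
static uint32_t diff_and_cache(_Atomic uint32_t *last, uint32_t val)
{
	return val ^ atomic_exchange(last, val);
}

int main(void)
{
	_Atomic uint32_t last = 0x0f;

	assert(diff_and_cache(&last, 0x33) == 0x3c);	/* bits 2-5 flipped */
	assert(atomic_load(&last) == 0x33);		/* cache now current */
	return 0;
}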
+diff --git a/drivers/soc/rockchip/Kconfig b/drivers/soc/rockchip/Kconfig
+index 2c13bf4dd5dbe..25eb2c1e31bb2 100644
+--- a/drivers/soc/rockchip/Kconfig
++++ b/drivers/soc/rockchip/Kconfig
+@@ -6,8 +6,8 @@ if ARCH_ROCKCHIP || COMPILE_TEST
+ #
+ 
+ config ROCKCHIP_GRF
+-	bool
+-	default y
++	bool "Rockchip General Register Files support" if COMPILE_TEST
++	default y if ARCH_ROCKCHIP
+ 	help
+ 	  The General Register Files are a central component providing
+ 	  special additional settings registers for a lot of soc-components.
+diff --git a/drivers/spi/spi-coldfire-qspi.c b/drivers/spi/spi-coldfire-qspi.c
+index 8996115ce736a..263ce90473277 100644
+--- a/drivers/spi/spi-coldfire-qspi.c
++++ b/drivers/spi/spi-coldfire-qspi.c
+@@ -444,7 +444,7 @@ static int mcfqspi_remove(struct platform_device *pdev)
+ 	mcfqspi_wr_qmr(mcfqspi, MCFQSPI_QMR_MSTR);
+ 
+ 	mcfqspi_cs_teardown(mcfqspi);
+-	clk_disable(mcfqspi->clk);
++	clk_disable_unprepare(mcfqspi->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index e114e6fe5ea5b..d112c2cac042b 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -213,12 +213,6 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	 * line for the controller
+ 	 */
+ 	if (spi->cs_gpiod) {
+-		/*
+-		 * FIXME: is this code ever executed? This host does not
+-		 * set SPI_MASTER_GPIO_SS so this chipselect callback should
+-		 * not get called from the SPI core when we are using
+-		 * GPIOs for chip select.
+-		 */
+ 		if (value == BITBANG_CS_ACTIVE)
+ 			gpiod_set_value(spi->cs_gpiod, 1);
+ 		else
+@@ -945,7 +939,7 @@ static int davinci_spi_probe(struct platform_device *pdev)
+ 	master->bus_num = pdev->id;
+ 	master->num_chipselect = pdata->num_chipselect;
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
+-	master->flags = SPI_MASTER_MUST_RX;
++	master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_GPIO_SS;
+ 	master->setup = davinci_spi_setup;
+ 	master->cleanup = davinci_spi_cleanup;
+ 	master->can_dma = davinci_spi_can_dma;
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index fb45e6af66381..fd004c9db9dc0 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -530,6 +530,7 @@ static int dspi_request_dma(struct fsl_dspi *dspi, phys_addr_t phy_addr)
+ 		goto err_rx_dma_buf;
+ 	}
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = phy_addr + SPI_POPR;
+ 	cfg.dst_addr = phy_addr + SPI_PUSHR;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/spi/spi-pic32.c b/drivers/spi/spi-pic32.c
+index 104bde153efd2..5eb7b61bbb4d8 100644
+--- a/drivers/spi/spi-pic32.c
++++ b/drivers/spi/spi-pic32.c
+@@ -361,6 +361,7 @@ static int pic32_spi_dma_config(struct pic32_spi *pic32s, u32 dma_width)
+ 	struct dma_slave_config cfg;
+ 	int ret;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.device_fc = true;
+ 	cfg.src_addr = pic32s->dma_base + buf_offset;
+ 	cfg.dst_addr = pic32s->dma_base + buf_offset;
+diff --git a/drivers/spi/spi-sprd-adi.c b/drivers/spi/spi-sprd-adi.c
+index ab19068be8675..98ef17389952a 100644
+--- a/drivers/spi/spi-sprd-adi.c
++++ b/drivers/spi/spi-sprd-adi.c
+@@ -103,7 +103,7 @@
+ #define HWRST_STATUS_WATCHDOG		0xf0
+ 
+ /* Default timeout of 50 ms, expressed in 32768 Hz watchdog counter ticks */
+-#define WDG_LOAD_VAL			((50 * 1000) / 32768)
++#define WDG_LOAD_VAL			((50 * 32768) / 1000)
+ #define WDG_LOAD_MASK			GENMASK(15, 0)
+ #define WDG_UNLOCK_KEY			0xe551
+ 
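
The WDG_LOAD_VAL fix is worth spelling out: the watchdog counter ticks
at 32768 Hz, so 50 ms corresponds to (50 * 32768) / 1000 = 1638 ticks,
while the old expression (50 * 1000) / 32768 truncated to a single
tick, roughly 30 us, making the watchdog fire almost immediately. A
quick check:

#include <stdio.h>

int main(void)
{
	/* Watchdog counter runs at 32768 Hz; convert 50 ms to ticks. */
	unsigned old_val = (50 * 1000) / 32768;	/* buggy: ratio inverted */
	unsigned new_val = (50 * 32768) / 1000;	/* fixed: ticks for 50 ms */

	printf("old=%u ticks (~%u us)\n", old_val, old_val * 1000000 / 32768);
	printf("new=%u ticks (~%u ms)\n", new_val, new_val * 1000 / 32768);
	return 0;
}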
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index 9262c6418463b..cfa222c9bd5e7 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -545,7 +545,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -563,7 +563,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -579,7 +579,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 
+@@ -603,7 +603,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+diff --git a/drivers/staging/clocking-wizard/Kconfig b/drivers/staging/clocking-wizard/Kconfig
+index 69cf51445f082..2324b5d737886 100644
+--- a/drivers/staging/clocking-wizard/Kconfig
++++ b/drivers/staging/clocking-wizard/Kconfig
+@@ -5,6 +5,6 @@
+ 
+ config COMMON_CLK_XLNX_CLKWZRD
+ 	tristate "Xilinx Clocking Wizard"
+-	depends on COMMON_CLK && OF && IOMEM
++	depends on COMMON_CLK && OF && HAS_IOMEM
+ 	help
+ 	  Support for the Xilinx Clocking Wizard IP core clock generator.
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+index 11196180a2066..34bf92de2f29b 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+@@ -1545,16 +1545,19 @@ static struct v4l2_ctrl_config mt9m114_controls[] = {
+ static int mt9m114_detect(struct mt9m114_device *dev, struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+-	u32 retvalue;
++	u32 model;
++	int ret;
+ 
+ 	if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) {
+ 		dev_err(&client->dev, "%s: i2c error", __func__);
+ 		return -ENODEV;
+ 	}
+-	mt9m114_read_reg(client, MISENSOR_16BIT, (u32)MT9M114_PID, &retvalue);
+-	dev->real_model_id = retvalue;
++	ret = mt9m114_read_reg(client, MISENSOR_16BIT, MT9M114_PID, &model);
++	if (ret)
++		return ret;
++	dev->real_model_id = model;
+ 
+-	if (retvalue != MT9M114_MOD_ID) {
++	if (model != MT9M114_MOD_ID) {
+ 		dev_err(&client->dev, "%s: failed: client->addr = %x\n",
+ 			__func__, client->addr);
+ 		return -ENODEV;
+diff --git a/drivers/staging/media/tegra-video/vi.c b/drivers/staging/media/tegra-video/vi.c
+index 89709cd06d4d3..d321790b07d95 100644
+--- a/drivers/staging/media/tegra-video/vi.c
++++ b/drivers/staging/media/tegra-video/vi.c
+@@ -508,8 +508,8 @@ static int __tegra_channel_try_format(struct tegra_vi_channel *chan,
+ 		return -ENODEV;
+ 
+ 	sd_state = v4l2_subdev_alloc_state(subdev);
+-	if (!sd_state)
+-		return -ENOMEM;
++	if (IS_ERR(sd_state))
++		return PTR_ERR(sd_state);
+ 	/*
+ 	 * Retrieve the format information and if requested format isn't
+ 	 * supported, keep the current format.
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index f0e5da77ed6d4..460e428b7592f 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2611,7 +2611,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ 		return PTR_ERR(sport->port.membase);
+ 
+ 	sport->port.membase += sdata->reg_off;
+-	sport->port.mapbase = res->start;
++	sport->port.mapbase = res->start + sdata->reg_off;
+ 	sport->port.dev = &pdev->dev;
+ 	sport->port.type = PORT_LPUART;
+ 	sport->devtype = sdata->devtype;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 26debec26b4e1..79c6cc39e5dd6 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2290,8 +2290,6 @@ static int tty_fasync(int fd, struct file *filp, int on)
+  *	Locking:
+  *		Called functions take tty_ldiscs_lock
+  *		current->signal->tty check is safe without locks
+- *
+- *	FIXME: may race normal receive processing
+  */
+ 
+ static int tiocsti(struct tty_struct *tty, char __user *p)
+@@ -2307,8 +2305,10 @@ static int tiocsti(struct tty_struct *tty, char __user *p)
+ 	ld = tty_ldisc_ref_wait(tty);
+ 	if (!ld)
+ 		return -EIO;
++	tty_buffer_lock_exclusive(tty->port);
+ 	if (ld->ops->receive_buf)
+ 		ld->ops->receive_buf(tty, &ch, &mbz, 1);
++	tty_buffer_unlock_exclusive(tty->port);
+ 	tty_ldisc_deref(ld);
+ 	return 0;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index ffe301d6ea359..d0f9b7c296b0d 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -598,6 +598,8 @@ static int dwc3_meson_g12a_otg_init(struct platform_device *pdev,
+ 				   USB_R5_ID_DIG_IRQ, 0);
+ 
+ 		irq = platform_get_irq(pdev, 0);
++		if (irq < 0)
++			return irq;
+ 		ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+ 						dwc3_meson_g12a_irq_thread,
+ 						IRQF_ONESHOT, pdev->name, priv);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 49e6ca94486dd..cfbb96f6627e4 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -614,6 +614,10 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
+ 		qcom->acpi_pdata->dwc3_core_base_size;
+ 
+ 	irq = platform_get_irq(pdev_irq, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto out;
++	}
+ 	child_res[1].flags = IORESOURCE_IRQ;
+ 	child_res[1].start = child_res[1].end = irq;
+ 
+diff --git a/drivers/usb/gadget/udc/at91_udc.c b/drivers/usb/gadget/udc/at91_udc.c
+index eede5cedacb4a..d9ad9adf7348f 100644
+--- a/drivers/usb/gadget/udc/at91_udc.c
++++ b/drivers/usb/gadget/udc/at91_udc.c
+@@ -1876,7 +1876,9 @@ static int at91udc_probe(struct platform_device *pdev)
+ 	clk_disable(udc->iclk);
+ 
+ 	/* request UDC and maybe VBUS irqs */
+-	udc->udp_irq = platform_get_irq(pdev, 0);
++	udc->udp_irq = retval = platform_get_irq(pdev, 0);
++	if (retval < 0)
++		goto err_unprepare_iclk;
+ 	retval = devm_request_irq(dev, udc->udp_irq, at91_udc_irq, 0,
+ 				  driver_name, udc);
+ 	if (retval) {
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
+index 0bef6b3f049b9..fa1a3908ec3bb 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
+@@ -488,27 +488,14 @@ static int bdc_probe(struct platform_device *pdev)
+ 	int irq;
+ 	u32 temp;
+ 	struct device *dev = &pdev->dev;
+-	struct clk *clk;
+ 	int phy_num;
+ 
+ 	dev_dbg(dev, "%s()\n", __func__);
+ 
+-	clk = devm_clk_get_optional(dev, "sw_usbd");
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
+-
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not enable clock\n");
+-		return ret;
+-	}
+-
+ 	bdc = devm_kzalloc(dev, sizeof(*bdc), GFP_KERNEL);
+ 	if (!bdc)
+ 		return -ENOMEM;
+ 
+-	bdc->clk = clk;
+-
+ 	bdc->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(bdc->regs))
+ 		return PTR_ERR(bdc->regs);
+@@ -545,10 +532,20 @@ static int bdc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	bdc->clk = devm_clk_get_optional(dev, "sw_usbd");
++	if (IS_ERR(bdc->clk))
++		return PTR_ERR(bdc->clk);
++
++	ret = clk_prepare_enable(bdc->clk);
++	if (ret) {
++		dev_err(dev, "could not enable clock\n");
++		return ret;
++	}
++
+ 	ret = bdc_phy_init(bdc);
+ 	if (ret) {
+ 		dev_err(bdc->dev, "BDC phy init failure:%d\n", ret);
+-		return ret;
++		goto disable_clk;
+ 	}
+ 
+ 	temp = bdc_readl(bdc->regs, BDC_BDCCAP1);
+@@ -560,7 +557,8 @@ static int bdc_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(dev,
+ 				"No suitable DMA config available, abort\n");
+-			return -ENOTSUPP;
++			ret = -ENOTSUPP;
++			goto phycleanup;
+ 		}
+ 		dev_dbg(dev, "Using 32-bit address\n");
+ 	}
+@@ -580,6 +578,8 @@ cleanup:
+ 	bdc_hw_exit(bdc);
+ phycleanup:
+ 	bdc_phy_exit(bdc);
++disable_clk:
++	clk_disable_unprepare(bdc->clk);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/mv_u3d_core.c b/drivers/usb/gadget/udc/mv_u3d_core.c
+index ce3d7a3eb7e33..a1057ddfbda33 100644
+--- a/drivers/usb/gadget/udc/mv_u3d_core.c
++++ b/drivers/usb/gadget/udc/mv_u3d_core.c
+@@ -1921,14 +1921,6 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 		goto err_get_irq;
+ 	}
+ 	u3d->irq = r->start;
+-	if (request_irq(u3d->irq, mv_u3d_irq,
+-		IRQF_SHARED, driver_name, u3d)) {
+-		u3d->irq = 0;
+-		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
+-			u3d->irq);
+-		retval = -ENODEV;
+-		goto err_request_irq;
+-	}
+ 
+ 	/* initialize gadget structure */
+ 	u3d->gadget.ops = &mv_u3d_ops;	/* usb_gadget_ops */
+@@ -1941,6 +1933,15 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ 	mv_u3d_eps_init(u3d);
+ 
++	if (request_irq(u3d->irq, mv_u3d_irq,
++		IRQF_SHARED, driver_name, u3d)) {
++		u3d->irq = 0;
++		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
++			u3d->irq);
++		retval = -ENODEV;
++		goto err_request_irq;
++	}
++
+ 	/* external vbus detection */
+ 	if (u3d->vbus) {
+ 		u3d->clock_gating = 1;
+@@ -1964,8 +1965,8 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ err_unregister:
+ 	free_irq(u3d->irq, u3d);
+-err_request_irq:
+ err_get_irq:
++err_request_irq:
+ 	kfree(u3d->status_req);
+ err_alloc_status_req:
+ 	kfree(u3d->eps);
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index f1b35a39d1ba8..57d417a7c3e0a 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2707,10 +2707,15 @@ static const struct renesas_usb3_priv renesas_usb3_priv_r8a77990 = {
+ 
+ static const struct of_device_id usb3_of_match[] = {
+ 	{
++		.compatible = "renesas,r8a774c0-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,r8a7795-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+-	},
+-	{
++	}, {
++		.compatible = "renesas,r8a77990-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,rcar-gen3-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+ 	},
+@@ -2719,18 +2724,10 @@ static const struct of_device_id usb3_of_match[] = {
+ MODULE_DEVICE_TABLE(of, usb3_of_match);
+ 
+ static const struct soc_device_attribute renesas_usb3_quirks_match[] = {
+-	{
+-		.soc_id = "r8a774c0",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{
+ 		.soc_id = "r8a7795", .revision = "ES1.*",
+ 		.data = &renesas_usb3_priv_r8a7795_es1,
+ 	},
+-	{
+-		.soc_id = "r8a77990",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{ /* sentinel */ },
+ };
+ 
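
The renesas_usb3 change moves the r8a774c0 and r8a77990 per-SoC data
out of the soc_device quirk table and into the of_device_id table, so
it is selected by compatible string at probe time. A compilable
miniature of that sentinel-terminated match-table lookup (compatible
strings taken from the hunk, everything else illustrative):

#include <assert.h>
#include <stddef.h>
#include <string.h>

struct match_entry {
	const char *compatible;
	const void *data;
};

static const int priv_gen3 = 3, priv_r8a77990 = 90;

static const struct match_entry table[] = {
	{ "renesas,r8a774c0-usb3-peri", &priv_r8a77990 },
	{ "renesas,r8a77990-usb3-peri", &priv_r8a77990 },
	{ "renesas,rcar-gen3-usb3-peri", &priv_gen3 },
	{ }	/* sentinel */
};

/* Return the .data of the first matching compatible string, like
 * of_match_device() does for the table above. */
static const void *match(const char *compat)
{
	const struct match_entry *e;

	for (e = table; e->compatible; e++)
		if (!strcmp(e->compatible, compat))
			return e->data;
	return NULL;
}

int main(void)
{
	assert(match("renesas,r8a77990-usb3-peri") == &priv_r8a77990);
	assert(match("bogus") == NULL);
	return 0;
}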
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index 179777cb699fb..e3931da24277a 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -1784,6 +1784,10 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	s3c2410_udc_reinit(udc);
+ 
+ 	irq_usbd = platform_get_irq(pdev, 0);
++	if (irq_usbd < 0) {
++		retval = irq_usbd;
++		goto err_udc_clk;
++	}
+ 
+ 	/* irq setup after old hardware state is cleaned up */
+ 	retval = request_irq(irq_usbd, s3c2410_udc_irq,
+diff --git a/drivers/usb/host/ehci-orion.c b/drivers/usb/host/ehci-orion.c
+index a319b1df3011c..3626758b3e2aa 100644
+--- a/drivers/usb/host/ehci-orion.c
++++ b/drivers/usb/host/ehci-orion.c
+@@ -264,8 +264,11 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ 	 * the clock does not exist.
+ 	 */
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (!IS_ERR(priv->clk))
+-		clk_prepare_enable(priv->clk);
++	if (!IS_ERR(priv->clk)) {
++		err = clk_prepare_enable(priv->clk);
++		if (err)
++			goto err_put_hcd;
++	}
+ 
+ 	priv->phy = devm_phy_optional_get(&pdev->dev, "usb");
+ 	if (IS_ERR(priv->phy)) {
+@@ -311,6 +314,7 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ err_dis_clk:
+ 	if (!IS_ERR(priv->clk))
+ 		clk_disable_unprepare(priv->clk);
++err_put_hcd:
+ 	usb_put_hcd(hcd);
+ err:
+ 	dev_err(&pdev->dev, "init %s fail, %d\n",
+diff --git a/drivers/usb/host/ohci-tmio.c b/drivers/usb/host/ohci-tmio.c
+index 7f857bad9e95b..08ec2ab0d95a5 100644
+--- a/drivers/usb/host/ohci-tmio.c
++++ b/drivers/usb/host/ohci-tmio.c
+@@ -202,6 +202,9 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
+ 	if (!cell)
+ 		return -EINVAL;
+ 
++	if (irq < 0)
++		return irq;
++
+ 	hcd = usb_create_hcd(&ohci_tmio_hc_driver, &dev->dev, dev_name(&dev->dev));
+ 	if (!hcd) {
+ 		ret = -ENOMEM;
+diff --git a/drivers/usb/misc/brcmstb-usb-pinmap.c b/drivers/usb/misc/brcmstb-usb-pinmap.c
+index 336653091e3b3..2b2019c19cdeb 100644
+--- a/drivers/usb/misc/brcmstb-usb-pinmap.c
++++ b/drivers/usb/misc/brcmstb-usb-pinmap.c
+@@ -293,6 +293,8 @@ static int __init brcmstb_usb_pinmap_probe(struct platform_device *pdev)
+ 
+ 		/* Enable interrupt for out pins */
+ 		irq = platform_get_irq(pdev, 0);
++		if (irq < 0)
++			return irq;
+ 		err = devm_request_irq(&pdev->dev, irq,
+ 				       brcmstb_usb_pinmap_ovr_isr,
+ 				       IRQF_TRIGGER_RISING,
+diff --git a/drivers/usb/phy/phy-fsl-usb.c b/drivers/usb/phy/phy-fsl-usb.c
+index f34c9437a182c..972704262b02b 100644
+--- a/drivers/usb/phy/phy-fsl-usb.c
++++ b/drivers/usb/phy/phy-fsl-usb.c
+@@ -873,6 +873,8 @@ int usb_otg_start(struct platform_device *pdev)
+ 
+ 	/* request irq */
+ 	p_otg->irq = platform_get_irq(pdev, 0);
++	if (p_otg->irq < 0)
++		return p_otg->irq;
+ 	status = request_irq(p_otg->irq, fsl_otg_isr,
+ 				IRQF_SHARED, driver_name, p_otg);
+ 	if (status) {
+diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
+index baebb1f5a9737..a3e043e3e4aae 100644
+--- a/drivers/usb/phy/phy-tahvo.c
++++ b/drivers/usb/phy/phy-tahvo.c
+@@ -393,7 +393,9 @@ static int tahvo_usb_probe(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, tu);
+ 
+-	tu->irq = platform_get_irq(pdev, 0);
++	tu->irq = ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
+ 	ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ 				   IRQF_ONESHOT,
+ 				   "tahvo-vbus", tu);
+diff --git a/drivers/usb/phy/phy-twl6030-usb.c b/drivers/usb/phy/phy-twl6030-usb.c
+index 8ba6c5a915570..ab3c38a7d8ac0 100644
+--- a/drivers/usb/phy/phy-twl6030-usb.c
++++ b/drivers/usb/phy/phy-twl6030-usb.c
+@@ -348,6 +348,11 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	twl->irq2		= platform_get_irq(pdev, 1);
+ 	twl->linkstat		= MUSB_UNKNOWN;
+ 
++	if (twl->irq1 < 0)
++		return twl->irq1;
++	if (twl->irq2 < 0)
++		return twl->irq2;
++
+ 	twl->comparator.set_vbus	= twl6030_set_vbus;
+ 	twl->comparator.start_srp	= twl6030_start_srp;
+ 
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index e48fded3e414c..8d8959a70e440 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -409,6 +409,33 @@ static bool pwm_backlight_is_linear(struct platform_pwm_backlight_data *data)
+ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ {
+ 	struct device_node *node = pb->dev->of_node;
++	bool active = true;
++
++	/*
++	 * If the enable GPIO is present, observable (either as input
++	 * or output) and off, then the backlight is not currently active.
++	 */
++	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
++		active = false;
++
++	if (!regulator_is_enabled(pb->power_supply))
++		active = false;
++
++	if (!pwm_is_enabled(pb->pwm))
++		active = false;
++
++	/*
++	 * Synchronize the enable_gpio with the observed state of the
++	 * hardware.
++	 */
++	if (pb->enable_gpio)
++		gpiod_direction_output(pb->enable_gpio, active);
++
++	/*
++	 * Do not change pb->enabled here! pb->enabled essentially
++	 * tells us if we own one of the regulator's use counts and
++	 * right now we do not.
++	 */
+ 
+ 	/* Not booted with device tree or no phandle link to the node */
+ 	if (!node || !node->phandle)
+@@ -420,20 +447,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ 	 * assume that another driver will enable the backlight at the
+ 	 * appropriate time. Therefore, if it is disabled, keep it so.
+ 	 */
+-
+-	/* if the enable GPIO is disabled, do not enable the backlight */
+-	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The regulator is disabled, do not enable the backlight */
+-	if (!regulator_is_enabled(pb->power_supply))
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The PWM is disabled, keep it like this */
+-	if (!pwm_is_enabled(pb->pwm))
+-		return FB_BLANK_POWERDOWN;
+-
+-	return FB_BLANK_UNBLANK;
++	return active ? FB_BLANK_UNBLANK : FB_BLANK_POWERDOWN;
+ }
+ 
+ static int pwm_backlight_probe(struct platform_device *pdev)
+@@ -486,18 +500,6 @@ static int pwm_backlight_probe(struct platform_device *pdev)
+ 		goto err_alloc;
+ 	}
+ 
+-	/*
+-	 * If the GPIO is not known to be already configured as output, that
+-	 * is, if gpiod_get_direction returns either 1 or -EINVAL, change the
+-	 * direction to output and set the GPIO as active.
+-	 * Do not force the GPIO to active when it was already output as it
+-	 * could cause backlight flickering or we would enable the backlight too
+-	 * early. Leave the decision of the initial backlight state for later.
+-	 */
+-	if (pb->enable_gpio &&
+-	    gpiod_get_direction(pb->enable_gpio) != 0)
+-		gpiod_direction_output(pb->enable_gpio, 1);
+-
+ 	pb->power_supply = devm_regulator_get(&pdev->dev, "power");
+ 	if (IS_ERR(pb->power_supply)) {
+ 		ret = PTR_ERR(pb->power_supply);
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 1c855145711ba..63e2f17f3c619 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -962,6 +962,7 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	struct fb_var_screeninfo old_var;
+ 	struct fb_videomode mode;
+ 	struct fb_event event;
++	u32 unused;
+ 
+ 	if (var->activate & FB_ACTIVATE_INV_MODE) {
+ 		struct fb_videomode mode1, mode2;
+@@ -1008,6 +1009,11 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	if (var->xres < 8 || var->yres < 8)
+ 		return -EINVAL;
+ 
++	/* Too huge resolution causes multiplication overflow. */
++	if (check_mul_overflow(var->xres, var->yres, &unused) ||
++	    check_mul_overflow(var->xres_virtual, var->yres_virtual, &unused))
++		return -EINVAL;
++
+ 	ret = info->fbops->fb_check_var(var, info);
+ 
+ 	if (ret)
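
The fbmem guard rejects any requested mode whose xres * yres (or the
virtual equivalent) would overflow a u32 before drivers ever multiply
them. In user space the same check is the GCC/Clang builtin that the
kernel's check_mul_overflow() wraps:

#include <assert.h>
#include <stdint.h>

/* Reject x*y when the product would not fit in 32 bits, mirroring the
 * guard added to fb_set_var() above. */
static int resolution_ok(uint32_t xres, uint32_t yres)
{
	uint32_t unused;

	return !__builtin_mul_overflow(xres, yres, &unused);
}

int main(void)
{
	assert(resolution_ok(7680, 4320));		/* 8K: ~33 M, fits */
	assert(!resolution_ok(1 << 16, 1 << 16));	/* 2^32 wraps a u32 */
	return 0;
}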
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index 9bd03a2310328..171ad8b42107e 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -358,14 +358,9 @@ cifs_strndup_from_utf16(const char *src, const int maxlen,
+ 		if (!dst)
+ 			return NULL;
+ 		cifs_from_utf16(dst, (__le16 *) src, len, maxlen, codepage,
+-			       NO_MAP_UNI_RSVD);
++				NO_MAP_UNI_RSVD);
+ 	} else {
+-		len = strnlen(src, maxlen);
+-		len++;
+-		dst = kmalloc(len, GFP_KERNEL);
+-		if (!dst)
+-			return NULL;
+-		strlcpy(dst, src, len);
++		dst = kstrndup(src, maxlen, GFP_KERNEL);
+ 	}
+ 
+ 	return dst;
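
The non-Unicode branch above shrinks a strnlen() + kmalloc() + strlcpy()
sequence into a single kstrndup(), which allocates, copies at most
maxlen bytes, and NUL-terminates in one call. The user-space analogue
is POSIX strndup():

#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* strndup() copies at most maxlen bytes and NUL-terminates,
	 * replacing the manual measure/allocate/copy dance. */
	char *dst = strndup("hello world", 5);

	assert(dst && strcmp(dst, "hello") == 0);
	free(dst);
	return 0;
}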
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index eed59bc1d9130..727c8835b2227 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -1266,10 +1266,17 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 			ctx->posix_paths = 1;
+ 		break;
+ 	case Opt_unix:
+-		if (result.negated)
++		if (result.negated) {
++			if (ctx->linux_ext == 1)
++				pr_warn_once("conflicting posix mount options specified\n");
+ 			ctx->linux_ext = 0;
+-		else
+ 			ctx->no_linux_ext = 1;
++		} else {
++			if (ctx->no_linux_ext == 1)
++				pr_warn_once("conflicting posix mount options specified\n");
++			ctx->linux_ext = 1;
++			ctx->no_linux_ext = 0;
++		}
+ 		break;
+ 	case Opt_nocase:
+ 		ctx->nocase = 1;
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index bfee176b901d4..54d77c99e21c0 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -369,7 +369,7 @@ int get_symlink_reparse_path(char *full_path, struct cifs_sb_info *cifs_sb,
+  */
+ 
+ static int
+-initiate_cifs_search(const unsigned int xid, struct file *file,
++_initiate_cifs_search(const unsigned int xid, struct file *file,
+ 		     const char *full_path)
+ {
+ 	__u16 search_flags;
+@@ -451,6 +451,27 @@ error_exit:
+ 	return rc;
+ }
+ 
++static int
++initiate_cifs_search(const unsigned int xid, struct file *file,
++		     const char *full_path)
++{
++	int rc, retry_count = 0;
++
++	do {
++		rc = _initiate_cifs_search(xid, file, full_path);
++		/*
++		 * If we don't have enough credits to start reading the
++		 * directory just try again after short wait.
++		 */
++		if (rc != -EDEADLK)
++			break;
++
++		usleep_range(512, 2048);
++	} while (retry_count++ < 5);
++
++	return rc;
++}
++
+ /* return length of unicode string in bytes */
+ static int cifs_unicode_bytelen(const char *str)
+ {
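
The readdir change wraps the original search setup in a bounded retry:
-EDEADLK here means the server temporarily lacks credits, so the
wrapper sleeps briefly and tries again, at most five extra times. A
self-contained sketch of that retry shape (the error value and sleep
call are illustrative stand-ins for -EDEADLK and usleep_range()):

#define _DEFAULT_SOURCE
#include <assert.h>
#include <unistd.h>

#define EDEADLK_ERR (-35)	/* illustrative stand-in for -EDEADLK */

static int attempts;

/* Fake operation: fail with "no credits" twice, then succeed. */
static int try_open_search(void)
{
	return ++attempts < 3 ? EDEADLK_ERR : 0;
}

/* Retry only on the transient error, back off briefly, give up after
 * five extra attempts, like initiate_cifs_search() above. */
static int open_search_with_retry(void)
{
	int rc, retry = 0;

	do {
		rc = try_open_search();
		if (rc != EDEADLK_ERR)
			break;
		usleep(1024);
	} while (retry++ < 5);

	return rc;
}

int main(void)
{
	assert(open_search_with_retry() == 0 && attempts == 3);
	return 0;
}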
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index df00231d3ecc9..7d162b0efbf03 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -179,8 +179,10 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not clean up after itself at exit? */
+@@ -314,8 +316,10 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not cleanup after itself at exit? */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6afd4562335fc..97d48c5bdebcb 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -261,8 +261,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	};
+ 	unsigned int seq_id = 0;
+ 
+-	if (unlikely(f2fs_readonly(inode->i_sb) ||
+-				is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
++	if (unlikely(f2fs_readonly(inode->i_sb)))
+ 		return 0;
+ 
+ 	trace_f2fs_sync_file_enter(inode);
+@@ -276,7 +275,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	ret = file_write_and_wait_range(file, start, end);
+ 	clear_inode_flag(inode, FI_NEED_IPU);
+ 
+-	if (ret) {
++	if (ret || is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
+ 		trace_f2fs_sync_file_exit(inode, cp_reason, datasync, ret);
+ 		return ret;
+ 	}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 8fecd3050ccd4..ce703e6fdafc0 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2039,8 +2039,17 @@ restore_flag:
+ 
+ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+ {
++	int retry = DEFAULT_RETRY_IO_COUNT;
++
+ 	/* we should flush all the data to keep data consistency */
+-	sync_inodes_sb(sbi->sb);
++	do {
++		sync_inodes_sb(sbi->sb);
++		cond_resched();
++		congestion_wait(BLK_RW_ASYNC, DEFAULT_IO_TIMEOUT);
++	} while (get_pages(sbi, F2FS_DIRTY_DATA) && retry--);
++
++	if (unlikely(retry < 0))
++		f2fs_warn(sbi, "checkpoint=enable has some unwritten data.");
+ 
+ 	down_write(&sbi->gc_lock);
+ 	f2fs_dirty_to_prefree(sbi);
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index f946bec8f1f1b..68added37c15f 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -150,7 +150,8 @@ void f_delown(struct file *filp)
+ pid_t f_getown(struct file *filp)
+ {
+ 	pid_t pid = 0;
+-	read_lock(&filp->f_owner.lock);
++
++	read_lock_irq(&filp->f_owner.lock);
+ 	rcu_read_lock();
+ 	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type)) {
+ 		pid = pid_vnr(filp->f_owner.pid);
+@@ -158,7 +159,7 @@ pid_t f_getown(struct file *filp)
+ 			pid = -pid;
+ 	}
+ 	rcu_read_unlock();
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 	return pid;
+ }
+ 
+@@ -208,7 +209,7 @@ static int f_getown_ex(struct file *filp, unsigned long arg)
+ 	struct f_owner_ex owner = {};
+ 	int ret = 0;
+ 
+-	read_lock(&filp->f_owner.lock);
++	read_lock_irq(&filp->f_owner.lock);
+ 	rcu_read_lock();
+ 	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type))
+ 		owner.pid = pid_vnr(filp->f_owner.pid);
+@@ -231,7 +232,7 @@ static int f_getown_ex(struct file *filp, unsigned long arg)
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	if (!ret) {
+ 		ret = copy_to_user(owner_p, &owner, sizeof(owner));
+@@ -249,10 +250,10 @@ static int f_getowner_uids(struct file *filp, unsigned long arg)
+ 	uid_t src[2];
+ 	int err;
+ 
+-	read_lock(&filp->f_owner.lock);
++	read_lock_irq(&filp->f_owner.lock);
+ 	src[0] = from_kuid(user_ns, filp->f_owner.uid);
+ 	src[1] = from_kuid(user_ns, filp->f_owner.euid);
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	err  = put_user(src[0], &dst[0]);
+ 	err |= put_user(src[1], &dst[1]);
+@@ -1003,13 +1004,14 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ {
+ 	while (fa) {
+ 		struct fown_struct *fown;
++		unsigned long flags;
+ 
+ 		if (fa->magic != FASYNC_MAGIC) {
+ 			printk(KERN_ERR "kill_fasync: bad magic number in "
+ 			       "fasync_struct!\n");
+ 			return;
+ 		}
+-		read_lock(&fa->fa_lock);
++		read_lock_irqsave(&fa->fa_lock, flags);
+ 		if (fa->fa_file) {
+ 			fown = &fa->fa_file->f_owner;
+ 			/* Don't send SIGURG to processes which have not set a
+@@ -1018,7 +1020,7 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ 			if (!(sig == SIGURG && fown->signum == 0))
+ 				send_sigio(fown, fa->fa_fd, band);
+ 		}
+-		read_unlock(&fa->fa_lock);
++		read_unlock_irqrestore(&fa->fa_lock, flags);
+ 		fa = rcu_dereference(fa->fa_next);
+ 	}
+ }
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 97f860cfc195f..2bca7edfc9f69 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -198,12 +198,11 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 	struct fuse_file *ff = file->private_data;
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 
+-	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+-		invalidate_inode_pages2(inode->i_mapping);
+ 	if (ff->open_flags & FOPEN_STREAM)
+ 		stream_open(inode, file);
+ 	else if (ff->open_flags & FOPEN_NONSEEKABLE)
+ 		nonseekable_open(inode, file);
++
+ 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
+ 		struct fuse_inode *fi = get_fuse_inode(inode);
+ 
+@@ -211,10 +210,14 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ 		i_size_write(inode, 0);
+ 		spin_unlock(&fi->lock);
++		truncate_pagecache(inode, 0);
+ 		fuse_invalidate_attr(inode);
+ 		if (fc->writeback_cache)
+ 			file_update_time(file);
++	} else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
++		invalidate_inode_pages2(inode->i_mapping);
+ 	}
++
+ 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
+ 		fuse_link_write_file(file);
+ }
+@@ -389,6 +392,7 @@ struct fuse_writepage_args {
+ 	struct list_head queue_entry;
+ 	struct fuse_writepage_args *next;
+ 	struct inode *inode;
++	struct fuse_sync_bucket *bucket;
+ };
+ 
+ static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
+@@ -1608,6 +1612,9 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
+ 	struct fuse_args_pages *ap = &wpa->ia.ap;
+ 	int i;
+ 
++	if (wpa->bucket)
++		fuse_sync_bucket_dec(wpa->bucket);
++
+ 	for (i = 0; i < ap->num_pages; i++)
+ 		__free_page(ap->pages[i]);
+ 
+@@ -1871,6 +1878,20 @@ static struct fuse_writepage_args *fuse_writepage_args_alloc(void)
+ 
+ }
+ 
++static void fuse_writepage_add_to_bucket(struct fuse_conn *fc,
++					 struct fuse_writepage_args *wpa)
++{
++	if (!fc->sync_fs)
++		return;
++
++	rcu_read_lock();
++	/* Prevent resurrection of dead bucket in unlikely race with syncfs */
++	do {
++		wpa->bucket = rcu_dereference(fc->curr_bucket);
++	} while (unlikely(!atomic_inc_not_zero(&wpa->bucket->count)));
++	rcu_read_unlock();
++}
++
+ static int fuse_writepage_locked(struct page *page)
+ {
+ 	struct address_space *mapping = page->mapping;
+@@ -1898,6 +1919,7 @@ static int fuse_writepage_locked(struct page *page)
+ 	if (!wpa->ia.ff)
+ 		goto err_nofile;
+ 
++	fuse_writepage_add_to_bucket(fc, wpa);
+ 	fuse_write_args_fill(&wpa->ia, wpa->ia.ff, page_offset(page), 0);
+ 
+ 	copy_highpage(tmp_page, page);
+@@ -2148,6 +2170,8 @@ static int fuse_writepages_fill(struct page *page,
+ 			__free_page(tmp_page);
+ 			goto out_unlock;
+ 		}
++		fuse_writepage_add_to_bucket(fc, wpa);
++
+ 		data->max_pages = 1;
+ 
+ 		ap = &wpa->ia.ap;
+@@ -2881,7 +2905,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 
+ static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
+ {
+-	int err = filemap_write_and_wait_range(inode->i_mapping, start, end);
++	int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
+ 
+ 	if (!err)
+ 		fuse_sync_writes(inode);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 07829ce78695b..a1cd598860776 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -515,6 +515,13 @@ struct fuse_fs_context {
+ 	void **fudptr;
+ };
+ 
++struct fuse_sync_bucket {
++	/* count is a possible scalability bottleneck */
++	atomic_t count;
++	wait_queue_head_t waitq;
++	struct rcu_head rcu;
++};
++
+ /**
+  * A Fuse connection.
+  *
+@@ -807,6 +814,9 @@ struct fuse_conn {
+ 
+ 	/** List of filesystems using this connection */
+ 	struct list_head mounts;
++
++	/* New writepages go into this bucket */
++	struct fuse_sync_bucket __rcu *curr_bucket;
+ };
+ 
+ /*
+@@ -910,6 +920,15 @@ static inline void fuse_page_descs_length_init(struct fuse_page_desc *descs,
+ 		descs[i].length = PAGE_SIZE - descs[i].offset;
+ }
+ 
++static inline void fuse_sync_bucket_dec(struct fuse_sync_bucket *bucket)
++{
++	/* Need RCU protection to prevent use after free after the decrement */
++	rcu_read_lock();
++	if (atomic_dec_and_test(&bucket->count))
++		wake_up(&bucket->waitq);
++	rcu_read_unlock();
++}
++
+ /** Device operations */
+ extern const struct file_operations fuse_dev_operations;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index b9beb39a4a181..be7378c4f47ca 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -506,6 +506,57 @@ static int fuse_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	return err;
+ }
+ 
++static struct fuse_sync_bucket *fuse_sync_bucket_alloc(void)
++{
++	struct fuse_sync_bucket *bucket;
++
++	bucket = kzalloc(sizeof(*bucket), GFP_KERNEL | __GFP_NOFAIL);
++	if (bucket) {
++		init_waitqueue_head(&bucket->waitq);
++		/* Initial active count */
++		atomic_set(&bucket->count, 1);
++	}
++	return bucket;
++}
++
++static void fuse_sync_fs_writes(struct fuse_conn *fc)
++{
++	struct fuse_sync_bucket *bucket, *new_bucket;
++	int count;
++
++	new_bucket = fuse_sync_bucket_alloc();
++	spin_lock(&fc->lock);
++	bucket = rcu_dereference_protected(fc->curr_bucket, 1);
++	count = atomic_read(&bucket->count);
++	WARN_ON(count < 1);
++	/* No outstanding writes? */
++	if (count == 1) {
++		spin_unlock(&fc->lock);
++		kfree(new_bucket);
++		return;
++	}
++
++	/*
++	 * Completion of new bucket depends on completion of this bucket, so add
++	 * one more count.
++	 */
++	atomic_inc(&new_bucket->count);
++	rcu_assign_pointer(fc->curr_bucket, new_bucket);
++	spin_unlock(&fc->lock);
++	/*
++	 * Drop initial active count.  At this point if all writes in this and
++	 * ancestor buckets complete, the count will go to zero and this task
++	 * will be woken up.
++	 */
++	atomic_dec(&bucket->count);
++
++	wait_event(bucket->waitq, atomic_read(&bucket->count) == 0);
++
++	/* Drop temp count on descendant bucket */
++	fuse_sync_bucket_dec(new_bucket);
++	kfree_rcu(bucket, rcu);
++}
++
+ static int fuse_sync_fs(struct super_block *sb, int wait)
+ {
+ 	struct fuse_mount *fm = get_fuse_mount_super(sb);
+@@ -528,6 +579,8 @@ static int fuse_sync_fs(struct super_block *sb, int wait)
+ 	if (!fc->sync_fs)
+ 		return 0;
+ 
++	fuse_sync_fs_writes(fc);
++
+ 	memset(&inarg, 0, sizeof(inarg));
+ 	args.in_numargs = 1;
+ 	args.in_args[0].size = sizeof(inarg);
+@@ -763,6 +816,7 @@ void fuse_conn_put(struct fuse_conn *fc)
+ {
+ 	if (refcount_dec_and_test(&fc->count)) {
+ 		struct fuse_iqueue *fiq = &fc->iq;
++		struct fuse_sync_bucket *bucket;
+ 
+ 		if (IS_ENABLED(CONFIG_FUSE_DAX))
+ 			fuse_dax_conn_free(fc);
+@@ -770,6 +824,11 @@ void fuse_conn_put(struct fuse_conn *fc)
+ 			fiq->ops->release(fiq);
+ 		put_pid_ns(fc->pid_ns);
+ 		put_user_ns(fc->user_ns);
++		bucket = rcu_dereference_protected(fc->curr_bucket, 1);
++		if (bucket) {
++			WARN_ON(atomic_read(&bucket->count) != 1);
++			kfree(bucket);
++		}
+ 		fc->release(fc);
+ 	}
+ }
+@@ -1418,6 +1477,7 @@ int fuse_fill_super_common(struct super_block *sb, struct fuse_fs_context *ctx)
+ 	if (sb->s_flags & SB_MANDLOCK)
+ 		goto err;
+ 
++	rcu_assign_pointer(fc->curr_bucket, fuse_sync_bucket_alloc());
+ 	fuse_sb_defaults(sb);
+ 
+ 	if (ctx->is_bdev) {
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 5f4504dd0875a..ca76e3b8792ce 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -677,6 +677,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
+ 			error = PTR_ERR(lsi->si_sc_inode);
+ 			fs_err(sdp, "can't find local \"sc\" file#%u: %d\n",
+ 			       jd->jd_jid, error);
++			kfree(lsi);
+ 			goto free_local;
+ 		}
+ 		lsi->si_jid = jd->jd_jid;
+@@ -1088,6 +1089,34 @@ void gfs2_online_uevent(struct gfs2_sbd *sdp)
+ 	kobject_uevent_env(&sdp->sd_kobj, KOBJ_ONLINE, envp);
+ }
+ 
++static int init_threads(struct gfs2_sbd *sdp)
++{
++	struct task_struct *p;
++	int error = 0;
++
++	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start logd thread: %d\n", error);
++		return error;
++	}
++	sdp->sd_logd_process = p;
++
++	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start quotad thread: %d\n", error);
++		goto fail;
++	}
++	sdp->sd_quotad_process = p;
++	return 0;
++
++fail:
++	kthread_stop(sdp->sd_logd_process);
++	sdp->sd_logd_process = NULL;
++	return error;
++}
++
+ /**
+  * gfs2_fill_super - Read in superblock
+  * @sb: The VFS superblock
+@@ -1216,6 +1245,14 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		goto fail_per_node;
+ 	}
+ 
++	if (!sb_rdonly(sb)) {
++		error = init_threads(sdp);
++		if (error) {
++			gfs2_withdraw_delayed(sdp);
++			goto fail_per_node;
++		}
++	}
++
+ 	error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ 	if (error)
+ 		goto fail_per_node;
+@@ -1225,6 +1262,12 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	gfs2_freeze_unlock(&freeze_gh);
+ 	if (error) {
++		if (sdp->sd_quotad_process)
++			kthread_stop(sdp->sd_quotad_process);
++		sdp->sd_quotad_process = NULL;
++		if (sdp->sd_logd_process)
++			kthread_stop(sdp->sd_logd_process);
++		sdp->sd_logd_process = NULL;
+ 		fs_err(sdp, "can't make FS RW: %d\n", error);
+ 		goto fail_per_node;
+ 	}
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 4d4ceb0b69031..2bdbba5ea8d79 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -119,34 +119,6 @@ int gfs2_jdesc_check(struct gfs2_jdesc *jd)
+ 	return 0;
+ }
+ 
+-static int init_threads(struct gfs2_sbd *sdp)
+-{
+-	struct task_struct *p;
+-	int error = 0;
+-
+-	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start logd thread: %d\n", error);
+-		return error;
+-	}
+-	sdp->sd_logd_process = p;
+-
+-	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start quotad thread: %d\n", error);
+-		goto fail;
+-	}
+-	sdp->sd_quotad_process = p;
+-	return 0;
+-
+-fail:
+-	kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
+-	return error;
+-}
+-
+ /**
+  * gfs2_make_fs_rw - Turn a Read-Only FS into a Read-Write one
+  * @sdp: the filesystem
+@@ -161,26 +133,17 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	struct gfs2_log_header_host head;
+ 	int error;
+ 
+-	error = init_threads(sdp);
+-	if (error) {
+-		gfs2_withdraw_delayed(sdp);
+-		return error;
+-	}
+-
+ 	j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+-	if (gfs2_withdrawn(sdp)) {
+-		error = -EIO;
+-		goto fail;
+-	}
++	if (gfs2_withdrawn(sdp))
++		return -EIO;
+ 
+ 	error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+ 	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
++		return error;
+ 
+ 	if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
+ 		gfs2_consist(sdp);
+-		error = -EIO;
+-		goto fail;
++		return -EIO;
+ 	}
+ 
+ 	/*  Initialize some head of the log stuff  */
+@@ -188,20 +151,8 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 
+ 	error = gfs2_quota_init(sdp);
+-	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
+-
+-	set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+-
+-	return 0;
+-
+-fail:
+-	if (sdp->sd_quotad_process)
+-		kthread_stop(sdp->sd_quotad_process);
+-	sdp->sd_quotad_process = NULL;
+-	if (sdp->sd_logd_process)
+-		kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
++	if (!error && !gfs2_withdrawn(sdp))
++		set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 	return error;
+ }
+ 
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 7d2ed8c7dd312..2cc7f75ff24d7 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -51,6 +51,10 @@ struct io_worker {
+ 
+ 	struct completion ref_done;
+ 
++	unsigned long create_state;
++	struct callback_head create_work;
++	int create_index;
++
+ 	struct rcu_head rcu;
+ };
+ 
+@@ -272,24 +276,18 @@ static void io_wqe_inc_running(struct io_worker *worker)
+ 	atomic_inc(&acct->nr_running);
+ }
+ 
+-struct create_worker_data {
+-	struct callback_head work;
+-	struct io_wqe *wqe;
+-	int index;
+-};
+-
+ static void create_worker_cb(struct callback_head *cb)
+ {
+-	struct create_worker_data *cwd;
++	struct io_worker *worker;
+ 	struct io_wq *wq;
+ 	struct io_wqe *wqe;
+ 	struct io_wqe_acct *acct;
+ 	bool do_create = false, first = false;
+ 
+-	cwd = container_of(cb, struct create_worker_data, work);
+-	wqe = cwd->wqe;
++	worker = container_of(cb, struct io_worker, create_work);
++	wqe = worker->wqe;
+ 	wq = wqe->wq;
+-	acct = &wqe->acct[cwd->index];
++	acct = &wqe->acct[worker->create_index];
+ 	raw_spin_lock_irq(&wqe->lock);
+ 	if (acct->nr_workers < acct->max_workers) {
+ 		if (!acct->nr_workers)
+@@ -299,33 +297,42 @@ static void create_worker_cb(struct callback_head *cb)
+ 	}
+ 	raw_spin_unlock_irq(&wqe->lock);
+ 	if (do_create) {
+-		create_io_worker(wq, wqe, cwd->index, first);
++		create_io_worker(wq, wqe, worker->create_index, first);
+ 	} else {
+ 		atomic_dec(&acct->nr_running);
+ 		io_worker_ref_put(wq);
+ 	}
+-	kfree(cwd);
++	clear_bit_unlock(0, &worker->create_state);
++	io_worker_release(worker);
+ }
+ 
+-static void io_queue_worker_create(struct io_wqe *wqe, struct io_wqe_acct *acct)
++static void io_queue_worker_create(struct io_wqe *wqe, struct io_worker *worker,
++				   struct io_wqe_acct *acct)
+ {
+-	struct create_worker_data *cwd;
+ 	struct io_wq *wq = wqe->wq;
+ 
+ 	/* raced with exit, just ignore create call */
+ 	if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
+ 		goto fail;
++	if (!io_worker_get(worker))
++		goto fail;
++	/*
++	 * create_state manages ownership of create_work/index. We should
++	 * only need one entry per worker, as the worker going to sleep
++	 * will trigger the condition, and waking will clear it once it
++	 * runs the task_work.
++	 */
++	if (test_bit(0, &worker->create_state) ||
++	    test_and_set_bit_lock(0, &worker->create_state))
++		goto fail_release;
+ 
+-	cwd = kmalloc(sizeof(*cwd), GFP_ATOMIC);
+-	if (cwd) {
+-		init_task_work(&cwd->work, create_worker_cb);
+-		cwd->wqe = wqe;
+-		cwd->index = acct->index;
+-		if (!task_work_add(wq->task, &cwd->work, TWA_SIGNAL))
+-			return;
+-
+-		kfree(cwd);
+-	}
++	init_task_work(&worker->create_work, create_worker_cb);
++	worker->create_index = acct->index;
++	if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL))
++		return;
++	clear_bit_unlock(0, &worker->create_state);
++fail_release:
++	io_worker_release(worker);
+ fail:
+ 	atomic_dec(&acct->nr_running);
+ 	io_worker_ref_put(wq);
+@@ -343,7 +350,7 @@ static void io_wqe_dec_running(struct io_worker *worker)
+ 	if (atomic_dec_and_test(&acct->nr_running) && io_wqe_run_queue(wqe)) {
+ 		atomic_inc(&acct->nr_running);
+ 		atomic_inc(&wqe->wq->worker_refs);
+-		io_queue_worker_create(wqe, acct);
++		io_queue_worker_create(wqe, worker, acct);
+ 	}
+ }
+ 
+@@ -416,7 +423,28 @@ static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
+ 	spin_unlock(&wq->hash->wait.lock);
+ }
+ 
+-static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
++/*
++ * We can always run the work if the worker is currently the same type as
++ * the work (e.g. both are bound, or both are unbound). If they are not the
++ * same, only allow it if incrementing the worker count would be allowed.
++ */
++static bool io_worker_can_run_work(struct io_worker *worker,
++				   struct io_wq_work *work)
++{
++	struct io_wqe_acct *acct;
++
++	if (!(worker->flags & IO_WORKER_F_BOUND) !=
++	    !(work->flags & IO_WQ_WORK_UNBOUND))
++		return true;
++
++	/* not the same type, check if we'd go over the limit */
++	acct = io_work_get_acct(worker->wqe, work);
++	return acct->nr_workers < acct->max_workers;
++}
++
++static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
++					   struct io_worker *worker,
++					   bool *stalled)
+ 	__must_hold(wqe->lock)
+ {
+ 	struct io_wq_work_node *node, *prev;
+@@ -428,6 +456,9 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+ 
+ 		work = container_of(node, struct io_wq_work, list);
+ 
++		if (!io_worker_can_run_work(worker, work))
++			break;
++
+ 		/* not hashed, can run anytime */
+ 		if (!io_wq_is_hashed(work)) {
+ 			wq_list_del(&wqe->work_list, node, prev);
+@@ -454,6 +485,7 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+ 		raw_spin_unlock(&wqe->lock);
+ 		io_wait_on_hash(wqe, stall_hash);
+ 		raw_spin_lock(&wqe->lock);
++		*stalled = true;
+ 	}
+ 
+ 	return NULL;
+@@ -493,6 +525,7 @@ static void io_worker_handle_work(struct io_worker *worker)
+ 
+ 	do {
+ 		struct io_wq_work *work;
++		bool stalled;
+ get_next:
+ 		/*
+ 		 * If we got some work, mark us as busy. If we didn't, but
+@@ -501,10 +534,11 @@ get_next:
+ 		 * can't make progress, any work completion or insertion will
+ 		 * clear the stalled flag.
+ 		 */
+-		work = io_get_next_work(wqe);
++		stalled = false;
++		work = io_get_next_work(wqe, worker, &stalled);
+ 		if (work)
+ 			__io_worker_busy(wqe, worker, work);
+-		else if (!wq_list_empty(&wqe->work_list))
++		else if (stalled)
+ 			wqe->flags |= IO_WQE_FLAG_STALLED;
+ 
+ 		raw_spin_unlock_irq(&wqe->lock);
+@@ -1004,12 +1038,12 @@ err_wq:
+ 
+ static bool io_task_work_match(struct callback_head *cb, void *data)
+ {
+-	struct create_worker_data *cwd;
++	struct io_worker *worker;
+ 
+ 	if (cb->func != create_worker_cb)
+ 		return false;
+-	cwd = container_of(cb, struct create_worker_data, work);
+-	return cwd->wqe->wq == data;
++	worker = container_of(cb, struct io_worker, create_work);
++	return worker->wqe->wq == data;
+ }
+ 
+ void io_wq_exit_start(struct io_wq *wq)
+@@ -1026,12 +1060,13 @@ static void io_wq_exit_workers(struct io_wq *wq)
+ 		return;
+ 
+ 	while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
+-		struct create_worker_data *cwd;
++		struct io_worker *worker;
+ 
+-		cwd = container_of(cb, struct create_worker_data, work);
+-		atomic_dec(&cwd->wqe->acct[cwd->index].nr_running);
++		worker = container_of(cb, struct io_worker, create_work);
++		atomic_dec(&worker->wqe->acct[worker->create_index].nr_running);
+ 		io_worker_ref_put(wq);
+-		kfree(cwd);
++		clear_bit_unlock(0, &worker->create_state);
++		io_worker_release(worker);
+ 	}
+ 
+ 	rcu_read_lock();
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a2e20a6fbfed8..14bebc62db2d4 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1001,6 +1001,7 @@ static const struct io_op_def io_op_defs[] = {
+ 	},
+ 	[IORING_OP_WRITE] = {
+ 		.needs_file		= 1,
++		.hash_reg_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+ 		.pollout		= 1,
+ 		.plug			= 1,
+@@ -1328,6 +1329,8 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
+ 	struct io_timeout_data *io = req->async_data;
+ 
+ 	if (hrtimer_try_to_cancel(&io->timer) != -1) {
++		if (status)
++			req_set_fail(req);
+ 		atomic_set(&req->ctx->cq_timeouts,
+ 			atomic_read(&req->ctx->cq_timeouts) + 1);
+ 		list_del_init(&req->timeout.list);
+@@ -7722,6 +7725,8 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 		return -EINVAL;
+ 	if (nr_args > IORING_MAX_FIXED_FILES)
+ 		return -EMFILE;
++	if (nr_args > rlimit(RLIMIT_NOFILE))
++		return -EMFILE;
+ 	ret = io_rsrc_node_switch_start(ctx);
+ 	if (ret)
+ 		return ret;
+diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
+index 6250ca6a1f851..4ecf4e1f68ef9 100644
+--- a/fs/iomap/swapfile.c
++++ b/fs/iomap/swapfile.c
+@@ -31,11 +31,16 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ {
+ 	struct iomap *iomap = &isi->iomap;
+ 	unsigned long nr_pages;
++	unsigned long max_pages;
+ 	uint64_t first_ppage;
+ 	uint64_t first_ppage_reported;
+ 	uint64_t next_ppage;
+ 	int error;
+ 
++	if (unlikely(isi->nr_pages >= isi->sis->max))
++		return 0;
++	max_pages = isi->sis->max - isi->nr_pages;
++
+ 	/*
+ 	 * Round the start up and the end down so that the physical
+ 	 * extent aligns to a page boundary.
+@@ -48,6 +53,7 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ 	if (first_ppage >= next_ppage)
+ 		return 0;
+ 	nr_pages = next_ppage - first_ppage;
++	nr_pages = min(nr_pages, max_pages);
+ 
+ 	/*
+ 	 * Calculate how much swap space we're adding; the first page contains
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 21edc423b79fa..678e2c51b855c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -155,7 +155,6 @@ struct iso9660_options{
+ 	unsigned int overriderockperm:1;
+ 	unsigned int uid_set:1;
+ 	unsigned int gid_set:1;
+-	unsigned int utf8:1;
+ 	unsigned char map;
+ 	unsigned char check;
+ 	unsigned int blocksize;
+@@ -356,7 +355,6 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 	popt->gid = GLOBAL_ROOT_GID;
+ 	popt->uid = GLOBAL_ROOT_UID;
+ 	popt->iocharset = NULL;
+-	popt->utf8 = 0;
+ 	popt->overriderockperm = 0;
+ 	popt->session=-1;
+ 	popt->sbsector=-1;
+@@ -389,10 +387,13 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 		case Opt_cruft:
+ 			popt->cruft = 1;
+ 			break;
++#ifdef CONFIG_JOLIET
+ 		case Opt_utf8:
+-			popt->utf8 = 1;
++			kfree(popt->iocharset);
++			popt->iocharset = kstrdup("utf8", GFP_KERNEL);
++			if (!popt->iocharset)
++				return 0;
+ 			break;
+-#ifdef CONFIG_JOLIET
+ 		case Opt_iocharset:
+ 			kfree(popt->iocharset);
+ 			popt->iocharset = match_strdup(&args[0]);
+@@ -495,7 +496,6 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 	if (sbi->s_nocompress)		seq_puts(m, ",nocompress");
+ 	if (sbi->s_overriderockperm)	seq_puts(m, ",overriderockperm");
+ 	if (sbi->s_showassoc)		seq_puts(m, ",showassoc");
+-	if (sbi->s_utf8)		seq_puts(m, ",utf8");
+ 
+ 	if (sbi->s_check)		seq_printf(m, ",check=%c", sbi->s_check);
+ 	if (sbi->s_mapping)		seq_printf(m, ",map=%c", sbi->s_mapping);
+@@ -518,9 +518,10 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 		seq_printf(m, ",fmode=%o", sbi->s_fmode);
+ 
+ #ifdef CONFIG_JOLIET
+-	if (sbi->s_nls_iocharset &&
+-	    strcmp(sbi->s_nls_iocharset->charset, CONFIG_NLS_DEFAULT) != 0)
++	if (sbi->s_nls_iocharset)
+ 		seq_printf(m, ",iocharset=%s", sbi->s_nls_iocharset->charset);
++	else
++		seq_puts(m, ",iocharset=utf8");
+ #endif
+ 	return 0;
+ }
+@@ -863,14 +864,13 @@ root_found:
+ 	sbi->s_nls_iocharset = NULL;
+ 
+ #ifdef CONFIG_JOLIET
+-	if (joliet_level && opt.utf8 == 0) {
++	if (joliet_level) {
+ 		char *p = opt.iocharset ? opt.iocharset : CONFIG_NLS_DEFAULT;
+-		sbi->s_nls_iocharset = load_nls(p);
+-		if (! sbi->s_nls_iocharset) {
+-			/* Fail only if explicit charset specified */
+-			if (opt.iocharset)
++		if (strcmp(p, "utf8") != 0) {
++			sbi->s_nls_iocharset = opt.iocharset ?
++				load_nls(opt.iocharset) : load_nls_default();
++			if (!sbi->s_nls_iocharset)
+ 				goto out_freesbi;
+-			sbi->s_nls_iocharset = load_nls_default();
+ 		}
+ 	}
+ #endif
+@@ -886,7 +886,6 @@ root_found:
+ 	sbi->s_gid = opt.gid;
+ 	sbi->s_uid_set = opt.uid_set;
+ 	sbi->s_gid_set = opt.gid_set;
+-	sbi->s_utf8 = opt.utf8;
+ 	sbi->s_nocompress = opt.nocompress;
+ 	sbi->s_overriderockperm = opt.overriderockperm;
+ 	/*
+diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h
+index 055ec6c586f7f..dcdc191ed1834 100644
+--- a/fs/isofs/isofs.h
++++ b/fs/isofs/isofs.h
+@@ -44,7 +44,6 @@ struct isofs_sb_info {
+ 	unsigned char s_session;
+ 	unsigned int  s_high_sierra:1;
+ 	unsigned int  s_rock:2;
+-	unsigned int  s_utf8:1;
+ 	unsigned int  s_cruft:1; /* Broken disks with high byte of length
+ 				  * containing junk */
+ 	unsigned int  s_nocompress:1;
+diff --git a/fs/isofs/joliet.c b/fs/isofs/joliet.c
+index be8b6a9d0b926..c0f04a1e7f695 100644
+--- a/fs/isofs/joliet.c
++++ b/fs/isofs/joliet.c
+@@ -41,14 +41,12 @@ uni16_to_x8(unsigned char *ascii, __be16 *uni, int len, struct nls_table *nls)
+ int
+ get_joliet_filename(struct iso_directory_record * de, unsigned char *outname, struct inode * inode)
+ {
+-	unsigned char utf8;
+ 	struct nls_table *nls;
+ 	unsigned char len = 0;
+ 
+-	utf8 = ISOFS_SB(inode->i_sb)->s_utf8;
+ 	nls = ISOFS_SB(inode->i_sb)->s_nls_iocharset;
+ 
+-	if (utf8) {
++	if (!nls) {
+ 		len = utf16s_to_utf8s((const wchar_t *) de->name,
+ 				de->name_len[0] >> 1, UTF16_BIG_ENDIAN,
+ 				outname, PAGE_SIZE);
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 61d3cc2283dc8..498cb70c2c0d0 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -634,7 +634,7 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	conflock->caller = "somehost";	/* FIXME */
+ 	conflock->len = strlen(conflock->caller);
+ 	conflock->oh.len = 0;		/* don't return OH info */
+-	conflock->svid = ((struct nlm_lockowner *)lock->fl.fl_owner)->pid;
++	conflock->svid = lock->fl.fl_pid;
+ 	conflock->fl.fl_type = lock->fl.fl_type;
+ 	conflock->fl.fl_start = lock->fl.fl_start;
+ 	conflock->fl.fl_end = lock->fl.fl_end;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fa67ecd5fe63f..2bedc7839ec56 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2687,9 +2687,9 @@ static void force_expire_client(struct nfs4_client *clp)
+ 
+ 	trace_nfsd_clid_admin_expired(&clp->cl_clientid);
+ 
+-	spin_lock(&clp->cl_lock);
++	spin_lock(&nn->client_lock);
+ 	clp->cl_time = 0;
+-	spin_unlock(&clp->cl_lock);
++	spin_unlock(&nn->client_lock);
+ 
+ 	wait_event(expiry_wq, atomic_read(&clp->cl_rpc_users) == 0);
+ 	spin_lock(&nn->client_lock);
+diff --git a/fs/udf/misc.c b/fs/udf/misc.c
+index eab94527340dc..1614d308d0f06 100644
+--- a/fs/udf/misc.c
++++ b/fs/udf/misc.c
+@@ -173,13 +173,22 @@ struct genericFormat *udf_get_extendedattr(struct inode *inode, uint32_t type,
+ 		else
+ 			offset = le32_to_cpu(eahd->appAttrLocation);
+ 
+-		while (offset < iinfo->i_lenEAttr) {
++		while (offset + sizeof(*gaf) < iinfo->i_lenEAttr) {
++			uint32_t attrLength;
++
+ 			gaf = (struct genericFormat *)&ea[offset];
++			attrLength = le32_to_cpu(gaf->attrLength);
++
++			/* Detect undersized elements and buffer overflows */
++			if ((attrLength < sizeof(*gaf)) ||
++			    (attrLength > (iinfo->i_lenEAttr - offset)))
++				break;
++
+ 			if (le32_to_cpu(gaf->attrType) == type &&
+ 					gaf->attrSubtype == subtype)
+ 				return gaf;
+ 			else
+-				offset += le32_to_cpu(gaf->attrLength);
++				offset += attrLength;
+ 		}
+ 	}
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 2f83c1204e20c..b2d7c57d06881 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -108,16 +108,10 @@ struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb)
+ 		return NULL;
+ 	lvid = (struct logicalVolIntegrityDesc *)UDF_SB(sb)->s_lvid_bh->b_data;
+ 	partnum = le32_to_cpu(lvid->numOfPartitions);
+-	if ((sb->s_blocksize - sizeof(struct logicalVolIntegrityDescImpUse) -
+-	     offsetof(struct logicalVolIntegrityDesc, impUse)) /
+-	     (2 * sizeof(uint32_t)) < partnum) {
+-		udf_err(sb, "Logical volume integrity descriptor corrupted "
+-			"(numOfPartitions = %u)!\n", partnum);
+-		return NULL;
+-	}
+ 	/* The offset is to skip freeSpaceTable and sizeTable arrays */
+ 	offset = partnum * 2 * sizeof(uint32_t);
+-	return (struct logicalVolIntegrityDescImpUse *)&(lvid->impUse[offset]);
++	return (struct logicalVolIntegrityDescImpUse *)
++					(((uint8_t *)(lvid + 1)) + offset);
+ }
+ 
+ /* UDF filesystem type */
+@@ -349,10 +343,10 @@ static int udf_show_options(struct seq_file *seq, struct dentry *root)
+ 		seq_printf(seq, ",lastblock=%u", sbi->s_last_block);
+ 	if (sbi->s_anchor != 0)
+ 		seq_printf(seq, ",anchor=%u", sbi->s_anchor);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_UTF8))
+-		seq_puts(seq, ",utf8");
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP) && sbi->s_nls_map)
++	if (sbi->s_nls_map)
+ 		seq_printf(seq, ",iocharset=%s", sbi->s_nls_map->charset);
++	else
++		seq_puts(seq, ",iocharset=utf8");
+ 
+ 	return 0;
+ }
+@@ -558,19 +552,24 @@ static int udf_parse_options(char *options, struct udf_options *uopt,
+ 			/* Ignored (never implemented properly) */
+ 			break;
+ 		case Opt_utf8:
+-			uopt->flags |= (1 << UDF_FLAG_UTF8);
++			if (!remount) {
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
+ 			break;
+ 		case Opt_iocharset:
+ 			if (!remount) {
+-				if (uopt->nls_map)
+-					unload_nls(uopt->nls_map);
+-				/*
+-				 * load_nls() failure is handled later in
+-				 * udf_fill_super() after all options are
+-				 * parsed.
+-				 */
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
++			/* When nls_map is not loaded, UTF-8 is used */
++			if (!remount && strcmp(args[0].from, "utf8") != 0) {
+ 				uopt->nls_map = load_nls(args[0].from);
+-				uopt->flags |= (1 << UDF_FLAG_NLS_MAP);
++				if (!uopt->nls_map) {
++					pr_err("iocharset %s not found\n",
++						args[0].from);
++					return 0;
++				}
+ 			}
+ 			break;
+ 		case Opt_uforget:
+@@ -1542,6 +1541,7 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct logicalVolIntegrityDesc *lvid;
+ 	int indirections = 0;
++	u32 parts, impuselen;
+ 
+ 	while (++indirections <= UDF_MAX_LVID_NESTING) {
+ 		final_bh = NULL;
+@@ -1568,15 +1568,27 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 
+ 		lvid = (struct logicalVolIntegrityDesc *)final_bh->b_data;
+ 		if (lvid->nextIntegrityExt.extLength == 0)
+-			return;
++			goto check;
+ 
+ 		loc = leea_to_cpu(lvid->nextIntegrityExt);
+ 	}
+ 
+ 	udf_warn(sb, "Too many LVID indirections (max %u), ignoring.\n",
+ 		UDF_MAX_LVID_NESTING);
++out_err:
+ 	brelse(sbi->s_lvid_bh);
+ 	sbi->s_lvid_bh = NULL;
++	return;
++check:
++	parts = le32_to_cpu(lvid->numOfPartitions);
++	impuselen = le32_to_cpu(lvid->lengthOfImpUse);
++	if (parts >= sb->s_blocksize || impuselen >= sb->s_blocksize ||
++	    sizeof(struct logicalVolIntegrityDesc) + impuselen +
++	    2 * parts * sizeof(u32) > sb->s_blocksize) {
++		udf_warn(sb, "Corrupted LVID (parts=%u, impuselen=%u), "
++			 "ignoring.\n", parts, impuselen);
++		goto out_err;
++	}
+ }
+ 
+ /*
+@@ -2139,21 +2151,6 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 	if (!udf_parse_options((char *)options, &uopt, false))
+ 		goto parse_options_failure;
+ 
+-	if (uopt.flags & (1 << UDF_FLAG_UTF8) &&
+-	    uopt.flags & (1 << UDF_FLAG_NLS_MAP)) {
+-		udf_err(sb, "utf8 cannot be combined with iocharset\n");
+-		goto parse_options_failure;
+-	}
+-	if ((uopt.flags & (1 << UDF_FLAG_NLS_MAP)) && !uopt.nls_map) {
+-		uopt.nls_map = load_nls_default();
+-		if (!uopt.nls_map)
+-			uopt.flags &= ~(1 << UDF_FLAG_NLS_MAP);
+-		else
+-			udf_debug("Using default NLS map\n");
+-	}
+-	if (!(uopt.flags & (1 << UDF_FLAG_NLS_MAP)))
+-		uopt.flags |= (1 << UDF_FLAG_UTF8);
+-
+ 	fileset.logicalBlockNum = 0xFFFFFFFF;
+ 	fileset.partitionReferenceNum = 0xFFFF;
+ 
+@@ -2308,8 +2305,7 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ error_out:
+ 	iput(sbi->s_vat_inode);
+ parse_options_failure:
+-	if (uopt.nls_map)
+-		unload_nls(uopt.nls_map);
++	unload_nls(uopt.nls_map);
+ 	if (lvid_open)
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
+@@ -2359,8 +2355,7 @@ static void udf_put_super(struct super_block *sb)
+ 	sbi = UDF_SB(sb);
+ 
+ 	iput(sbi->s_vat_inode);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
+-		unload_nls(sbi->s_nls_map);
++	unload_nls(sbi->s_nls_map);
+ 	if (!sb_rdonly(sb))
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 758efe557a199..4fa620543d302 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -20,8 +20,6 @@
+ #define UDF_FLAG_UNDELETE		6
+ #define UDF_FLAG_UNHIDE			7
+ #define UDF_FLAG_VARCONV		8
+-#define UDF_FLAG_NLS_MAP		9
+-#define UDF_FLAG_UTF8			10
+ #define UDF_FLAG_UID_FORGET     11    /* save -1 for uid to disk */
+ #define UDF_FLAG_GID_FORGET     12
+ #define UDF_FLAG_UID_SET	13
+diff --git a/fs/udf/unicode.c b/fs/udf/unicode.c
+index 5fcfa96463ebb..622569007b530 100644
+--- a/fs/udf/unicode.c
++++ b/fs/udf/unicode.c
+@@ -177,7 +177,7 @@ static int udf_name_from_CS0(struct super_block *sb,
+ 		return 0;
+ 	}
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->uni2char;
+ 	else
+ 		conv_f = NULL;
+@@ -285,7 +285,7 @@ static int udf_name_to_CS0(struct super_block *sb,
+ 	if (ocu_max_len <= 0)
+ 		return 0;
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->char2uni;
+ 	else
+ 		conv_f = NULL;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index d3afea47ade67..4b0f8bb0671d1 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1521,6 +1521,22 @@ static inline int queue_limit_discard_alignment(struct queue_limits *lim, sector
+ 	return offset << SECTOR_SHIFT;
+ }
+ 
++/*
++ * Two cases of handling DISCARD merge:
++ * If max_discard_segments > 1, the driver takes every bio
++ * as a range and sends them to the controller together. The ranges
++ * needn't be contiguous.
++ * Otherwise, the bios/requests will be handled the same as
++ * others, which should be contiguous.
++ */
++static inline bool blk_discard_mergable(struct request *req)
++{
++	if (req_op(req) == REQ_OP_DISCARD &&
++	    queue_max_discard_segments(req->q) > 1)
++		return true;
++	return false;
++}
++
+ static inline int bdev_discard_alignment(struct block_device *bdev)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 3f221dbf5f95d..1834752c56175 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -53,6 +53,22 @@ struct em_perf_domain {
+ #ifdef CONFIG_ENERGY_MODEL
+ #define EM_MAX_POWER 0xFFFF
+ 
++/*
++ * Increase resolution of energy estimation calculations for 64-bit
++ * architectures. The extra resolution improves decisions made by EAS for
++ * task placement when two Performance Domains might provide similar energy
++ * estimation values (w/o better resolution the values could be equal).
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs of increasing resolution on 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ */
++#ifdef CONFIG_64BIT
++#define em_scale_power(p) ((p) * 1000)
++#else
++#define em_scale_power(p) (p)
++#endif
++
+ struct em_data_callback {
+ 	/**
+ 	 * active_power() - Provide power at the next performance state of
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index bb5e7b0a42746..77295af724264 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -318,16 +318,12 @@ struct clock_event_device;
+ 
+ extern void hrtimer_interrupt(struct clock_event_device *dev);
+ 
+-extern void clock_was_set_delayed(void);
+-
+ extern unsigned int hrtimer_resolution;
+ 
+ #else
+ 
+ #define hrtimer_resolution	(unsigned int)LOW_RES_NSEC
+ 
+-static inline void clock_was_set_delayed(void) { }
+-
+ #endif
+ 
+ static inline ktime_t
+@@ -351,7 +347,6 @@ hrtimer_expires_remaining_adjusted(const struct hrtimer *timer)
+ 						    timer->base->get_time());
+ }
+ 
+-extern void clock_was_set(void);
+ #ifdef CONFIG_TIMERFD
+ extern void timerfd_clock_was_set(void);
+ #else
+diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
+index ded90b097e6e8..3f02b818625ef 100644
+--- a/include/linux/local_lock_internal.h
++++ b/include/linux/local_lock_internal.h
+@@ -14,29 +14,14 @@ typedef struct {
+ } local_lock_t;
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LL_DEP_MAP_INIT(lockname)			\
++# define LOCAL_LOCK_DEBUG_INIT(lockname)		\
+ 	.dep_map = {					\
+ 		.name = #lockname,			\
+ 		.wait_type_inner = LD_WAIT_CONFIG,	\
+-		.lock_type = LD_LOCK_PERCPU,			\
+-	}
+-#else
+-# define LL_DEP_MAP_INIT(lockname)
+-#endif
+-
+-#define INIT_LOCAL_LOCK(lockname)	{ LL_DEP_MAP_INIT(lockname) }
+-
+-#define __local_lock_init(lock)					\
+-do {								\
+-	static struct lock_class_key __key;			\
+-								\
+-	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
+-	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key, 0, \
+-			      LD_WAIT_CONFIG, LD_WAIT_INV,	\
+-			      LD_LOCK_PERCPU);			\
+-} while (0)
++		.lock_type = LD_LOCK_PERCPU,		\
++	},						\
++	.owner = NULL,
+ 
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ static inline void local_lock_acquire(local_lock_t *l)
+ {
+ 	lock_map_acquire(&l->dep_map);
+@@ -51,11 +36,30 @@ static inline void local_lock_release(local_lock_t *l)
+ 	lock_map_release(&l->dep_map);
+ }
+ 
++static inline void local_lock_debug_init(local_lock_t *l)
++{
++	l->owner = NULL;
++}
+ #else /* CONFIG_DEBUG_LOCK_ALLOC */
++# define LOCAL_LOCK_DEBUG_INIT(lockname)
+ static inline void local_lock_acquire(local_lock_t *l) { }
+ static inline void local_lock_release(local_lock_t *l) { }
++static inline void local_lock_debug_init(local_lock_t *l) { }
+ #endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+ 
++#define INIT_LOCAL_LOCK(lockname)	{ LOCAL_LOCK_DEBUG_INIT(lockname) }
++
++#define __local_lock_init(lock)					\
++do {								\
++	static struct lock_class_key __key;			\
++								\
++	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
++	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key,  \
++			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
++			      LD_LOCK_PERCPU);			\
++	local_lock_debug_init(lock);				\
++} while (0)
++
+ #define __local_lock(lock)					\
+ 	do {							\
+ 		preempt_disable();				\
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index b0009aa3647f4..6bbae0c3bc0b9 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -921,7 +921,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ 	u8         scatter_fcs[0x1];
+ 	u8         enhanced_multi_pkt_send_wqe[0x1];
+ 	u8         tunnel_lso_const_out_ip_id[0x1];
+-	u8         reserved_at_1c[0x2];
++	u8         tunnel_lro_gre[0x1];
++	u8         tunnel_lro_vxlan[0x1];
+ 	u8         tunnel_stateless_gre[0x1];
+ 	u8         tunnel_stateless_vxlan[0x1];
+ 
+diff --git a/include/linux/power/max17042_battery.h b/include/linux/power/max17042_battery.h
+index d55c746ac56e2..e00ad1cfb1f1d 100644
+--- a/include/linux/power/max17042_battery.h
++++ b/include/linux/power/max17042_battery.h
+@@ -69,7 +69,7 @@ enum max17042_register {
+ 	MAX17042_RelaxCFG	= 0x2A,
+ 	MAX17042_MiscCFG	= 0x2B,
+ 	MAX17042_TGAIN		= 0x2C,
+-	MAx17042_TOFF		= 0x2D,
++	MAX17042_TOFF		= 0x2D,
+ 	MAX17042_CGAIN		= 0x2E,
+ 	MAX17042_COFF		= 0x2F,
+ 
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index e91d51ea028bb..65185d1e07ea6 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -523,6 +523,7 @@ void		   svc_wake_up(struct svc_serv *);
+ void		   svc_reserve(struct svc_rqst *rqstp, int space);
+ struct svc_pool *  svc_pool_for_cpu(struct svc_serv *serv, int cpu);
+ char *		   svc_print_addr(struct svc_rqst *, char *, size_t);
++const char *	   svc_proc_name(const struct svc_rqst *rqstp);
+ int		   svc_encode_result_payload(struct svc_rqst *rqstp,
+ 					     unsigned int offset,
+ 					     unsigned int length);
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 5117cb5b56561..81b9686a20799 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,7 +25,9 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
++#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
++#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -124,10 +126,13 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow */
+-	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow / underflow */
++	if (ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
++	if (ts->tv_sec <= KTIME_SEC_MIN)
++		return KTIME_MIN;
++
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index 33f40c1ec379f..048d297623c9a 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -699,8 +699,6 @@ struct dsa_switch_ops {
+ 	int	(*port_bridge_flags)(struct dsa_switch *ds, int port,
+ 				     struct switchdev_brport_flags flags,
+ 				     struct netlink_ext_ack *extack);
+-	int	(*port_set_mrouter)(struct dsa_switch *ds, int port, bool mrouter,
+-				    struct netlink_ext_ack *extack);
+ 
+ 	/*
+ 	 * VLAN support
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index 298a8d10168b6..fb34b66aefa73 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -824,10 +824,9 @@ enum tc_htb_command {
+ struct tc_htb_qopt_offload {
+ 	struct netlink_ext_ack *extack;
+ 	enum tc_htb_command command;
+-	u16 classid;
+ 	u32 parent_classid;
++	u16 classid;
+ 	u16 qid;
+-	u16 moved_qid;
+ 	u64 rate;
+ 	u64 ceil;
+ };
+diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
+index e4e44a2b4aa91..0dd30de00e5b4 100644
+--- a/include/trace/events/io_uring.h
++++ b/include/trace/events/io_uring.h
+@@ -295,14 +295,14 @@ TRACE_EVENT(io_uring_fail_link,
+  */
+ TRACE_EVENT(io_uring_complete,
+ 
+-	TP_PROTO(void *ctx, u64 user_data, long res, unsigned cflags),
++	TP_PROTO(void *ctx, u64 user_data, int res, unsigned cflags),
+ 
+ 	TP_ARGS(ctx, user_data, res, cflags),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
+ 		__field(  u64,		user_data	)
+-		__field(  long,		res		)
++		__field(  int,		res		)
+ 		__field(  unsigned,	cflags		)
+ 	),
+ 
+@@ -313,7 +313,7 @@ TRACE_EVENT(io_uring_complete,
+ 		__entry->cflags		= cflags;
+ 	),
+ 
+-	TP_printk("ring %p, user_data 0x%llx, result %ld, cflags %x",
++	TP_printk("ring %p, user_data 0x%llx, result %d, cflags %x",
+ 			  __entry->ctx, (unsigned long long)__entry->user_data,
+ 			  __entry->res, __entry->cflags)
+ );
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 861f199896c6a..d323f5a049c88 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1642,7 +1642,7 @@ TRACE_EVENT(svc_process,
+ 		__field(u32, vers)
+ 		__field(u32, proc)
+ 		__string(service, name)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt ?
+ 			 rqst->rq_xprt->xpt_remotebuf : "(null)")
+ 	),
+@@ -1652,7 +1652,7 @@ TRACE_EVENT(svc_process,
+ 		__entry->vers = rqst->rq_vers;
+ 		__entry->proc = rqst->rq_proc;
+ 		__assign_str(service, name);
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt ?
+ 			     rqst->rq_xprt->xpt_remotebuf : "(null)");
+ 	),
+@@ -1918,7 +1918,7 @@ TRACE_EVENT(svc_stats_latency,
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(unsigned long, execute)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt->xpt_remotebuf)
+ 	),
+ 
+@@ -1926,7 +1926,7 @@ TRACE_EVENT(svc_stats_latency,
+ 		__entry->xid = be32_to_cpu(rqst->rq_xid);
+ 		__entry->execute = ktime_to_us(ktime_sub(ktime_get(),
+ 							 rqst->rq_stime));
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt->xpt_remotebuf);
+ 	),
+ 
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index bf9252c7381e8..5cdff1631608c 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -3249,7 +3249,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 49f07e2bf23b9..9d94ac6ff50c4 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11414,10 +11414,11 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env)
+  * insni[off, off + cnt).  Adjust corresponding insn_aux_data by copying
+  * [0, off) and [off, end) to new locations, so the patched range stays zero
+  */
+-static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+-				struct bpf_prog *new_prog, u32 off, u32 cnt)
++static void adjust_insn_aux_data(struct bpf_verifier_env *env,
++				 struct bpf_insn_aux_data *new_data,
++				 struct bpf_prog *new_prog, u32 off, u32 cnt)
+ {
+-	struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
++	struct bpf_insn_aux_data *old_data = env->insn_aux_data;
+ 	struct bpf_insn *insn = new_prog->insnsi;
+ 	u32 old_seen = old_data[off].seen;
+ 	u32 prog_len;
+@@ -11430,12 +11431,9 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	old_data[off].zext_dst = insn_has_def32(env, insn + off + cnt - 1);
+ 
+ 	if (cnt == 1)
+-		return 0;
++		return;
+ 	prog_len = new_prog->len;
+-	new_data = vzalloc(array_size(prog_len,
+-				      sizeof(struct bpf_insn_aux_data)));
+-	if (!new_data)
+-		return -ENOMEM;
++
+ 	memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
+ 	memcpy(new_data + off + cnt - 1, old_data + off,
+ 	       sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+@@ -11446,7 +11444,6 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	}
+ 	env->insn_aux_data = new_data;
+ 	vfree(old_data);
+-	return 0;
+ }
+ 
+ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len)
+@@ -11481,6 +11478,14 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 					    const struct bpf_insn *patch, u32 len)
+ {
+ 	struct bpf_prog *new_prog;
++	struct bpf_insn_aux_data *new_data = NULL;
++
++	if (len > 1) {
++		new_data = vzalloc(array_size(env->prog->len + len - 1,
++					      sizeof(struct bpf_insn_aux_data)));
++		if (!new_data)
++			return NULL;
++	}
+ 
+ 	new_prog = bpf_patch_insn_single(env->prog, off, patch, len);
+ 	if (IS_ERR(new_prog)) {
+@@ -11488,10 +11493,10 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 			verbose(env,
+ 				"insn %d cannot be patched due to 16-bit range\n",
+ 				env->insn_aux_data[off].orig_idx);
++		vfree(new_data);
+ 		return NULL;
+ 	}
+-	if (adjust_insn_aux_data(env, new_prog, off, len))
+-		return NULL;
++	adjust_insn_aux_data(env, new_data, new_prog, off, len);
+ 	adjust_subprog_starts(env, off, len);
+ 	adjust_poke_descs(new_prog, off, len);
+ 	return new_prog;
+@@ -12008,6 +12013,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 		if (is_narrower_load && size < target_size) {
+ 			u8 shift = bpf_ctx_narrow_access_offset(
+ 				off, size, size_default) * 8;
++			if (shift && cnt + 1 >= ARRAY_SIZE(insn_buf)) {
++				verbose(env, "bpf verifier narrow ctx load misconfigured\n");
++				return -EINVAL;
++			}
+ 			if (ctx_field_size <= 4) {
+ 				if (shift)
+ 					insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index adb5190c44296..13b5be6df4da2 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1114,7 +1114,7 @@ enum subparts_cmd {
+  * cpus_allowed can be granted or an error code will be returned.
+  *
+  * For partcmd_disable, the cpuset is being transofrmed from a partition
+- * root back to a non-partition root. any CPUs in cpus_allowed that are in
++ * root back to a non-partition root. Any CPUs in cpus_allowed that are in
+  * parent's subparts_cpus will be taken away from that cpumask and put back
+  * into parent's effective_cpus. 0 should always be returned.
+  *
+@@ -1148,6 +1148,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	struct cpuset *parent = parent_cs(cpuset);
+ 	int adding;	/* Moving cpus from effective_cpus to subparts_cpus */
+ 	int deleting;	/* Moving cpus from subparts_cpus to effective_cpus */
++	int new_prs;
+ 	bool part_error = false;	/* Partition error? */
+ 
+ 	percpu_rwsem_assert_held(&cpuset_rwsem);
+@@ -1183,6 +1184,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	 * A cpumask update cannot make parent's effective_cpus become empty.
+ 	 */
+ 	adding = deleting = false;
++	new_prs = cpuset->partition_root_state;
+ 	if (cmd == partcmd_enable) {
+ 		cpumask_copy(tmp->addmask, cpuset->cpus_allowed);
+ 		adding = true;
+@@ -1225,7 +1227,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		/*
+ 		 * partcmd_update w/o newmask:
+ 		 *
+-		 * addmask = cpus_allowed & parent->effectiveb_cpus
++		 * addmask = cpus_allowed & parent->effective_cpus
+ 		 *
+ 		 * Note that parent's subparts_cpus may have been
+ 		 * pre-shrunk in case there is a change in the cpu list.
+@@ -1247,11 +1249,11 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		switch (cpuset->partition_root_state) {
+ 		case PRS_ENABLED:
+ 			if (part_error)
+-				cpuset->partition_root_state = PRS_ERROR;
++				new_prs = PRS_ERROR;
+ 			break;
+ 		case PRS_ERROR:
+ 			if (!part_error)
+-				cpuset->partition_root_state = PRS_ENABLED;
++				new_prs = PRS_ENABLED;
+ 			break;
+ 		}
+ 		/*
+@@ -1260,10 +1262,10 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		part_error = (prev_prs == PRS_ERROR);
+ 	}
+ 
+-	if (!part_error && (cpuset->partition_root_state == PRS_ERROR))
++	if (!part_error && (new_prs == PRS_ERROR))
+ 		return 0;	/* Nothing need to be done */
+ 
+-	if (cpuset->partition_root_state == PRS_ERROR) {
++	if (new_prs == PRS_ERROR) {
+ 		/*
+ 		 * Remove all its cpus from parent's subparts_cpus.
+ 		 */
+@@ -1272,7 +1274,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 				       parent->subparts_cpus);
+ 	}
+ 
+-	if (!adding && !deleting)
++	if (!adding && !deleting && (new_prs == cpuset->partition_root_state))
+ 		return 0;
+ 
+ 	/*
+@@ -1299,6 +1301,9 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	}
+ 
+ 	parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus);
++
++	if (cpuset->partition_root_state != new_prs)
++		cpuset->partition_root_state = new_prs;
+ 	spin_unlock_irq(&callback_lock);
+ 
+ 	return cmd == partcmd_update;
+@@ -1321,6 +1326,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 	struct cpuset *cp;
+ 	struct cgroup_subsys_state *pos_css;
+ 	bool need_rebuild_sched_domains = false;
++	int new_prs;
+ 
+ 	rcu_read_lock();
+ 	cpuset_for_each_descendant_pre(cp, pos_css, cs) {
+@@ -1360,17 +1366,18 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		 * update_tasks_cpumask() again for tasks in the parent
+ 		 * cpuset if the parent's subparts_cpus changes.
+ 		 */
+-		if ((cp != cs) && cp->partition_root_state) {
++		new_prs = cp->partition_root_state;
++		if ((cp != cs) && new_prs) {
+ 			switch (parent->partition_root_state) {
+ 			case PRS_DISABLED:
+ 				/*
+ 				 * If parent is not a partition root or an
+-				 * invalid partition root, clear the state
+-				 * state and the CS_CPU_EXCLUSIVE flag.
++				 * invalid partition root, clear its state
++				 * and its CS_CPU_EXCLUSIVE flag.
+ 				 */
+ 				WARN_ON_ONCE(cp->partition_root_state
+ 					     != PRS_ERROR);
+-				cp->partition_root_state = 0;
++				new_prs = PRS_DISABLED;
+ 
+ 				/*
+ 				 * clear_bit() is an atomic operation and
+@@ -1391,11 +1398,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 				/*
+ 				 * When parent is invalid, it has to be too.
+ 				 */
+-				cp->partition_root_state = PRS_ERROR;
+-				if (cp->nr_subparts_cpus) {
+-					cp->nr_subparts_cpus = 0;
+-					cpumask_clear(cp->subparts_cpus);
+-				}
++				new_prs = PRS_ERROR;
+ 				break;
+ 			}
+ 		}
+@@ -1407,8 +1410,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		spin_lock_irq(&callback_lock);
+ 
+ 		cpumask_copy(cp->effective_cpus, tmp->new_cpus);
+-		if (cp->nr_subparts_cpus &&
+-		   (cp->partition_root_state != PRS_ENABLED)) {
++		if (cp->nr_subparts_cpus && (new_prs != PRS_ENABLED)) {
+ 			cp->nr_subparts_cpus = 0;
+ 			cpumask_clear(cp->subparts_cpus);
+ 		} else if (cp->nr_subparts_cpus) {
+@@ -1435,6 +1437,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 					= cpumask_weight(cp->subparts_cpus);
+ 			}
+ 		}
++
++		if (new_prs != cp->partition_root_state)
++			cp->partition_root_state = new_prs;
++
+ 		spin_unlock_irq(&callback_lock);
+ 
+ 		WARN_ON(!is_in_v2_mode() &&
+@@ -1937,34 +1943,32 @@ out:
+ 
+ /*
+  * update_prstate - update partititon_root_state
+- * cs:	the cpuset to update
+- * val: 0 - disabled, 1 - enabled
++ * cs: the cpuset to update
++ * new_prs: new partition root state
+  *
+  * Call with cpuset_mutex held.
+  */
+-static int update_prstate(struct cpuset *cs, int val)
++static int update_prstate(struct cpuset *cs, int new_prs)
+ {
+-	int err;
++	int err, old_prs = cs->partition_root_state;
+ 	struct cpuset *parent = parent_cs(cs);
+-	struct tmpmasks tmp;
++	struct tmpmasks tmpmask;
+ 
+-	if ((val != 0) && (val != 1))
+-		return -EINVAL;
+-	if (val == cs->partition_root_state)
++	if (old_prs == new_prs)
+ 		return 0;
+ 
+ 	/*
+ 	 * Cannot force a partial or invalid partition root to a full
+ 	 * partition root.
+ 	 */
+-	if (val && cs->partition_root_state)
++	if (new_prs && (old_prs == PRS_ERROR))
+ 		return -EINVAL;
+ 
+-	if (alloc_cpumasks(NULL, &tmp))
++	if (alloc_cpumasks(NULL, &tmpmask))
+ 		return -ENOMEM;
+ 
+ 	err = -EINVAL;
+-	if (!cs->partition_root_state) {
++	if (!old_prs) {
+ 		/*
+ 		 * Turning on partition root requires setting the
+ 		 * CS_CPU_EXCLUSIVE bit implicitly as well and cpus_allowed
+@@ -1978,31 +1982,27 @@ static int update_prstate(struct cpuset *cs, int val)
+ 			goto out;
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_enable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			goto out;
+ 		}
+-		cs->partition_root_state = PRS_ENABLED;
+ 	} else {
+ 		/*
+ 		 * Turning off partition root will clear the
+ 		 * CS_CPU_EXCLUSIVE bit.
+ 		 */
+-		if (cs->partition_root_state == PRS_ERROR) {
+-			cs->partition_root_state = 0;
++		if (old_prs == PRS_ERROR) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			err = 0;
+ 			goto out;
+ 		}
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_disable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err)
+ 			goto out;
+ 
+-		cs->partition_root_state = 0;
+-
+ 		/* Turning off CS_CPU_EXCLUSIVE will not return error */
+ 		update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 	}
+@@ -2015,11 +2015,17 @@ static int update_prstate(struct cpuset *cs, int val)
+ 		update_tasks_cpumask(parent);
+ 
+ 	if (parent->child_ecpus_count)
+-		update_sibling_cpumasks(parent, cs, &tmp);
++		update_sibling_cpumasks(parent, cs, &tmpmask);
+ 
+ 	rebuild_sched_domains_locked();
+ out:
+-	free_cpumasks(NULL, &tmp);
++	if (!err) {
++		spin_lock_irq(&callback_lock);
++		cs->partition_root_state = new_prs;
++		spin_unlock_irq(&callback_lock);
++	}
++
++	free_cpumasks(NULL, &tmpmask);
+ 	return err;
+ }
+ 
+@@ -3060,7 +3066,7 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	parent =  parent_cs(cs);
++	parent = parent_cs(cs);
+ 	compute_effective_cpumask(&new_cpus, cs, parent);
+ 	nodes_and(new_mems, cs->mems_allowed, parent->effective_mems);
+ 
+@@ -3082,8 +3088,10 @@ retry:
+ 	if (is_partition_root(cs) && (cpumask_empty(&new_cpus) ||
+ 	   (parent->partition_root_state == PRS_ERROR))) {
+ 		if (cs->nr_subparts_cpus) {
++			spin_lock_irq(&callback_lock);
+ 			cs->nr_subparts_cpus = 0;
+ 			cpumask_clear(cs->subparts_cpus);
++			spin_unlock_irq(&callback_lock);
+ 			compute_effective_cpumask(&new_cpus, cs, parent);
+ 		}
+ 
+@@ -3097,7 +3105,9 @@ retry:
+ 		     cpumask_empty(&new_cpus)) {
+ 			update_parent_subparts_cpumask(cs, partcmd_disable,
+ 						       NULL, tmp);
++			spin_lock_irq(&callback_lock);
+ 			cs->partition_root_state = PRS_ERROR;
++			spin_unlock_irq(&callback_lock);
+ 		}
+ 		cpuset_force_rebuild();
+ 	}
+@@ -3168,6 +3178,13 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
+ 	cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus);
+ 	mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
+ 
++	/*
++	 * In the rare case that hotplug removes all the cpus in subparts_cpus,
++	 * we assume that cpus are updated.
++	 */
++	if (!cpus_updated && top_cpuset.nr_subparts_cpus)
++		cpus_updated = true;
++
+ 	/* synchronize cpus_allowed to cpu_active_mask */
+ 	if (cpus_updated) {
+ 		spin_lock_irq(&callback_lock);
+diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
+index f7e1d0eccdbc6..246efc74e3f34 100644
+--- a/kernel/cpu_pm.c
++++ b/kernel/cpu_pm.c
+@@ -13,19 +13,32 @@
+ #include <linux/spinlock.h>
+ #include <linux/syscore_ops.h>
+ 
+-static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
++/*
++ * atomic_notifiers use a spinlock_t, which can block under PREEMPT_RT.
++ * Notifications for cpu_pm will be issued by the idle task itself, which can
++ * never block, IOW it requires using a raw_spinlock_t.
++ */
++static struct {
++	struct raw_notifier_head chain;
++	raw_spinlock_t lock;
++} cpu_pm_notifier = {
++	.chain = RAW_NOTIFIER_INIT(cpu_pm_notifier.chain),
++	.lock  = __RAW_SPIN_LOCK_UNLOCKED(cpu_pm_notifier.lock),
++};
+ 
+ static int cpu_pm_notify(enum cpu_pm_event event)
+ {
+ 	int ret;
+ 
+ 	/*
+-	 * atomic_notifier_call_chain has a RCU read critical section, which
+-	 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+-	 * RCU know this.
++	 * This introduces an RCU read critical section, which could be
++	 * dysfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
++	 * this.
+ 	 */
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL);
++	rcu_read_lock();
++	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
++	rcu_read_unlock();
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -33,10 +46,13 @@ static int cpu_pm_notify(enum cpu_pm_event event)
+ 
+ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event event_down)
+ {
++	unsigned long flags;
+ 	int ret;
+ 
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain_robust(&cpu_pm_notifier_chain, event_up, event_down, NULL);
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -49,12 +65,17 @@ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event ev
+  * Add a driver to a list of drivers that are notified about
+  * CPU and CPU cluster low power entry and exit.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_register.
++ * This function has the same return conditions as raw_notifier_chain_register.
+  */
+ int cpu_pm_register_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_register(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+ 
+@@ -64,12 +85,17 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+  *
+  * Remove a driver from the CPU PM notifier list.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_unregister.
++ * This function has the same return conditions as raw_notifier_chain_unregister.
+  */
+ int cpu_pm_unregister_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_unregister(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
+ 
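
The cpu_pm hunk above replaces the atomic notifier chain, whose internal spinlock_t becomes sleepable under PREEMPT_RT, with a raw notifier chain guarded by a raw_spinlock_t; the notify path stays wait-free by walking the chain under rcu_read_lock(). As a rough userspace sketch of the same shape (writers serialized by a spinlock, readers just walking the list), with all names invented for illustration rather than taken from the kernel API:

    #include <pthread.h>
    #include <stdio.h>

    struct notifier {
            int (*fn)(int event);
            struct notifier *next;
    };

    /* Chain head plus the lock that serializes writers (register/unregister). */
    static struct notifier *chain;
    static pthread_spinlock_t chain_lock;

    static void chain_register(struct notifier *nb)
    {
            pthread_spin_lock(&chain_lock);
            nb->next = chain;
            chain = nb;             /* the kernel publishes with rcu_assign_pointer() */
            pthread_spin_unlock(&chain_lock);
    }

    static int chain_notify(int event)
    {
            int ret = 0;

            /* the kernel walks under rcu_read_lock(); this model just walks */
            for (struct notifier *nb = chain; nb; nb = nb->next)
                    ret |= nb->fn(event);
            return ret;
    }

    static int say_hello(int event) { printf("event %d\n", event); return 0; }

    int main(void)
    {
            struct notifier nb = { .fn = say_hello };

            pthread_spin_init(&chain_lock, PTHREAD_PROCESS_PRIVATE);
            chain_register(&nb);
            return chain_notify(1);
    }
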
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index 4d2a702d7aa95..c43e2ac2f8def 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -799,12 +799,14 @@ static int __init irq_timings_test_irqs(struct timings_intervals *ti)
+ 
+ 		__irq_timings_store(irq, irqs, ti->intervals[i]);
+ 		if (irqs->circ_timings[i & IRQ_TIMINGS_MASK] != index) {
++			ret = -EBADSLT;
+ 			pr_err("Failed to store in the circular buffer\n");
+ 			goto out;
+ 		}
+ 	}
+ 
+ 	if (irqs->count != ti->count) {
++		ret = -ERANGE;
+ 		pr_err("Count differs\n");
+ 		goto out;
+ 	}
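
The irq/timings hunk fixes a stale-return bug: the self-test jumped to the out label on failure without updating ret, so a failed check could still report success. A tiny standalone illustration of the pattern; the error value is arbitrary:

    #include <errno.h>
    #include <stdio.h>

    /* Illustrative only: a stale "ret" makes the failure path report success. */
    static int check(int stored, int expected)
    {
            int ret = 0;

            if (stored != expected) {
                    ret = -ERANGE;  /* the fix: set the error before bailing out */
                    fprintf(stderr, "count differs\n");
                    goto out;
            }
    out:
            return ret;
    }

    int main(void)
    {
            return check(1, 2) == -ERANGE ? 0 : 1;
    }
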
+diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
+index d2df5e68b5039..fb30e1436dfb3 100644
+--- a/kernel/locking/mutex.c
++++ b/kernel/locking/mutex.c
+@@ -928,7 +928,6 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
+ 		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+ {
+ 	struct mutex_waiter waiter;
+-	bool first = false;
+ 	struct ww_mutex *ww;
+ 	int ret;
+ 
+@@ -1007,6 +1006,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
+ 
+ 	set_current_state(state);
+ 	for (;;) {
++		bool first;
++
+ 		/*
+ 		 * Once we hold wait_lock, we're serialized against
+ 		 * mutex_unlock() handing the lock off to us, do a trylock
+@@ -1035,15 +1036,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
+ 		spin_unlock(&lock->wait_lock);
+ 		schedule_preempt_disabled();
+ 
+-		/*
+-		 * ww_mutex needs to always recheck its position since its waiter
+-		 * list is not FIFO ordered.
+-		 */
+-		if (ww_ctx || !first) {
+-			first = __mutex_waiter_is_first(lock, &waiter);
+-			if (first)
+-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+-		}
++		first = __mutex_waiter_is_first(lock, &waiter);
++		if (first)
++			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+ 
+ 		set_current_state(state);
+ 		/*
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 0f4530b3a8cd9..a332ccd829e24 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -170,7 +170,9 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 	/* Compute the cost of each performance state. */
+ 	fmax = (u64) table[nr_states - 1].frequency;
+ 	for (i = 0; i < nr_states; i++) {
+-		table[i].cost = div64_u64(fmax * table[i].power,
++		unsigned long power_res = em_scale_power(table[i].power);
++
++		table[i].cost = div64_u64(fmax * power_res,
+ 					  table[i].frequency);
+ 	}
+ 
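
The energy-model hunk runs each power value through em_scale_power() before computing cost = fmax * power / freq, accepting a larger intermediate in exchange for less integer truncation. A standalone demonstration; the factor of 1000 is an assumption here, since the kernel's scale is configuration-dependent:

    #include <inttypes.h>
    #include <stdio.h>

    /* Assumed scale factor, standing in for em_scale_power(). */
    #define SCALE 1000ULL

    int main(void)
    {
            uint64_t fmax = 2600000, freq = 1800000, power = 7;

            /* Unscaled: 7 * 2600000 / 1800000 truncates to 10. */
            printf("unscaled cost: %" PRIu64 "\n", fmax * power / freq);
            /* Scaled: keeps three extra decimal digits (10111). */
            printf("scaled cost:   %" PRIu64 "\n", fmax * (power * SCALE) / freq);
            return 0;
    }
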
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index 6c76988cc019f..0e7a60706d1c0 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -7,6 +7,8 @@
+  * Author: Paul E. McKenney <paulmck@linux.ibm.com>
+  */
+ 
++#include <linux/kvm_para.h>
++
+ //////////////////////////////////////////////////////////////////////////////
+ //
+ // Controlling CPU stall warnings, including delay calculation.
+@@ -267,8 +269,10 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 	struct task_struct *ts[8];
+ 
+ 	lockdep_assert_irqs_disabled();
+-	if (!rcu_preempt_blocked_readers_cgp(rnp))
++	if (!rcu_preempt_blocked_readers_cgp(rnp)) {
++		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 		return 0;
++	}
+ 	pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
+ 	       rnp->level, rnp->grplo, rnp->grphi);
+ 	t = list_entry(rnp->gp_tasks->prev,
+@@ -280,8 +284,8 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 			break;
+ 	}
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+-	for (i--; i; i--) {
+-		t = ts[i];
++	while (i) {
++		t = ts[--i];
+ 		if (!try_invoke_on_locked_down_task(t, check_slow_task, &rscr))
+ 			pr_cont(" P%d", t->pid);
+ 		else
+@@ -696,6 +700,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 	    (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
+ 	    cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* We haven't checked in, so go dump stack. */
+ 		print_cpu_stall(gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
+@@ -705,6 +717,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 		   ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
+ 		   cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* They had a few time units to dump stack, so complain. */
+ 		print_other_cpu_stall(gs2, gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
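
Both stall-check hunks consult kvm_check_and_clear_guest_paused() before complaining, so a guest that was frozen by its host is not misreported as an RCU stall. A minimal model of that guard, with an invented flag standing in for the KVM helper:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for kvm_check_and_clear_guest_paused(): returns true (and
     * clears the flag) if the host paused us since the last check. */
    static bool guest_paused;

    static bool check_and_clear_paused(void)
    {
            bool was = guest_paused;

            guest_paused = false;
            return was;
    }

    static void check_stall(long quiet_ticks, long limit)
    {
            if (quiet_ticks < limit)
                    return;
            if (check_and_clear_paused())
                    return;         /* the host stopped us; not a real stall */
            fprintf(stderr, "stall: %ld ticks without progress\n", quiet_ticks);
    }

    int main(void)
    {
            guest_paused = true;
            check_stall(100, 10);   /* suppressed: the pause explains the delay */
            check_stall(100, 10);   /* reported */
            return 0;
    }
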
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index f3b27c6c51535..a2403432f3abb 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1633,6 +1633,23 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ 		uclamp_rq_dec_id(rq, p, clamp_id);
+ }
+ 
++static inline void uclamp_rq_reinc_id(struct rq *rq, struct task_struct *p,
++				      enum uclamp_id clamp_id)
++{
++	if (!p->uclamp[clamp_id].active)
++		return;
++
++	uclamp_rq_dec_id(rq, p, clamp_id);
++	uclamp_rq_inc_id(rq, p, clamp_id);
++
++	/*
++	 * Make sure to clear the idle flag if we've transiently reached 0
++	 * active tasks on rq.
++	 */
++	if (clamp_id == UCLAMP_MAX && (rq->uclamp_flags & UCLAMP_FLAG_IDLE))
++		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
++}
++
+ static inline void
+ uclamp_update_active(struct task_struct *p)
+ {
+@@ -1656,12 +1673,8 @@ uclamp_update_active(struct task_struct *p)
+ 	 * affecting a valid clamp bucket, the next time it's enqueued,
+ 	 * it will already see the updated clamp bucket value.
+ 	 */
+-	for_each_clamp_id(clamp_id) {
+-		if (p->uclamp[clamp_id].active) {
+-			uclamp_rq_dec_id(rq, p, clamp_id);
+-			uclamp_rq_inc_id(rq, p, clamp_id);
+-		}
+-	}
++	for_each_clamp_id(clamp_id)
++		uclamp_rq_reinc_id(rq, p, clamp_id);
+ 
+ 	task_rq_unlock(rq, p, &rf);
+ }
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index aaacd6cfd42f0..e94314633b39d 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1733,6 +1733,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ 	 */
+ 	raw_spin_rq_lock(rq);
+ 	if (p->dl.dl_non_contending) {
++		update_rq_clock(rq);
+ 		sub_running_bw(&p->dl, &rq->dl);
+ 		p->dl.dl_non_contending = 0;
+ 		/*
+@@ -2741,7 +2742,7 @@ void __setparam_dl(struct task_struct *p, const struct sched_attr *attr)
+ 	dl_se->dl_runtime = attr->sched_runtime;
+ 	dl_se->dl_deadline = attr->sched_deadline;
+ 	dl_se->dl_period = attr->sched_period ?: dl_se->dl_deadline;
+-	dl_se->flags = attr->sched_flags;
++	dl_se->flags = attr->sched_flags & SCHED_DL_FLAGS;
+ 	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+ 	dl_se->dl_density = to_ratio(dl_se->dl_deadline, dl_se->dl_runtime);
+ }
+@@ -2754,7 +2755,8 @@ void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
+ 	attr->sched_runtime = dl_se->dl_runtime;
+ 	attr->sched_deadline = dl_se->dl_deadline;
+ 	attr->sched_period = dl_se->dl_period;
+-	attr->sched_flags = dl_se->flags;
++	attr->sched_flags &= ~SCHED_DL_FLAGS;
++	attr->sched_flags |= dl_se->flags;
+ }
+ 
+ /*
+@@ -2851,7 +2853,7 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+ 	if (dl_se->dl_runtime != attr->sched_runtime ||
+ 	    dl_se->dl_deadline != attr->sched_deadline ||
+ 	    dl_se->dl_period != attr->sched_period ||
+-	    dl_se->flags != attr->sched_flags)
++	    dl_se->flags != (attr->sched_flags & SCHED_DL_FLAGS))
+ 		return true;
+ 
+ 	return false;
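
The deadline hunks mask attr->sched_flags down to the bits the deadline class owns (SCHED_DL_FLAGS, defined in the sched.h hunk below) when storing, and merge rather than overwrite when reporting, so unrelated flag bits survive a set/get round trip. A self-contained sketch with invented bit values:

    #include <assert.h>
    #include <stdint.h>

    /* Invented bit values, for illustration only. */
    #define FLAG_RECLAIM    0x01u
    #define FLAG_OVERRUN    0x02u
    #define FLAG_INTERNAL   0x80u   /* kernel-internal, never user-visible */
    #define DL_FLAGS        (FLAG_RECLAIM | FLAG_OVERRUN | FLAG_INTERNAL)

    struct dl_entity { uint32_t flags; };

    static void setparam(struct dl_entity *dl, uint32_t user_flags)
    {
            dl->flags = user_flags & DL_FLAGS;      /* keep only the bits we own */
    }

    static void getparam(const struct dl_entity *dl, uint32_t *user_flags)
    {
            *user_flags &= ~DL_FLAGS;               /* preserve foreign bits */
            *user_flags |= dl->flags;
    }

    int main(void)
    {
            struct dl_entity dl;
            uint32_t flags = FLAG_RECLAIM | 0x100u; /* 0x100: another class's bit */

            setparam(&dl, flags);
            getparam(&dl, &flags);
            assert(flags == (FLAG_RECLAIM | 0x100u));
            return 0;
    }
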
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 0c5ec2776ddf0..7e08e3d947c20 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -388,6 +388,13 @@ void update_sched_domain_debugfs(void)
+ {
+ 	int cpu, i;
+ 
++	/*
++	 * This can unfortunately be invoked before sched_debug_init() creates
++	 * the debug directory. Don't touch sd_sysctl_cpus until then.
++	 */
++	if (!debugfs_sched)
++		return;
++
+ 	if (!cpumask_available(sd_sysctl_cpus)) {
+ 		if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
+ 			return;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 44c452072a1b0..30a6984a58f71 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1486,7 +1486,7 @@ static inline bool is_core_idle(int cpu)
+ 		if (cpu == sibling)
+ 			continue;
+ 
+-		if (!idle_cpu(cpu))
++		if (!idle_cpu(sibling))
+ 			return false;
+ 	}
+ #endif
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index ddefb0419d7ae..d53d197708666 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -227,6 +227,8 @@ static inline void update_avg(u64 *avg, u64 sample)
+  */
+ #define SCHED_FLAG_SUGOV	0x10000000
+ 
++#define SCHED_DL_FLAGS (SCHED_FLAG_RECLAIM | SCHED_FLAG_DL_OVERRUN | SCHED_FLAG_SUGOV)
++
+ static inline bool dl_entity_is_special(struct sched_dl_entity *dl_se)
+ {
+ #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index b77ad49dc14f6..4e8698e62f075 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1482,6 +1482,8 @@ int				sched_max_numa_distance;
+ static int			*sched_domains_numa_distance;
+ static struct cpumask		***sched_domains_numa_masks;
+ int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
++
++static unsigned long __read_mostly *sched_numa_onlined_nodes;
+ #endif
+ 
+ /*
+@@ -1833,6 +1835,16 @@ void sched_init_numa(void)
+ 			sched_domains_numa_masks[i][j] = mask;
+ 
+ 			for_each_node(k) {
++				/*
++				 * Distance information can be unreliable for
++				 * offline nodes; defer building the node
++				 * masks until the node's bringup.
++				 * This relies on all unique distance values
++				 * still being visible at init time.
++				 */
++				if (!node_online(j))
++					continue;
++
+ 				if (sched_debug() && (node_distance(j, k) != node_distance(k, j)))
+ 					sched_numa_warn("Node-distance not symmetric");
+ 
+@@ -1886,6 +1898,53 @@ void sched_init_numa(void)
+ 	sched_max_numa_distance = sched_domains_numa_distance[nr_levels - 1];
+ 
+ 	init_numa_topology_type();
++
++	sched_numa_onlined_nodes = bitmap_alloc(nr_node_ids, GFP_KERNEL);
++	if (!sched_numa_onlined_nodes)
++		return;
++
++	bitmap_zero(sched_numa_onlined_nodes, nr_node_ids);
++	for_each_online_node(i)
++		bitmap_set(sched_numa_onlined_nodes, i, 1);
++}
++
++static void __sched_domains_numa_masks_set(unsigned int node)
++{
++	int i, j;
++
++	/*
++	 * NUMA masks are not built for offline nodes in sched_init_numa().
++	 * Thus, when a CPU of a never-onlined-before node gets plugged in,
++	 * adding that new CPU to the right NUMA masks is not sufficient: the
++	 * masks of that CPU's node must also be updated.
++	 */
++	if (test_bit(node, sched_numa_onlined_nodes))
++		return;
++
++	bitmap_set(sched_numa_onlined_nodes, node, 1);
++
++	for (i = 0; i < sched_domains_numa_levels; i++) {
++		for (j = 0; j < nr_node_ids; j++) {
++			if (!node_online(j) || node == j)
++				continue;
++
++			if (node_distance(j, node) > sched_domains_numa_distance[i])
++				continue;
++
++			/* Add remote nodes in our masks */
++			cpumask_or(sched_domains_numa_masks[i][node],
++				   sched_domains_numa_masks[i][node],
++				   sched_domains_numa_masks[0][j]);
++		}
++	}
++
++	/*
++	 * A new node has been brought up, potentially changing the topology
++	 * classification.
++	 *
++	 * Note that this is racy vs any use of sched_numa_topology_type :/
++	 */
++	init_numa_topology_type();
+ }
+ 
+ void sched_domains_numa_masks_set(unsigned int cpu)
+@@ -1893,8 +1952,14 @@ void sched_domains_numa_masks_set(unsigned int cpu)
+ 	int node = cpu_to_node(cpu);
+ 	int i, j;
+ 
++	__sched_domains_numa_masks_set(node);
++
+ 	for (i = 0; i < sched_domains_numa_levels; i++) {
+ 		for (j = 0; j < nr_node_ids; j++) {
++			if (!node_online(j))
++				continue;
++
++			/* Set ourselves in the remote node's masks */
+ 			if (node_distance(j, node) <= sched_domains_numa_distance[i])
+ 				cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
+ 		}
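
The topology hunks skip offline nodes in sched_init_numa() and record onlined nodes in a bitmap, so the deferred mask building runs exactly once per node at bringup. The test-then-set idiom, reduced to a single-word bitmap for illustration:

    #include <stdio.h>

    static unsigned long long onlined;      /* one bit per node, like the kernel bitmap */

    static void node_bringup(unsigned int node)
    {
            if (onlined & (1ULL << node))
                    return;                 /* masks already built for this node */
            onlined |= 1ULL << node;
            printf("building deferred NUMA masks for node %u\n", node);
    }

    int main(void)
    {
            node_bringup(3);        /* does the one-time work */
            node_bringup(3);        /* no-op on a repeat online */
            return 0;
    }
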
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4a66725b1d4ac..5af7584734888 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -758,22 +758,6 @@ static void hrtimer_switch_to_hres(void)
+ 	retrigger_next_event(NULL);
+ }
+ 
+-static void clock_was_set_work(struct work_struct *work)
+-{
+-	clock_was_set();
+-}
+-
+-static DECLARE_WORK(hrtimer_work, clock_was_set_work);
+-
+-/*
+- * Called from timekeeping and resume code to reprogram the hrtimer
+- * interrupt device on all cpus.
+- */
+-void clock_was_set_delayed(void)
+-{
+-	schedule_work(&hrtimer_work);
+-}
+-
+ #else
+ 
+ static inline int hrtimer_is_hres_enabled(void) { return 0; }
+@@ -891,6 +875,22 @@ void clock_was_set(void)
+ 	timerfd_clock_was_set();
+ }
+ 
++static void clock_was_set_work(struct work_struct *work)
++{
++	clock_was_set();
++}
++
++static DECLARE_WORK(hrtimer_work, clock_was_set_work);
++
++/*
++ * Called from timekeeping and resume code to reprogram the hrtimer
++ * interrupt device on all cpus and to notify timerfd.
++ */
++void clock_was_set_delayed(void)
++{
++	schedule_work(&hrtimer_work);
++}
++
+ /*
+  * During resume we might have to reprogram the high resolution timer
+  * interrupt on all online CPUs.  However, all other CPUs will be
+@@ -1030,12 +1030,13 @@ static void __remove_hrtimer(struct hrtimer *timer,
+  * remove hrtimer, called with base lock held
+  */
+ static inline int
+-remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool restart)
++remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
++	       bool restart, bool keep_local)
+ {
+ 	u8 state = timer->state;
+ 
+ 	if (state & HRTIMER_STATE_ENQUEUED) {
+-		int reprogram;
++		bool reprogram;
+ 
+ 		/*
+ 		 * Remove the timer and force reprogramming when high
+@@ -1048,8 +1049,16 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest
+ 		debug_deactivate(timer);
+ 		reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
+ 
++		/*
++		 * If the timer is not restarted then reprogramming is
++		 * required if the timer is local. If it is local and about
++		 * to be restarted, avoid programming it twice (on removal
++		 * and a moment later when it's requeued).
++		 */
+ 		if (!restart)
+ 			state = HRTIMER_STATE_INACTIVE;
++		else
++			reprogram &= !keep_local;
+ 
+ 		__remove_hrtimer(timer, base, state, reprogram);
+ 		return 1;
+@@ -1103,9 +1112,31 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 				    struct hrtimer_clock_base *base)
+ {
+ 	struct hrtimer_clock_base *new_base;
++	bool force_local, first;
++
++	/*
++	 * If the timer is on the local cpu base and is the first expiring
++	 * timer then this might end up reprogramming the hardware twice
++	 * (on removal and on enqueue). To avoid that, prevent the
++	 * reprogram on removal: keep the timer local to the current CPU
++	 * and enforce reprogramming after it is queued no matter whether
++	 * it is the new first expiring timer again or not.
++	 */
++	force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
++	force_local &= base->cpu_base->next_timer == timer;
+ 
+-	/* Remove an active timer from the queue: */
+-	remove_hrtimer(timer, base, true);
++	/*
++	 * Remove an active timer from the queue. In case it is not queued
++	 * on the current CPU, make sure that remove_hrtimer() updates the
++	 * remote data correctly.
++	 *
++	 * If it's on the current CPU and the first expiring timer, then
++	 * skip reprogramming, keep the timer local and enforce
++	 * reprogramming later if it was the first expiring timer.  This
++	 * avoids programming the underlying clock event twice (once at
++	 * removal and once after enqueue).
++	 */
++	remove_hrtimer(timer, base, true, force_local);
+ 
+ 	if (mode & HRTIMER_MODE_REL)
+ 		tim = ktime_add_safe(tim, base->get_time());
+@@ -1115,9 +1146,24 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 	hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ 
+ 	/* Switch the timer base, if necessary: */
+-	new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
++	if (!force_local) {
++		new_base = switch_hrtimer_base(timer, base,
++					       mode & HRTIMER_MODE_PINNED);
++	} else {
++		new_base = base;
++	}
++
++	first = enqueue_hrtimer(timer, new_base, mode);
++	if (!force_local)
++		return first;
+ 
+-	return enqueue_hrtimer(timer, new_base, mode);
++	/*
++	 * Timer was forced to stay on the current CPU to avoid
++	 * reprogramming on removal and enqueue. Force reprogram the
++	 * hardware by evaluating the new first expiring timer.
++	 */
++	hrtimer_force_reprogram(new_base->cpu_base, 1);
++	return 0;
+ }
+ 
+ /**
+@@ -1183,7 +1229,7 @@ int hrtimer_try_to_cancel(struct hrtimer *timer)
+ 	base = lock_hrtimer_base(timer, &flags);
+ 
+ 	if (!hrtimer_callback_running(timer))
+-		ret = remove_hrtimer(timer, base, false);
++		ret = remove_hrtimer(timer, base, false, false);
+ 
+ 	unlock_hrtimer_base(timer, &flags);
+ 
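
The hrtimer rework threads a keep_local flag into remove_hrtimer() so that restarting the first-expiring timer on the local CPU programs the clock-event hardware once, after the requeue, rather than twice. A counting model of the saving; this is purely illustrative, not the kernel logic:

    #include <stdbool.h>
    #include <stdio.h>

    static int hw_programs;         /* clock-event writes */

    static void reprogram(void) { hw_programs++; }

    /* Restart the first-expiring timer on the local CPU. */
    static void restart_first_timer(bool keep_local)
    {
            if (!keep_local)
                    reprogram();    /* removal made the next timer "first" */
            /* ...the timer is requeued here... */
            reprogram();            /* it is first again: program the hardware */
    }

    int main(void)
    {
            restart_first_timer(false);
            printf("old path: %d hardware writes\n", hw_programs);
            hw_programs = 0;
            restart_first_timer(true);
            printf("new path: %d hardware write(s)\n", hw_programs);
            return 0;
    }
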
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 517be7fd175ef..a002685f688d6 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,8 +1346,6 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
+-		if (!*newval)
+-			return;
+ 		*newval += now;
+ 	}
+ 
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index 6a742a29e545f..cd610faa25235 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -165,3 +165,6 @@ DECLARE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases);
+ 
+ extern u64 get_next_timer_interrupt(unsigned long basej, u64 basem);
+ void timer_clear_idle(void);
++
++void clock_was_set(void);
++void clock_was_set_delayed(void);
+diff --git a/lib/mpi/mpiutil.c b/lib/mpi/mpiutil.c
+index 9a75ca3f7edf9..bc81419f400c5 100644
+--- a/lib/mpi/mpiutil.c
++++ b/lib/mpi/mpiutil.c
+@@ -148,7 +148,7 @@ int mpi_resize(MPI a, unsigned nlimbs)
+ 		return 0;	/* no need to do it */
+ 
+ 	if (a->d) {
+-		p = kmalloc_array(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
++		p = kcalloc(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
+ 		if (!p)
+ 			return -ENOMEM;
+ 		memcpy(p, a->d, a->alloced * sizeof(mpi_limb_t));
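
Switching kmalloc_array() to kcalloc() zero-fills the enlarged limb array, so the limbs beyond the copied prefix never expose stale heap contents. The userspace analogue uses calloc():

    #include <stdlib.h>
    #include <string.h>

    /* Grow an array of n "limbs" to nlimbs, zero-filling the new tail.
     * calloc() plays the role of kcalloc() in the patched kernel code. */
    static unsigned long *resize_limbs(unsigned long *old, size_t n, size_t nlimbs)
    {
            unsigned long *p = calloc(nlimbs, sizeof(*p));

            if (!p)
                    return NULL;
            if (old)
                    memcpy(p, old, n * sizeof(*p)); /* limbs past n stay zero */
            free(old);
            return p;
    }

    int main(void)
    {
            unsigned long *a = calloc(2, sizeof(*a));
            int ok;

            a = resize_limbs(a, 2, 8);
            ok = a != NULL;
            free(a);
            return ok ? 0 : 1;
    }
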
+diff --git a/lib/test_scanf.c b/lib/test_scanf.c
+index 84fe09eaf55e7..abae88848972f 100644
+--- a/lib/test_scanf.c
++++ b/lib/test_scanf.c
+@@ -271,7 +271,7 @@ static u32 __init next_test_random(u32 max_bits)
+ {
+ 	u32 n_bits = hweight32(prandom_u32_state(&rnd_state)) % (max_bits + 1);
+ 
+-	return prandom_u32_state(&rnd_state) & (UINT_MAX >> (32 - n_bits));
++	return prandom_u32_state(&rnd_state) & GENMASK(n_bits, 0);
+ }
+ 
+ static unsigned long long __init next_test_random_ull(void)
+@@ -280,7 +280,7 @@ static unsigned long long __init next_test_random_ull(void)
+ 	u32 n_bits = (hweight32(rand1) * 3) % 64;
+ 	u64 val = (u64)prandom_u32_state(&rnd_state) * rand1;
+ 
+-	return val & (ULLONG_MAX >> (64 - n_bits));
++	return val & GENMASK_ULL(n_bits, 0);
+ }
+ 
+ #define random_for_type(T)				\
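
The test_scanf hunks replace hand-built MAX >> (width - n) shifts with GENMASK()/GENMASK_ULL(), which say "bits h down to l, inclusive" directly and avoid the undefined full-width shift when n_bits is 0. A userspace stand-in for the macro (the real one lives in <linux/bits.h>) shows the semantics:

    #include <stdio.h>

    /* Userspace stand-in for the kernel's GENMASK_ULL(). */
    #define GENMASK_ULL(h, l) \
            (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

    int main(void)
    {
            /* bits 7..0 inclusive -> 0xff */
            printf("GENMASK_ULL(7, 0)  = %#llx\n",
                   (unsigned long long)GENMASK_ULL(7, 0));
            /* bits 15..8 inclusive -> 0xff00 */
            printf("GENMASK_ULL(15, 8) = %#llx\n",
                   (unsigned long long)GENMASK_ULL(15, 8));
            return 0;
    }
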
+diff --git a/net/6lowpan/debugfs.c b/net/6lowpan/debugfs.c
+index 1c140af06d527..600b9563bfc53 100644
+--- a/net/6lowpan/debugfs.c
++++ b/net/6lowpan/debugfs.c
+@@ -170,7 +170,8 @@ static void lowpan_dev_debugfs_ctx_init(struct net_device *dev,
+ 	struct dentry *root;
+ 	char buf[32];
+ 
+-	WARN_ON_ONCE(id > LOWPAN_IPHC_CTX_TABLE_SIZE);
++	if (WARN_ON_ONCE(id >= LOWPAN_IPHC_CTX_TABLE_SIZE))
++		return;
+ 
+ 	sprintf(buf, "%d", id);
+ 
+diff --git a/net/bluetooth/cmtp/cmtp.h b/net/bluetooth/cmtp/cmtp.h
+index c32638dddbf94..f6b9dc4e408f2 100644
+--- a/net/bluetooth/cmtp/cmtp.h
++++ b/net/bluetooth/cmtp/cmtp.h
+@@ -26,7 +26,7 @@
+ #include <linux/types.h>
+ #include <net/bluetooth/bluetooth.h>
+ 
+-#define BTNAMSIZ 18
++#define BTNAMSIZ 21
+ 
+ /* CMTP ioctl defines */
+ #define CMTPCONNADD	_IOW('C', 200, int)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index e1a545c8a69f8..4c25bcd1ac4c0 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1343,6 +1343,12 @@ int hci_inquiry(void __user *arg)
+ 		goto done;
+ 	}
+ 
++	/* Restrict maximum inquiry length to 60 seconds */
++	if (ir.length > 60) {
++		err = -EINVAL;
++		goto done;
++	}
++
+ 	hci_dev_lock(hdev);
+ 	if (inquiry_cache_age(hdev) > INQUIRY_CACHE_AGE_MAX ||
+ 	    inquiry_cache_empty(hdev) || ir.flags & IREQ_CACHE_FLUSH) {
+@@ -1727,6 +1733,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 	hci_request_cancel_all(hdev);
+ 	hci_req_sync_lock(hdev);
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	if (!test_and_clear_bit(HCI_UP, &hdev->flags)) {
+ 		cancel_delayed_work_sync(&hdev->cmd_timer);
+ 		hci_req_sync_unlock(hdev);
+@@ -1798,14 +1812,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 		clear_bit(HCI_INIT, &hdev->flags);
+ 	}
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	/* flush cmd  work */
+ 	flush_work(&hdev->cmd_work);
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 3663f880df110..1e21e014efd22 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7725,7 +7725,7 @@ static int add_advertising(struct sock *sk, struct hci_dev *hdev,
+ 	 * advertising.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_ADVERTISING,
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+ 
+ 	if (cp->instance < 1 || cp->instance > hdev->le_num_of_adv_sets)
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index d9a4e88dacbb7..b5ab842c7c4a8 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -85,7 +85,6 @@ static void sco_sock_timeout(struct timer_list *t)
+ 	sk->sk_state_change(sk);
+ 	bh_unlock_sock(sk);
+ 
+-	sco_sock_kill(sk);
+ 	sock_put(sk);
+ }
+ 
+@@ -177,7 +176,6 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_sock_clear_timer(sk);
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+-		sco_sock_kill(sk);
+ 		sock_put(sk);
+ 	}
+ 
+@@ -394,8 +392,7 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
+-	    sock_flag(sk, SOCK_DEAD))
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+@@ -447,7 +444,6 @@ static void sco_sock_close(struct sock *sk)
+ 	lock_sock(sk);
+ 	__sco_sock_close(sk);
+ 	release_sock(sk);
+-	sco_sock_kill(sk);
+ }
+ 
+ static void sco_skb_put_cmsg(struct sk_buff *skb, struct msghdr *msg,
+@@ -773,6 +769,11 @@ static void sco_conn_defer_accept(struct hci_conn *conn, u16 setting)
+ 			cp.max_latency = cpu_to_le16(0xffff);
+ 			cp.retrans_effort = 0xff;
+ 			break;
++		default:
++			/* use CVSD settings as fallback */
++			cp.max_latency = cpu_to_le16(0xffff);
++			cp.retrans_effort = 0xff;
++			break;
+ 		}
+ 
+ 		hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ,
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 85032626de248..5a85a7b0feb25 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3801,10 +3801,12 @@ static void devlink_param_notify(struct devlink *devlink,
+ 				 struct devlink_param_item *param_item,
+ 				 enum devlink_command cmd);
+ 
+-static void devlink_reload_netns_change(struct devlink *devlink,
+-					struct net *dest_net)
++static void devlink_ns_change_notify(struct devlink *devlink,
++				     struct net *dest_net, struct net *curr_net,
++				     bool new)
+ {
+ 	struct devlink_param_item *param_item;
++	enum devlink_command cmd;
+ 
+ 	/* Userspace needs to be notified about devlink objects
+ 	 * removed from original and entering new network namespace.
+@@ -3812,17 +3814,18 @@ static void devlink_reload_netns_change(struct devlink *devlink,
+ 	 * reload process so the notifications are generated separatelly.
+ 	 */
+ 
+-	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_DEL);
+-	devlink_notify(devlink, DEVLINK_CMD_DEL);
++	if (!dest_net || net_eq(dest_net, curr_net))
++		return;
+ 
+-	__devlink_net_set(devlink, dest_net);
++	if (new)
++		devlink_notify(devlink, DEVLINK_CMD_NEW);
+ 
+-	devlink_notify(devlink, DEVLINK_CMD_NEW);
++	cmd = new ? DEVLINK_CMD_PARAM_NEW : DEVLINK_CMD_PARAM_DEL;
+ 	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_NEW);
++		devlink_param_notify(devlink, 0, param_item, cmd);
++
++	if (!new)
++		devlink_notify(devlink, DEVLINK_CMD_DEL);
+ }
+ 
+ static bool devlink_reload_supported(const struct devlink_ops *ops)
+@@ -3902,6 +3905,7 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 			  u32 *actions_performed, struct netlink_ext_ack *extack)
+ {
+ 	u32 remote_reload_stats[DEVLINK_RELOAD_STATS_ARRAY_SIZE];
++	struct net *curr_net;
+ 	int err;
+ 
+ 	if (!devlink->reload_enabled)
+@@ -3909,18 +3913,22 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 
+ 	memcpy(remote_reload_stats, devlink->stats.remote_reload_stats,
+ 	       sizeof(remote_reload_stats));
++
++	curr_net = devlink_net(devlink);
++	devlink_ns_change_notify(devlink, dest_net, curr_net, false);
+ 	err = devlink->ops->reload_down(devlink, !!dest_net, action, limit, extack);
+ 	if (err)
+ 		return err;
+ 
+-	if (dest_net && !net_eq(dest_net, devlink_net(devlink)))
+-		devlink_reload_netns_change(devlink, dest_net);
++	if (dest_net && !net_eq(dest_net, curr_net))
++		__devlink_net_set(devlink, dest_net);
+ 
+ 	err = devlink->ops->reload_up(devlink, action, limit, actions_performed, extack);
+ 	devlink_reload_failed_set(devlink, !!err);
+ 	if (err)
+ 		return err;
+ 
++	devlink_ns_change_notify(devlink, dest_net, curr_net, true);
+ 	WARN_ON(!(*actions_performed & BIT(action)));
+ 	/* Catch driver on updating the remote action within devlink reload */
+ 	WARN_ON(memcmp(remote_reload_stats, devlink->stats.remote_reload_stats,
+@@ -4117,7 +4125,7 @@ out_free_msg:
+ 
+ static void devlink_flash_update_begin_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE,
+@@ -4126,7 +4134,7 @@ static void devlink_flash_update_begin_notify(struct devlink *devlink)
+ 
+ static void devlink_flash_update_end_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE_END,
+diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
+index 00bb89b2d86fc..970906eb5b2cd 100644
+--- a/net/dsa/Kconfig
++++ b/net/dsa/Kconfig
+@@ -18,16 +18,6 @@ if NET_DSA
+ 
+ # Drivers must select the appropriate tagging format(s)
+ 
+-config NET_DSA_TAG_8021Q
+-	tristate
+-	select VLAN_8021Q
+-	help
+-	  Unlike the other tagging protocols, the 802.1Q config option simply
+-	  provides helpers for other tagging implementations that might rely on
+-	  VLAN in one way or another. It is not a complete solution.
+-
+-	  Drivers which use these helpers should select this as dependency.
+-
+ config NET_DSA_TAG_AR9331
+ 	tristate "Tag driver for Atheros AR9331 SoC with built-in switch"
+ 	help
+@@ -126,7 +116,6 @@ config NET_DSA_TAG_OCELOT_8021Q
+ 	tristate "Tag driver for Ocelot family of switches, using VLAN"
+ 	depends on MSCC_OCELOT_SWITCH_LIB || \
+ 	          (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
+-	select NET_DSA_TAG_8021Q
+ 	help
+ 	  Say Y or M if you want to enable support for tagging frames with a
+ 	  custom VLAN-based header. Frames that require timestamping, such as
+@@ -149,7 +138,7 @@ config NET_DSA_TAG_LAN9303
+ 
+ config NET_DSA_TAG_SJA1105
+ 	tristate "Tag driver for NXP SJA1105 switches"
+-	select NET_DSA_TAG_8021Q
++	depends on (NET_DSA_SJA1105 && NET_DSA_SJA1105_PTP) || !NET_DSA_SJA1105 || !NET_DSA_SJA1105_PTP
+ 	select PACKING
+ 	help
+ 	  Say Y or M if you want to enable support for tagging frames with the
+diff --git a/net/dsa/Makefile b/net/dsa/Makefile
+index 44bc79952b8b8..67ea009f242cb 100644
+--- a/net/dsa/Makefile
++++ b/net/dsa/Makefile
+@@ -1,10 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # the core
+ obj-$(CONFIG_NET_DSA) += dsa_core.o
+-dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o
++dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o tag_8021q.o
+ 
+ # tagging formats
+-obj-$(CONFIG_NET_DSA_TAG_8021Q) += tag_8021q.o
+ obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o
+ obj-$(CONFIG_NET_DSA_TAG_BRCM_COMMON) += tag_brcm.o
+ obj-$(CONFIG_NET_DSA_TAG_DSA_COMMON) += tag_dsa.o
+diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
+index f201c33980bf3..cddf7cb0f398f 100644
+--- a/net/dsa/dsa_priv.h
++++ b/net/dsa/dsa_priv.h
+@@ -234,8 +234,6 @@ int dsa_port_pre_bridge_flags(const struct dsa_port *dp,
+ int dsa_port_bridge_flags(const struct dsa_port *dp,
+ 			  struct switchdev_brport_flags flags,
+ 			  struct netlink_ext_ack *extack);
+-int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
+-		     struct netlink_ext_ack *extack);
+ int dsa_port_vlan_add(struct dsa_port *dp,
+ 		      const struct switchdev_obj_port_vlan *vlan,
+ 		      struct netlink_ext_ack *extack);
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 28b45b7e66df1..23e30198a90e6 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -186,10 +186,6 @@ static int dsa_port_switchdev_sync(struct dsa_port *dp,
+ 	if (err && err != -EOPNOTSUPP)
+ 		return err;
+ 
+-	err = dsa_port_mrouter(dp->cpu_dp, br_multicast_router(br), extack);
+-	if (err && err != -EOPNOTSUPP)
+-		return err;
+-
+ 	err = dsa_port_ageing_time(dp, br_get_ageing_time(br));
+ 	if (err && err != -EOPNOTSUPP)
+ 		return err;
+@@ -272,12 +268,6 @@ static void dsa_port_switchdev_unsync_attrs(struct dsa_port *dp)
+ 
+ 	/* VLAN filtering is handled by dsa_switch_bridge_leave */
+ 
+-	/* Some drivers treat the notification for having a local multicast
+-	 * router by allowing multicast to be flooded to the CPU, so we should
+-	 * allow this in standalone mode too.
+-	 */
+-	dsa_port_mrouter(dp->cpu_dp, true, NULL);
+-
+ 	/* Ageing time may be global to the switch chip, so don't change it
+ 	 * here because we have no good reason (or value) to change it to.
+ 	 */
+@@ -607,17 +597,6 @@ int dsa_port_bridge_flags(const struct dsa_port *dp,
+ 	return ds->ops->port_bridge_flags(ds, dp->index, flags, extack);
+ }
+ 
+-int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
+-		     struct netlink_ext_ack *extack)
+-{
+-	struct dsa_switch *ds = dp->ds;
+-
+-	if (!ds->ops->port_set_mrouter)
+-		return -EOPNOTSUPP;
+-
+-	return ds->ops->port_set_mrouter(ds, dp->index, mrouter, extack);
+-}
+-
+ int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
+ 			bool targeted_match)
+ {
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 23be8e01026bf..b34116b15d436 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -314,12 +314,6 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
+ 
+ 		ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, extack);
+ 		break;
+-	case SWITCHDEV_ATTR_ID_BRIDGE_MROUTER:
+-		if (!dsa_port_offloads_bridge(dp, attr->orig_dev))
+-			return -EOPNOTSUPP;
+-
+-		ret = dsa_port_mrouter(dp->cpu_dp, attr->u.mrouter, extack);
+-		break;
+ 	default:
+ 		ret = -EOPNOTSUPP;
+ 		break;
+diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
+index 4aa29f90eceae..0d1db3e37668d 100644
+--- a/net/dsa/tag_8021q.c
++++ b/net/dsa/tag_8021q.c
+@@ -493,5 +493,3 @@ void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
+ 	skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
+ }
+ EXPORT_SYMBOL_GPL(dsa_8021q_rcv);
+-
+-MODULE_LICENSE("GPL v2");
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index a6f20ee353355..94e33d3eaf621 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -586,18 +586,25 @@ static void fnhe_flush_routes(struct fib_nh_exception *fnhe)
+ 	}
+ }
+ 
+-static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
++static void fnhe_remove_oldest(struct fnhe_hash_bucket *hash)
+ {
+-	struct fib_nh_exception *fnhe, *oldest;
++	struct fib_nh_exception __rcu **fnhe_p, **oldest_p;
++	struct fib_nh_exception *fnhe, *oldest = NULL;
+ 
+-	oldest = rcu_dereference(hash->chain);
+-	for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe;
+-	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
+-		if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp))
++	for (fnhe_p = &hash->chain; ; fnhe_p = &fnhe->fnhe_next) {
++		fnhe = rcu_dereference_protected(*fnhe_p,
++						 lockdep_is_held(&fnhe_lock));
++		if (!fnhe)
++			break;
++		if (!oldest ||
++		    time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp)) {
+ 			oldest = fnhe;
++			oldest_p = fnhe_p;
++		}
+ 	}
+ 	fnhe_flush_routes(oldest);
+-	return oldest;
++	*oldest_p = oldest->fnhe_next;
++	kfree_rcu(oldest, rcu);
+ }
+ 
+ static u32 fnhe_hashfun(__be32 daddr)
+@@ -676,16 +683,21 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		if (rt)
+ 			fill_route_from_fnhe(rt, fnhe);
+ 	} else {
+-		if (depth > FNHE_RECLAIM_DEPTH)
+-			fnhe = fnhe_oldest(hash);
+-		else {
+-			fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
+-			if (!fnhe)
+-				goto out_unlock;
+-
+-			fnhe->fnhe_next = hash->chain;
+-			rcu_assign_pointer(hash->chain, fnhe);
++		/* Randomize max depth to avoid some side channel attacks. */
++		int max_depth = FNHE_RECLAIM_DEPTH +
++				prandom_u32_max(FNHE_RECLAIM_DEPTH);
++
++		while (depth > max_depth) {
++			fnhe_remove_oldest(hash);
++			depth--;
+ 		}
++
++		fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
++		if (!fnhe)
++			goto out_unlock;
++
++		fnhe->fnhe_next = hash->chain;
++
+ 		fnhe->fnhe_genid = genid;
+ 		fnhe->fnhe_daddr = daddr;
+ 		fnhe->fnhe_gw = gw;
+@@ -693,6 +705,8 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		fnhe->fnhe_mtu_locked = lock;
+ 		fnhe->fnhe_expires = max(1UL, expires);
+ 
++		rcu_assign_pointer(hash->chain, fnhe);
++
+ 		/* Exception created; mark the cached routes for the nexthop
+ 		 * stale, so anyone caching it rechecks if this exception
+ 		 * applies to them.
+@@ -3170,7 +3184,7 @@ static struct sk_buff *inet_rtm_getroute_build_skb(__be32 src, __be32 dst,
+ 		udph = skb_put_zero(skb, sizeof(struct udphdr));
+ 		udph->source = sport;
+ 		udph->dest = dport;
+-		udph->len = sizeof(struct udphdr);
++		udph->len = htons(sizeof(struct udphdr));
+ 		udph->check = 0;
+ 		break;
+ 	}
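
Instead of evicting a single oldest exception once a bucket reaches a fixed depth, the route hunk above trims the bucket down to a randomized cap, FNHE_RECLAIM_DEPTH plus a random value below FNHE_RECLAIM_DEPTH, so eviction timing no longer reveals the exact depth to a prober; the IPv6 hunk further below applies the same idea. A standalone sketch of the randomized cap:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define RECLAIM_DEPTH 5

    static int depth = 12;  /* pretend the bucket already holds 12 exceptions */

    static void remove_oldest(void) { depth--; }

    int main(void)
    {
            int max_depth;

            srand((unsigned)time(NULL));

            /* Randomized cap in [RECLAIM_DEPTH, 2*RECLAIM_DEPTH - 1], mirroring
             * FNHE_RECLAIM_DEPTH + prandom_u32_max(FNHE_RECLAIM_DEPTH). */
            max_depth = RECLAIM_DEPTH + rand() % RECLAIM_DEPTH;

            while (depth > max_depth)
                    remove_oldest();
            printf("bucket trimmed to %d entries (cap %d)\n", depth, max_depth);
            return 0;
    }
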
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a692626c19e44..db07c05736b25 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2451,6 +2451,7 @@ static void *tcp_get_idx(struct seq_file *seq, loff_t pos)
+ static void *tcp_seek_last_pos(struct seq_file *seq)
+ {
+ 	struct tcp_iter_state *st = seq->private;
++	int bucket = st->bucket;
+ 	int offset = st->offset;
+ 	int orig_num = st->num;
+ 	void *rc = NULL;
+@@ -2461,7 +2462,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 			break;
+ 		st->state = TCP_SEQ_STATE_LISTENING;
+ 		rc = listening_get_next(seq, NULL);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = listening_get_next(seq, rc);
+ 		if (rc)
+ 			break;
+@@ -2472,7 +2473,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 		if (st->bucket > tcp_hashinfo.ehash_mask)
+ 			break;
+ 		rc = established_get_first(seq);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = established_get_next(seq, rc);
+ 	}
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index c5e8ecb96426b..6033403021019 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1657,6 +1657,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	struct in6_addr *src_key = NULL;
+ 	struct rt6_exception *rt6_ex;
+ 	struct fib6_nh *nh = res->nh;
++	int max_depth;
+ 	int err = 0;
+ 
+ 	spin_lock_bh(&rt6_exception_lock);
+@@ -1711,7 +1712,9 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	bucket->depth++;
+ 	net->ipv6.rt6_stats->fib_rt_cache++;
+ 
+-	if (bucket->depth > FIB6_MAX_DEPTH)
++	/* Randomize max depth to avoid some side channel attacks. */
++	max_depth = FIB6_MAX_DEPTH + prandom_u32_max(FIB6_MAX_DEPTH);
++	while (bucket->depth > max_depth)
+ 		rt6_exception_remove_oldest(bucket);
+ 
+ out:
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index fcae76ddd586c..45fb517591ee9 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1020,7 +1020,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 
+ 			iftd = &sband->iftype_data[i];
+ 
+-			supp_he = supp_he || (iftd && iftd->he_cap.has_he);
++			supp_he = supp_he || iftd->he_cap.has_he;
+ 		}
+ 
+ 		/* HT, VHT, HE require QoS, thus >= 4 queues */
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 8509778ff31f2..fa09a369214db 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3242,7 +3242,9 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
++	if (!ieee80211_amsdu_realloc_pad(local, skb,
++					 sizeof(*amsdu_hdr) +
++					 local->hw.extra_tx_headroom))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index baf235721c43f..000bb3da4f77f 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -187,14 +187,14 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		}
+ 	doi_def->map.std->lvl.local = kcalloc(doi_def->map.std->lvl.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.local == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+ 	}
+ 	doi_def->map.std->lvl.cipso = kcalloc(doi_def->map.std->lvl.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.cipso == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+@@ -263,7 +263,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.local = kcalloc(
+ 					      doi_def->map.std->cat.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.local == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
+@@ -271,7 +271,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.cipso = kcalloc(
+ 					      doi_def->map.std->cat.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.cipso == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 0c30908628bae..bdbda61db8b96 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -493,7 +493,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		goto err;
+ 	}
+ 
+-	if (!size || len != ALIGN(size, 4) + hdrlen)
++	if (!size || size & 3 || len != size + hdrlen)
+ 		goto err;
+ 
+ 	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+@@ -506,8 +506,12 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 
+ 	if (cb->type == QRTR_TYPE_NEW_SERVER) {
+ 		/* Remote node endpoint can bridge other distant nodes */
+-		const struct qrtr_ctrl_pkt *pkt = data + hdrlen;
++		const struct qrtr_ctrl_pkt *pkt;
+ 
++		if (size < sizeof(*pkt))
++			goto err;
++
++		pkt = data + hdrlen;
+ 		qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
+ 	}
+ 
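
The qrtr hunk hardens header parsing: the payload size must be non-zero, 4-byte aligned, and at least sizeof(*pkt) before the control packet is dereferenced. The validate-before-cast pattern in isolation; the struct layout is invented:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct ctrl_pkt {               /* invented layout, for illustration */
            uint32_t type;
            uint32_t node;
    };

    static int parse(const void *data, size_t hdrlen, size_t size)
    {
            const struct ctrl_pkt *pkt;

            if (!size || (size & 3))        /* reject empty or unaligned payloads */
                    return -1;
            if (size < sizeof(*pkt))        /* never dereference a short buffer */
                    return -1;
            pkt = (const void *)((const char *)data + hdrlen);
            printf("node %u\n", pkt->node);
            return 0;
    }

    int main(void)
    {
            unsigned char buf[16] = { 0 };

            return parse(buf, 8, 4) == -1 ? 0 : 1;  /* 4 < sizeof(*pkt): rejected */
    }
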
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index b79a7e27bb315..38a3a8394bbda 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1614,7 +1614,7 @@ cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **t
+ 	err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+ 	if (err) {
+ 		kfree(cl);
+-		return err;
++		goto failure;
+ 	}
+ 
+ 	if (tca[TCA_RATE]) {
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 5f7ac27a52649..f22d26a2c89fa 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -125,6 +125,7 @@ struct htb_class {
+ 		struct htb_class_leaf {
+ 			int		deficit[TC_HTB_MAXDEPTH];
+ 			struct Qdisc	*q;
++			struct netdev_queue *offload_queue;
+ 		} leaf;
+ 		struct htb_class_inner {
+ 			struct htb_prio clprio[TC_HTB_NUMPRIO];
+@@ -1411,24 +1412,47 @@ htb_graft_helper(struct netdev_queue *dev_queue, struct Qdisc *new_q)
+ 	return old_q;
+ }
+ 
+-static void htb_offload_move_qdisc(struct Qdisc *sch, u16 qid_old, u16 qid_new)
++static struct netdev_queue *htb_offload_get_queue(struct htb_class *cl)
++{
++	struct netdev_queue *queue;
++
++	queue = cl->leaf.offload_queue;
++	if (!(cl->leaf.q->flags & TCQ_F_BUILTIN))
++		WARN_ON(cl->leaf.q->dev_queue != queue);
++
++	return queue;
++}
++
++static void htb_offload_move_qdisc(struct Qdisc *sch, struct htb_class *cl_old,
++				   struct htb_class *cl_new, bool destroying)
+ {
+ 	struct netdev_queue *queue_old, *queue_new;
+ 	struct net_device *dev = qdisc_dev(sch);
+-	struct Qdisc *qdisc;
+ 
+-	queue_old = netdev_get_tx_queue(dev, qid_old);
+-	queue_new = netdev_get_tx_queue(dev, qid_new);
++	queue_old = htb_offload_get_queue(cl_old);
++	queue_new = htb_offload_get_queue(cl_new);
+ 
+-	if (dev->flags & IFF_UP)
+-		dev_deactivate(dev);
+-	qdisc = dev_graft_qdisc(queue_old, NULL);
+-	qdisc->dev_queue = queue_new;
+-	qdisc = dev_graft_qdisc(queue_new, qdisc);
+-	if (dev->flags & IFF_UP)
+-		dev_activate(dev);
++	if (!destroying) {
++		struct Qdisc *qdisc;
+ 
+-	WARN_ON(!(qdisc->flags & TCQ_F_BUILTIN));
++		if (dev->flags & IFF_UP)
++			dev_deactivate(dev);
++		qdisc = dev_graft_qdisc(queue_old, NULL);
++		WARN_ON(qdisc != cl_old->leaf.q);
++	}
++
++	if (!(cl_old->leaf.q->flags & TCQ_F_BUILTIN))
++		cl_old->leaf.q->dev_queue = queue_new;
++	cl_old->leaf.offload_queue = queue_new;
++
++	if (!destroying) {
++		struct Qdisc *qdisc;
++
++		qdisc = dev_graft_qdisc(queue_new, cl_old->leaf.q);
++		if (dev->flags & IFF_UP)
++			dev_activate(dev);
++		WARN_ON(!(qdisc->flags & TCQ_F_BUILTIN));
++	}
+ }
+ 
+ static int htb_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+@@ -1442,10 +1466,8 @@ static int htb_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+ 	if (cl->level)
+ 		return -EINVAL;
+ 
+-	if (q->offload) {
+-		dev_queue = new->dev_queue;
+-		WARN_ON(dev_queue != cl->leaf.q->dev_queue);
+-	}
++	if (q->offload)
++		dev_queue = htb_offload_get_queue(cl);
+ 
+ 	if (!new) {
+ 		new = qdisc_create_dflt(dev_queue, &pfifo_qdisc_ops,
+@@ -1514,6 +1536,8 @@ static void htb_parent_to_leaf(struct Qdisc *sch, struct htb_class *cl,
+ 	parent->ctokens = parent->cbuffer;
+ 	parent->t_c = ktime_get_ns();
+ 	parent->cmode = HTB_CAN_SEND;
++	if (q->offload)
++		parent->leaf.offload_queue = cl->leaf.offload_queue;
+ }
+ 
+ static void htb_parent_to_leaf_offload(struct Qdisc *sch,
+@@ -1534,6 +1558,7 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 				     struct netlink_ext_ack *extack)
+ {
+ 	struct tc_htb_qopt_offload offload_opt;
++	struct netdev_queue *dev_queue;
+ 	struct Qdisc *q = cl->leaf.q;
+ 	struct Qdisc *old = NULL;
+ 	int err;
+@@ -1542,16 +1567,15 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 		return -EINVAL;
+ 
+ 	WARN_ON(!q);
+-	if (!destroying) {
+-		/* On destroy of HTB, two cases are possible:
+-		 * 1. q is a normal qdisc, but q->dev_queue has noop qdisc.
+-		 * 2. q is a noop qdisc (for nodes that were inner),
+-		 *    q->dev_queue is noop_netdev_queue.
++	dev_queue = htb_offload_get_queue(cl);
++	old = htb_graft_helper(dev_queue, NULL);
++	if (destroying)
++		/* Before HTB is destroyed, the kernel grafts noop_qdisc to
++		 * all queues.
+ 		 */
+-		old = htb_graft_helper(q->dev_queue, NULL);
+-		WARN_ON(!old);
++		WARN_ON(!(old->flags & TCQ_F_BUILTIN));
++	else
+ 		WARN_ON(old != q);
+-	}
+ 
+ 	if (cl->parent) {
+ 		cl->parent->bstats_bias.bytes += q->bstats.bytes;
+@@ -1570,18 +1594,17 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 	if (!err || destroying)
+ 		qdisc_put(old);
+ 	else
+-		htb_graft_helper(q->dev_queue, old);
++		htb_graft_helper(dev_queue, old);
+ 
+ 	if (last_child)
+ 		return err;
+ 
+-	if (!err && offload_opt.moved_qid != 0) {
+-		if (destroying)
+-			q->dev_queue = netdev_get_tx_queue(qdisc_dev(sch),
+-							   offload_opt.qid);
+-		else
+-			htb_offload_move_qdisc(sch, offload_opt.moved_qid,
+-					       offload_opt.qid);
++	if (!err && offload_opt.classid != TC_H_MIN(cl->common.classid)) {
++		u32 classid = TC_H_MAJ(sch->handle) |
++			      TC_H_MIN(offload_opt.classid);
++		struct htb_class *moved_cl = htb_find(classid, sch);
++
++		htb_offload_move_qdisc(sch, moved_cl, cl, destroying);
+ 	}
+ 
+ 	return err;
+@@ -1704,9 +1727,11 @@ static int htb_delete(struct Qdisc *sch, unsigned long arg,
+ 	}
+ 
+ 	if (last_child) {
+-		struct netdev_queue *dev_queue;
++		struct netdev_queue *dev_queue = sch->dev_queue;
++
++		if (q->offload)
++			dev_queue = htb_offload_get_queue(cl);
+ 
+-		dev_queue = q->offload ? cl->leaf.q->dev_queue : sch->dev_queue;
+ 		new_q = qdisc_create_dflt(dev_queue, &pfifo_qdisc_ops,
+ 					  cl->parent->common.classid,
+ 					  NULL);
+@@ -1878,7 +1903,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ 			}
+ 			dev_queue = netdev_get_tx_queue(dev, offload_opt.qid);
+ 		} else { /* First child. */
+-			dev_queue = parent->leaf.q->dev_queue;
++			dev_queue = htb_offload_get_queue(parent);
+ 			old_q = htb_graft_helper(dev_queue, NULL);
+ 			WARN_ON(old_q != parent->leaf.q);
+ 			offload_opt = (struct tc_htb_qopt_offload) {
+@@ -1935,6 +1960,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ 
+ 		/* leaf (we) needs elementary qdisc */
+ 		cl->leaf.q = new_q ? new_q : &noop_qdisc;
++		if (q->offload)
++			cl->leaf.offload_queue = dev_queue;
+ 
+ 		cl->parent = parent;
+ 
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 0de918cb3d90d..a47e290b0668e 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1629,6 +1629,21 @@ u32 svc_max_payload(const struct svc_rqst *rqstp)
+ }
+ EXPORT_SYMBOL_GPL(svc_max_payload);
+ 
++/**
++ * svc_proc_name - Return RPC procedure name in string form
++ * @rqstp: svc_rqst to operate on
++ *
++ * Return value:
++ *   Pointer to a NUL-terminated string
++ */
++const char *svc_proc_name(const struct svc_rqst *rqstp)
++{
++	if (rqstp && rqstp->rq_procinfo)
++		return rqstp->rq_procinfo->pc_name;
++	return "unknown";
++}
++
++
+ /**
+  * svc_encode_result_payload - mark a range of bytes as a result payload
+  * @rqstp: svc_rqst to operate on
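
svc_proc_name() is a defensive accessor: it NULL-checks both the request and its procedure info and falls back to "unknown", so trace consumers always receive a valid string. The same shape, standalone:

    #include <stdio.h>

    struct proc_info { const char *pc_name; };
    struct rqst { const struct proc_info *rq_procinfo; };

    static const char *proc_name(const struct rqst *rq)
    {
            if (rq && rq->rq_procinfo)
                    return rq->rq_procinfo->pc_name;
            return "unknown";       /* callers never see a NULL string */
    }

    int main(void)
    {
            struct proc_info pi = { .pc_name = "GETATTR" };
            struct rqst rq = { .rq_procinfo = &pi };

            printf("%s\n", proc_name(&rq));
            printf("%s\n", proc_name(NULL));
            return 0;
    }
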
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 5764116125237..c7d7d35867302 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -831,7 +831,7 @@ int main(int argc, char **argv)
+ 	memset(cpu, 0, n_cpus * sizeof(int));
+ 
+ 	/* Parse commands line args */
+-	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:",
++	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:n",
+ 				  long_options, &longindex)) != -1) {
+ 		switch (opt) {
+ 		case 'd':
+diff --git a/samples/pktgen/pktgen_sample04_many_flows.sh b/samples/pktgen/pktgen_sample04_many_flows.sh
+index 56c5f5af350f6..cff51f861506d 100755
+--- a/samples/pktgen/pktgen_sample04_many_flows.sh
++++ b/samples/pktgen/pktgen_sample04_many_flows.sh
+@@ -13,13 +13,15 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -62,8 +64,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/samples/pktgen/pktgen_sample05_flow_per_thread.sh b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+index 6e0effabca594..3578d0aa4ac55 100755
+--- a/samples/pktgen/pktgen_sample05_flow_per_thread.sh
++++ b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+@@ -17,14 +17,16 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$BURST" ]     && BURST=32
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -52,8 +54,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index d0ceada99243a..f3a9cc201c8c2 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -6,7 +6,6 @@ config IMA
+ 	select SECURITYFS
+ 	select CRYPTO
+ 	select CRYPTO_HMAC
+-	select CRYPTO_MD5
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_HASH_INFO
+ 	select TCG_TPM if HAS_IOMEM && !UML
+diff --git a/security/integrity/ima/ima_mok.c b/security/integrity/ima/ima_mok.c
+index 1e5c019161738..95cc31525c573 100644
+--- a/security/integrity/ima/ima_mok.c
++++ b/security/integrity/ima/ima_mok.c
+@@ -21,7 +21,7 @@ struct key *ima_blacklist_keyring;
+ /*
+  * Allocate the IMA blacklist keyring
+  */
+-__init int ima_mok_init(void)
++static __init int ima_mok_init(void)
+ {
+ 	struct key_restriction *restriction;
+ 
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 4a56a52adab5d..b9d5d7a0975b3 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -117,6 +117,13 @@ static struct snd_soc_dai_driver rt5682_dai[] = {
+ 	},
+ };
+ 
++static void rt5682_i2c_disable_regulators(void *data)
++{
++	struct rt5682_priv *rt5682 = data;
++
++	regulator_bulk_disable(ARRAY_SIZE(rt5682->supplies), rt5682->supplies);
++}
++
+ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		const struct i2c_device_id *id)
+ {
+@@ -157,6 +164,11 @@ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		return ret;
+ 	}
+ 
++	ret = devm_add_action_or_reset(&i2c->dev, rt5682_i2c_disable_regulators,
++				       rt5682);
++	if (ret)
++		return ret;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(rt5682->supplies),
+ 				    rt5682->supplies);
+ 	if (ret) {
+@@ -282,10 +294,7 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ 
+ static int rt5682_i2c_remove(struct i2c_client *client)
+ {
+-	struct rt5682_priv *rt5682 = i2c_get_clientdata(client);
+-
+ 	rt5682_i2c_shutdown(client);
+-	regulator_bulk_disable(ARRAY_SIZE(rt5682->supplies), rt5682->supplies);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index 2e9175b37dc9c..254a016cb1f36 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -1131,7 +1131,7 @@ static struct snd_soc_dai_driver aic32x4_tas2505_dai = {
+ 	.playback = {
+ 			 .stream_name = "Playback",
+ 			 .channels_min = 1,
+-			 .channels_max = 1,
++			 .channels_max = 2,
+ 			 .rates = SNDRV_PCM_RATE_8000_96000,
+ 			 .formats = AIC32X4_FORMATS,},
+ 	.ops = &aic32x4_ops,
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 86c92e03ea5d4..d885ced34f606 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -4076,6 +4076,16 @@ static int wcd9335_setup_irqs(struct wcd9335_codec *wcd)
+ 	return ret;
+ }
+ 
++static void wcd9335_teardown_irqs(struct wcd9335_codec *wcd)
++{
++	int i;
++
++	/* disable interrupts on all slave ports */
++	for (i = 0; i < WCD9335_SLIM_NUM_PORT_REG; i++)
++		regmap_write(wcd->if_regmap, WCD9335_SLIM_PGD_PORT_INT_EN0 + i,
++			     0x00);
++}
++
+ static void wcd9335_cdc_sido_ccl_enable(struct wcd9335_codec *wcd,
+ 					bool ccl_flag)
+ {
+@@ -4844,6 +4854,7 @@ static void wcd9335_codec_init(struct snd_soc_component *component)
+ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(component->dev);
++	int ret;
+ 	int i;
+ 
+ 	snd_soc_component_init_regmap(component, wcd->regmap);
+@@ -4861,7 +4872,15 @@ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ 	for (i = 0; i < NUM_CODEC_DAIS; i++)
+ 		INIT_LIST_HEAD(&wcd->dai[i].slim_ch_list);
+ 
+-	return wcd9335_setup_irqs(wcd);
++	ret = wcd9335_setup_irqs(wcd);
++	if (ret)
++		goto free_clsh_ctrl;
++
++	return 0;
++
++free_clsh_ctrl:
++	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
++	return ret;
+ }
+ 
+ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+@@ -4869,7 +4888,7 @@ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(comp->dev);
+ 
+ 	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
+-	free_irq(regmap_irq_get_virq(wcd->irq_data, WCD9335_IRQ_SLIMBUS), wcd);
++	wcd9335_teardown_irqs(wcd);
+ }
+ 
+ static int wcd9335_codec_set_sysclk(struct snd_soc_component *comp,
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index fe15cbc7bcafd..a4d4cbf716a1c 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -747,6 +747,8 @@ static void wm_adsp2_init_debugfs(struct wm_adsp *dsp,
+ static void wm_adsp2_cleanup_debugfs(struct wm_adsp *dsp)
+ {
+ 	wm_adsp_debugfs_clear(dsp);
++	debugfs_remove_recursive(dsp->debugfs_root);
++	dsp->debugfs_root = NULL;
+ }
+ #else
+ static inline void wm_adsp2_init_debugfs(struct wm_adsp *dsp,
+diff --git a/sound/soc/fsl/fsl_rpmsg.c b/sound/soc/fsl/fsl_rpmsg.c
+index ea5c973e2e846..d60f4dac6c1b3 100644
+--- a/sound/soc/fsl/fsl_rpmsg.c
++++ b/sound/soc/fsl/fsl_rpmsg.c
+@@ -165,25 +165,25 @@ static int fsl_rpmsg_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Get the optional clocks */
+-	rpmsg->ipg = devm_clk_get(&pdev->dev, "ipg");
++	rpmsg->ipg = devm_clk_get_optional(&pdev->dev, "ipg");
+ 	if (IS_ERR(rpmsg->ipg))
+-		rpmsg->ipg = NULL;
++		return PTR_ERR(rpmsg->ipg);
+ 
+-	rpmsg->mclk = devm_clk_get(&pdev->dev, "mclk");
++	rpmsg->mclk = devm_clk_get_optional(&pdev->dev, "mclk");
+ 	if (IS_ERR(rpmsg->mclk))
+-		rpmsg->mclk = NULL;
++		return PTR_ERR(rpmsg->mclk);
+ 
+-	rpmsg->dma = devm_clk_get(&pdev->dev, "dma");
++	rpmsg->dma = devm_clk_get_optional(&pdev->dev, "dma");
+ 	if (IS_ERR(rpmsg->dma))
+-		rpmsg->dma = NULL;
++		return PTR_ERR(rpmsg->dma);
+ 
+-	rpmsg->pll8k = devm_clk_get(&pdev->dev, "pll8k");
++	rpmsg->pll8k = devm_clk_get_optional(&pdev->dev, "pll8k");
+ 	if (IS_ERR(rpmsg->pll8k))
+-		rpmsg->pll8k = NULL;
++		return PTR_ERR(rpmsg->pll8k);
+ 
+-	rpmsg->pll11k = devm_clk_get(&pdev->dev, "pll11k");
++	rpmsg->pll11k = devm_clk_get_optional(&pdev->dev, "pll11k");
+ 	if (IS_ERR(rpmsg->pll11k))
+-		rpmsg->pll11k = NULL;
++		return PTR_ERR(rpmsg->pll11k);
+ 
+ 	platform_set_drvdata(pdev, rpmsg);
+ 	pm_runtime_enable(&pdev->dev);
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index a31a7a7bbf667..2b43459adc33a 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -199,7 +199,7 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV0_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x03, 3, 8, 24);
++							0x30, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+ 						"DEV0 TDM slot err:%d\n", ret);
+@@ -208,10 +208,10 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV1_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x0C, 3, 8, 24);
++							0xC0, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+-						"DEV0 TDM slot err:%d\n", ret);
++						"DEV1 TDM slot err:%d\n", ret);
+ 				return ret;
+ 			}
+ 		}
+@@ -311,24 +311,6 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	 * The above 2 loops are mutually exclusive based on the stream direction,
+ 	 * thus rtd_dpcm variable will never be overwritten
+ 	 */
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
+-	 * where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
+-	 * Skipping the port wise FE and BE configuration for kblda7219m98373 &
+-	 * kblmax98373 as the topology (FE & BE) supports S24_LE only.
+-	 */
+-
+-	if (!strcmp(rtd->card->name, "kblda7219m98373") ||
+-		!strcmp(rtd->card->name, "kblmax98373")) {
+-		/* The ADSP will convert the FE rate to 48k, stereo */
+-		rate->min = rate->max = 48000;
+-		chan->min = chan->max = DUAL_CHANNEL;
+-
+-		/* set SSP to 24 bit */
+-		snd_mask_none(fmt);
+-		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S24_LE);
+-		return 0;
+-	}
+ 
+ 	/*
+ 	 * The ADSP will convert the FE rate to 48k, stereo, 24 bit
+@@ -479,31 +461,20 @@ static struct snd_pcm_hw_constraint_list constraints_channels_quad = {
+ static int kbl_fe_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	/*
+ 	 * On this platform for PCM device we support,
+ 	 * 48Khz
+ 	 * stereo
++	 * 16 bit audio
+ 	 */
+ 
+ 	runtime->hw.channels_max = DUAL_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 					   &constraints_channels);
+-	/*
+-	 * Setup S24_LE (32 bit container and 24 bit valid data) for
+-	 * kblda7219m98373 & kblmax98373. For kblda7219m98927 &
+-	 * kblmax98927 keeping it as 16/16 due to topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-
+-	} else {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+-	}
++
++	runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
++	snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+ 
+ 	snd_pcm_hw_constraint_list(runtime, 0,
+ 				SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+@@ -536,23 +507,11 @@ static int kabylake_dmic_fixup(struct snd_soc_pcm_runtime *rtd,
+ static int kabylake_dmic_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	runtime->hw.channels_min = runtime->hw.channels_max = QUAD_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 			&constraints_channels_quad);
+ 
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE.
+-	 * The DMIC also configured for S24_LE. Forcing the DMIC format to
+-	 * S24_LE due to the topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-	}
+-
+ 	return snd_pcm_hw_constraint_list(substream->runtime, 0,
+ 			SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+ }
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cml-match.c b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+index 42ef51c3fb4f4..b591c6fd13fdd 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cml-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+@@ -75,7 +75,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_cml_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "cml_da7219_max98357a",
++		.drv_name = "cml_da7219_mx98357a",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &max98390_spk_codecs,
+ 		.sof_fw_filename = "sof-cml.ri",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+index ba5ff468c265a..741bf2f9e081f 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+@@ -87,7 +87,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_kbl_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "kbl_da7219_max98357a",
++		.drv_name = "kbl_da7219_mx98357a",
+ 		.fw_filename = "intel/dsp_fw_kbl.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &kbl_7219_98357_codecs,
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index c0fdab39e7c28..09037d555ec49 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -113,7 +113,7 @@ static int is_skl_dsp_widget_type(struct snd_soc_dapm_widget *w,
+ 
+ static void skl_dump_mconfig(struct skl_dev *skl, struct skl_module_cfg *mcfg)
+ {
+-	struct skl_module_iface *iface = &mcfg->module->formats[0];
++	struct skl_module_iface *iface = &mcfg->module->formats[mcfg->fmt_idx];
+ 
+ 	dev_dbg(skl->dev, "Dumping config\n");
+ 	dev_dbg(skl->dev, "Input Format:\n");
+@@ -195,8 +195,8 @@ static void skl_tplg_update_params_fixup(struct skl_module_cfg *m_cfg,
+ 	struct skl_module_fmt *in_fmt, *out_fmt;
+ 
+ 	/* Fixups will be applied to pin 0 only */
+-	in_fmt = &m_cfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &m_cfg->module->formats[0].outputs[0].fmt;
++	in_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		if (is_fe) {
+@@ -239,9 +239,9 @@ static void skl_tplg_update_buffer_size(struct skl_dev *skl,
+ 	/* Since fixups is applied to pin 0 only, ibs, obs needs
+ 	 * change for pin 0 only
+ 	 */
+-	res = &mcfg->module->resources[0];
+-	in_fmt = &mcfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &mcfg->module->formats[0].outputs[0].fmt;
++	res = &mcfg->module->resources[mcfg->res_idx];
++	in_fmt = &mcfg->module->formats[mcfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &mcfg->module->formats[mcfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (mcfg->m_type == SKL_MODULE_TYPE_SRCINT)
+ 		multiplier = 5;
+@@ -1463,12 +1463,6 @@ static int skl_tplg_tlv_control_set(struct snd_kcontrol *kcontrol,
+ 	struct skl_dev *skl = get_skl_ctx(w->dapm->dev);
+ 
+ 	if (ac->params) {
+-		/*
+-		 * Widget data is expected to be stripped of T and L
+-		 */
+-		size -= 2 * sizeof(unsigned int);
+-		data += 2;
+-
+ 		if (size > ac->max)
+ 			return -EINVAL;
+ 		ac->size = size;
+@@ -1637,11 +1631,12 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 			struct skl_module_cfg *mconfig,
+ 			struct skl_pipe_params *params)
+ {
+-	struct skl_module_res *res = &mconfig->module->resources[0];
++	struct skl_module_res *res;
+ 	struct skl_dev *skl = get_skl_ctx(dev);
+ 	struct skl_module_fmt *format = NULL;
+ 	u8 cfg_idx = mconfig->pipe->cur_config_idx;
+ 
++	res = &mconfig->module->resources[mconfig->res_idx];
+ 	skl_tplg_fill_dma_id(mconfig, params);
+ 	mconfig->fmt_idx = mconfig->mod_cfg[cfg_idx].fmt_idx;
+ 	mconfig->res_idx = mconfig->mod_cfg[cfg_idx].res_idx;
+@@ -1650,9 +1645,9 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 		return 0;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		format = &mconfig->module->formats[0].inputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].inputs[0].fmt;
+ 	else
+-		format = &mconfig->module->formats[0].outputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].outputs[0].fmt;
+ 
+ 	/* set the hw_params */
+ 	format->s_freq = params->s_freq;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+index c4a598cbbdaa1..14e77df06b011 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
++++ b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+@@ -1119,25 +1119,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->regmap = syscon_node_to_regmap(dev->parent->of_node);
+ 	if (IS_ERR(afe->regmap)) {
+ 		dev_err(dev, "could not get regmap from parent\n");
+-		return PTR_ERR(afe->regmap);
++		ret = PTR_ERR(afe->regmap);
++		goto err_pm_disable;
+ 	}
+ 	ret = regmap_attach_dev(dev, afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_warn(dev, "regmap_attach_dev fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	rstc = devm_reset_control_get(dev, "audiosys");
+ 	if (IS_ERR(rstc)) {
+ 		ret = PTR_ERR(rstc);
+ 		dev_err(dev, "could not get audiosys reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = reset_control_reset(rstc);
+ 	if (ret) {
+ 		dev_err(dev, "failed to trigger audio reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* enable clock for regcache get default value from hw */
+@@ -1147,7 +1148,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	ret = regmap_reinit_cache(afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_err(dev, "regmap_reinit_cache fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+@@ -1160,8 +1161,10 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->memif_size = MT8183_MEMIF_NUM;
+ 	afe->memif = devm_kcalloc(dev, afe->memif_size, sizeof(*afe->memif),
+ 				  GFP_KERNEL);
+-	if (!afe->memif)
+-		return -ENOMEM;
++	if (!afe->memif) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->memif_size; i++) {
+ 		afe->memif[i].data = &memif_data[i];
+@@ -1178,22 +1181,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->irqs_size = MT8183_IRQ_NUM;
+ 	afe->irqs = devm_kcalloc(dev, afe->irqs_size, sizeof(*afe->irqs),
+ 				 GFP_KERNEL);
+-	if (!afe->irqs)
+-		return -ENOMEM;
++	if (!afe->irqs) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->irqs_size; i++)
+ 		afe->irqs[i].irq_data = &irq_data[i];
+ 
+ 	/* request irq */
+ 	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id < 0)
+-		return irq_id;
++	if (irq_id < 0) {
++		ret = irq_id;
++		goto err_pm_disable;
++	}
+ 
+ 	ret = devm_request_irq(dev, irq_id, mt8183_afe_irq_handler,
+ 			       IRQF_TRIGGER_NONE, "asys-isr", (void *)afe);
+ 	if (ret) {
+ 		dev_err(dev, "could not request_irq for asys-isr\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* init sub_dais */
+@@ -1204,7 +1211,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_warn(afe->dev, "dai register i %d fail, ret %d\n",
+ 				 i, ret);
+-			return ret;
++			goto err_pm_disable;
+ 		}
+ 	}
+ 
+@@ -1213,7 +1220,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_warn(afe->dev, "mtk_afe_combine_sub_dai fail, ret %d\n",
+ 			 ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	afe->mtk_afe_hardware = &mt8183_afe_hardware;
+@@ -1229,7 +1236,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      NULL, 0);
+ 	if (ret) {
+ 		dev_warn(dev, "err_platform\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_component(afe->dev,
+@@ -1238,10 +1245,14 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      afe->num_dai_drivers);
+ 	if (ret) {
+ 		dev_warn(dev, "err_dai_component\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
+ }
+ 
+ static int mt8183_afe_pcm_dev_remove(struct platform_device *pdev)
+diff --git a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+index 7a1724f5ff4c6..31c280339c503 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
++++ b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+@@ -2229,12 +2229,13 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->regmap = syscon_node_to_regmap(dev->parent->of_node);
+ 	if (IS_ERR(afe->regmap)) {
+ 		dev_err(dev, "could not get regmap from parent\n");
+-		return PTR_ERR(afe->regmap);
++		ret = PTR_ERR(afe->regmap);
++		goto err_pm_disable;
+ 	}
+ 	ret = regmap_attach_dev(dev, afe->regmap, &mt8192_afe_regmap_config);
+ 	if (ret) {
+ 		dev_warn(dev, "regmap_attach_dev fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* enable clock for regcache get default value from hw */
+@@ -2244,7 +2245,7 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	ret = regmap_reinit_cache(afe->regmap, &mt8192_afe_regmap_config);
+ 	if (ret) {
+ 		dev_err(dev, "regmap_reinit_cache fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+@@ -2257,8 +2258,10 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->memif_size = MT8192_MEMIF_NUM;
+ 	afe->memif = devm_kcalloc(dev, afe->memif_size, sizeof(*afe->memif),
+ 				  GFP_KERNEL);
+-	if (!afe->memif)
+-		return -ENOMEM;
++	if (!afe->memif) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->memif_size; i++) {
+ 		afe->memif[i].data = &memif_data[i];
+@@ -2272,22 +2275,26 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->irqs_size = MT8192_IRQ_NUM;
+ 	afe->irqs = devm_kcalloc(dev, afe->irqs_size, sizeof(*afe->irqs),
+ 				 GFP_KERNEL);
+-	if (!afe->irqs)
+-		return -ENOMEM;
++	if (!afe->irqs) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->irqs_size; i++)
+ 		afe->irqs[i].irq_data = &irq_data[i];
+ 
+ 	/* request irq */
+ 	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id < 0)
+-		return irq_id;
++	if (irq_id < 0) {
++		ret = irq_id;
++		goto err_pm_disable;
++	}
+ 
+ 	ret = devm_request_irq(dev, irq_id, mt8192_afe_irq_handler,
+ 			       IRQF_TRIGGER_NONE, "asys-isr", (void *)afe);
+ 	if (ret) {
+ 		dev_err(dev, "could not request_irq for Afe_ISR_Handle\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* init sub_dais */
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index a1f8c3a026f57..6abfc9d079e7c 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -68,6 +68,7 @@ static int pid[SNDRV_CARDS] = { [0 ... (SNDRV_CARDS-1)] = -1 };
+ static int device_setup[SNDRV_CARDS]; /* device parameter for this card */
+ static bool ignore_ctl_error;
+ static bool autoclock = true;
++static bool lowlatency = true;
+ static char *quirk_alias[SNDRV_CARDS];
+ static char *delayed_register[SNDRV_CARDS];
+ static bool implicit_fb[SNDRV_CARDS];
+@@ -92,6 +93,8 @@ MODULE_PARM_DESC(ignore_ctl_error,
+ 		 "Ignore errors from USB controller for mixer interfaces.");
+ module_param(autoclock, bool, 0444);
+ MODULE_PARM_DESC(autoclock, "Enable auto-clock selection for UAC2 devices (default: yes).");
++module_param(lowlatency, bool, 0444);
++MODULE_PARM_DESC(lowlatency, "Enable low latency playback (default: yes).");
+ module_param_array(quirk_alias, charp, NULL, 0444);
+ MODULE_PARM_DESC(quirk_alias, "Quirk aliases, e.g. 0123abcd:5678beef.");
+ module_param_array(delayed_register, charp, NULL, 0444);
+@@ -599,6 +602,7 @@ static int snd_usb_audio_create(struct usb_interface *intf,
+ 	chip->setup = device_setup[idx];
+ 	chip->generic_implicit_fb = implicit_fb[idx];
+ 	chip->autoclock = autoclock;
++	chip->lowlatency = lowlatency;
+ 	atomic_set(&chip->active, 1); /* avoid autopm during probing */
+ 	atomic_set(&chip->usage_count, 0);
+ 	atomic_set(&chip->shutdown, 0);
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index f5cbf61ac366e..5dc9266180e37 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -617,7 +617,8 @@ static int snd_usb_pcm_prepare(struct snd_pcm_substream *substream)
+ 	/* check whether early start is needed for playback stream */
+ 	subs->early_playback_start =
+ 		subs->direction == SNDRV_PCM_STREAM_PLAYBACK &&
+-		subs->data_endpoint->nominal_queue_size >= subs->buffer_bytes;
++		(!chip->lowlatency ||
++		 (subs->data_endpoint->nominal_queue_size >= subs->buffer_bytes));
+ 
+ 	if (subs->early_playback_start)
+ 		ret = start_endpoints(subs);
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 538831cbd9254..8b70c9ea91b96 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -57,6 +57,7 @@ struct snd_usb_audio {
+ 	bool generic_implicit_fb;	/* from the 'implicit_fb' module param */
+ 	bool autoclock;			/* from the 'autoclock' module param */
+ 
++	bool lowlatency;		/* from the 'lowlatency' module param */
+ 	struct usb_host_interface *ctrl_intf;	/* the audio control interface */
+ 	struct media_device *media_dev;
+ 	struct media_intf_devnode *ctl_intf_media_devnode;
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index f45fa992e01d3..fd67496a947f3 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -111,9 +111,11 @@ static void xbc_show_list(void)
+ 	char key[XBC_KEYLEN_MAX];
+ 	struct xbc_node *leaf;
+ 	const char *val;
++	int ret;
+ 
+ 	xbc_for_each_key_value(leaf, val) {
+-		if (xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX) < 0) {
++		ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
++		if (ret < 0) {
+ 			fprintf(stderr, "Failed to compose key %d\n", ret);
+ 			break;
+ 		}
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index cc48726740ade..9d709b4276655 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -781,6 +781,8 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 		kernel_syms_destroy(&dd);
+ 	}
+ 
++	btf__free(btf);
++
+ 	return 0;
+ }
+ 
+@@ -2002,8 +2004,8 @@ static char *profile_target_name(int tgt_fd)
+ 	struct bpf_prog_info_linear *info_linear;
+ 	struct bpf_func_info *func_info;
+ 	const struct btf_type *t;
++	struct btf *btf = NULL;
+ 	char *name = NULL;
+-	struct btf *btf;
+ 
+ 	info_linear = bpf_program__get_prog_info_linear(
+ 		tgt_fd, 1UL << BPF_PROG_INFO_FUNC_INFO);
+@@ -2027,6 +2029,7 @@ static char *profile_target_name(int tgt_fd)
+ 	}
+ 	name = strdup(btf__name_by_offset(btf, t->name_off));
+ out:
++	btf__free(btf);
+ 	free(info_linear);
+ 	return name;
+ }
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index bf9252c7381e8..5cdff1631608c 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -3249,7 +3249,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index ec14aa725bb00..74c3b73a5fbe8 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -4,8 +4,9 @@
+ RM ?= rm
+ srctree = $(abs_srctree)
+ 
++VERSION_SCRIPT := libbpf.map
+ LIBBPF_VERSION := $(shell \
+-	grep -oE '^LIBBPF_([0-9.]+)' libbpf.map | \
++	grep -oE '^LIBBPF_([0-9.]+)' $(VERSION_SCRIPT) | \
+ 	sort -rV | head -n1 | cut -d'_' -f2)
+ LIBBPF_MAJOR_VERSION := $(firstword $(subst ., ,$(LIBBPF_VERSION)))
+ 
+@@ -110,7 +111,6 @@ SHARED_OBJDIR	:= $(OUTPUT)sharedobjs/
+ STATIC_OBJDIR	:= $(OUTPUT)staticobjs/
+ BPF_IN_SHARED	:= $(SHARED_OBJDIR)libbpf-in.o
+ BPF_IN_STATIC	:= $(STATIC_OBJDIR)libbpf-in.o
+-VERSION_SCRIPT	:= libbpf.map
+ BPF_HELPER_DEFS	:= $(OUTPUT)bpf_helper_defs.h
+ 
+ LIB_TARGET	:= $(addprefix $(OUTPUT),$(LIB_TARGET))
+@@ -163,10 +163,10 @@ $(BPF_HELPER_DEFS): $(srctree)/tools/include/uapi/linux/bpf.h
+ 
+ $(OUTPUT)libbpf.so: $(OUTPUT)libbpf.so.$(LIBBPF_VERSION)
+ 
+-$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED)
++$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED) $(VERSION_SCRIPT)
+ 	$(QUIET_LINK)$(CC) $(LDFLAGS) \
+ 		--shared -Wl,-soname,libbpf.so.$(LIBBPF_MAJOR_VERSION) \
+-		-Wl,--version-script=$(VERSION_SCRIPT) $^ -lelf -lz -o $@
++		-Wl,--version-script=$(VERSION_SCRIPT) $< -lelf -lz -o $@
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so.$(LIBBPF_MAJOR_VERSION)
+ 
+@@ -181,7 +181,7 @@ $(OUTPUT)libbpf.pc:
+ 
+ check: check_abi
+ 
+-check_abi: $(OUTPUT)libbpf.so
++check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ 	@if [ "$(GLOBAL_SYM_COUNT)" != "$(VERSIONED_SYM_COUNT)" ]; then	 \
+ 		echo "Warning: Num of global symbols in $(BPF_IN_SHARED)"	 \
+ 		     "($(GLOBAL_SYM_COUNT)) does NOT match with num of"	 \
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 6f5e2757bb3cf..2234d5c33177a 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4479,6 +4479,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ {
+ 	struct bpf_create_map_attr create_attr;
+ 	struct bpf_map_def *def = &map->def;
++	int err = 0;
+ 
+ 	memset(&create_attr, 0, sizeof(create_attr));
+ 
+@@ -4521,8 +4522,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type)) {
+ 		if (map->inner_map) {
+-			int err;
+-
+ 			err = bpf_object__create_map(obj, map->inner_map, true);
+ 			if (err) {
+ 				pr_warn("map '%s': failed to create inner map: %d\n",
+@@ -4547,8 +4546,8 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ 	if (map->fd < 0 && (create_attr.btf_key_type_id ||
+ 			    create_attr.btf_value_type_id)) {
+ 		char *cp, errmsg[STRERR_BUFSIZE];
+-		int err = -errno;
+ 
++		err = -errno;
+ 		cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+ 		pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
+ 			map->name, cp, err);
+@@ -4560,8 +4559,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ 		map->fd = bpf_create_map_xattr(&create_attr);
+ 	}
+ 
+-	if (map->fd < 0)
+-		return -errno;
++	err = map->fd < 0 ? -errno : 0;
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type) && map->inner_map) {
+ 		if (obj->gen_loader)
+@@ -4570,7 +4568,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ 		zfree(&map->inner_map);
+ 	}
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static int init_map_slots(struct bpf_object *obj, struct bpf_map *map)
+@@ -7588,8 +7586,10 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
+ 	kconfig = OPTS_GET(opts, kconfig, NULL);
+ 	if (kconfig) {
+ 		obj->kconfig = strdup(kconfig);
+-		if (!obj->kconfig)
+-			return ERR_PTR(-ENOMEM);
++		if (!obj->kconfig) {
++			err = -ENOMEM;
++			goto out;
++		}
+ 	}
+ 
+ 	err = bpf_object__elf_init(obj);
+@@ -9515,7 +9515,7 @@ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+ 	struct bpf_prog_info_linear *info_linear;
+ 	struct bpf_prog_info *info;
+ 	struct btf *btf = NULL;
+-	int err = -EINVAL;
++	int err;
+ 
+ 	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
+ 	err = libbpf_get_error(info_linear);
+@@ -9524,6 +9524,8 @@ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+ 			attach_prog_fd);
+ 		return err;
+ 	}
++
++	err = -EINVAL;
+ 	info = &info_linear->info;
+ 	if (!info->btf_id) {
+ 		pr_warn("The target program doesn't have BTF\n");
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index cdecda1ddd36e..17a9844e4fbf8 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -296,7 +296,7 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+ 
+ out:
+ 	free(info_linear);
+-	free(btf);
++	btf__free(btf);
+ 	return err ? -1 : 0;
+ }
+ 
+@@ -486,7 +486,7 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ 	perf_env__fetch_btf(env, btf_id, btf);
+ 
+ out:
+-	free(btf);
++	btf__free(btf);
+ 	close(fd);
+ }
+ 
+diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
+index 8150e03367bba..beca55129b0b2 100644
+--- a/tools/perf/util/bpf_counter.c
++++ b/tools/perf/util/bpf_counter.c
+@@ -64,8 +64,8 @@ static char *bpf_target_prog_name(int tgt_fd)
+ 	struct bpf_prog_info_linear *info_linear;
+ 	struct bpf_func_info *func_info;
+ 	const struct btf_type *t;
++	struct btf *btf = NULL;
+ 	char *name = NULL;
+-	struct btf *btf;
+ 
+ 	info_linear = bpf_program__get_prog_info_linear(
+ 		tgt_fd, 1UL << BPF_PROG_INFO_FUNC_INFO);
+@@ -89,6 +89,7 @@ static char *bpf_target_prog_name(int tgt_fd)
+ 	}
+ 	name = strdup(btf__name_by_offset(btf, t->name_off));
+ out:
++	btf__free(btf);
+ 	free(info_linear);
+ 	return name;
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 857e3f26086fe..68e415f4d33cd 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -4386,6 +4386,7 @@ skip:
+ 	fprintf(stderr, "OK");
+ 
+ done:
++	btf__free(btf);
+ 	free(func_info);
+ 	bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+index 2e4775c354149..92267abb462fc 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
++++ b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+@@ -121,7 +121,7 @@ static int dump_tcp_sock(struct seq_file *seq, struct tcp_sock *tp,
+ 	}
+ 
+ 	BPF_SEQ_PRINTF(seq, "%4d: %08X:%04X %08X:%04X ",
+-		       seq_num, src, srcp, destp, destp);
++		       seq_num, src, srcp, dest, destp);
+ 	BPF_SEQ_PRINTF(seq, "%02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d ",
+ 		       state,
+ 		       tp->write_seq - tp->snd_una, rx_queue,
+diff --git a/tools/testing/selftests/bpf/progs/test_core_autosize.c b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+index 44f5aa2e8956f..9a7829c5e4a72 100644
+--- a/tools/testing/selftests/bpf/progs/test_core_autosize.c
++++ b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+@@ -125,6 +125,16 @@ int handle_downsize(void *ctx)
+ 	return 0;
+ }
+ 
++#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
++#define bpf_core_read_int bpf_core_read
++#else
++#define bpf_core_read_int(dst, sz, src) ({ \
++	/* Prevent "subtraction from stack pointer prohibited" */ \
++	volatile long __off = sizeof(*dst) - (sz); \
++	bpf_core_read((char *)(dst) + __off, sz, src); \
++})
++#endif
++
+ SEC("raw_tp/sys_enter")
+ int handle_probed(void *ctx)
+ {
+@@ -132,23 +142,23 @@ int handle_probed(void *ctx)
+ 	__u64 tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
+ 	ptr_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val1), &in->val1);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val1), &in->val1);
+ 	val1_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val2), &in->val2);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val2), &in->val2);
+ 	val2_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val3), &in->val3);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val3), &in->val3);
+ 	val3_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val4), &in->val4);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val4), &in->val4);
+ 	val4_probed = tmp;
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 30cbf5d98f7dc..abdfc41f7685a 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -764,8 +764,8 @@ static void test_sockmap(unsigned int tasks, void *data)
+ 	udp = socket(AF_INET, SOCK_DGRAM, 0);
+ 	i = 0;
+ 	err = bpf_map_update_elem(fd, &i, &udp, BPF_ANY);
+-	if (!err) {
+-		printf("Failed socket SOCK_DGRAM allowed '%i:%i'\n",
++	if (err) {
++		printf("Failed socket update SOCK_DGRAM '%i:%i'\n",
+ 		       i, udp);
+ 		goto out_sockmap;
+ 	}
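
Of the fixes above, the fsl_rpmsg hunk is a good illustration of a common probe-time pitfall: devm_clk_get() returns an ERR_PTR for every failure, so the old "if (IS_ERR(clk)) clk = NULL;" fallback also swallowed real errors such as -EPROBE_DEFER. devm_clk_get_optional() instead returns NULL only when the clock is genuinely absent. Below is a minimal sketch of the idiom, assuming a hypothetical driver — only the clk API calls themselves are real:

/*
 * Minimal sketch of the optional-clock idiom adopted by the fsl_rpmsg
 * hunk above. example_clk_probe() and its device are hypothetical;
 * the clk API calls are the real kernel interfaces.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_clk_probe(struct device *dev)
{
	struct clk *ipg;

	/*
	 * devm_clk_get_optional() returns NULL when the clock simply is
	 * not described for this device, and an ERR_PTR only for real
	 * failures (e.g. -EPROBE_DEFER), which the old
	 * "if (IS_ERR(clk)) clk = NULL;" pattern silently discarded.
	 */
	ipg = devm_clk_get_optional(dev, "ipg");
	if (IS_ERR(ipg))
		return PTR_ERR(ipg);

	/* The clk framework treats a NULL clk as a no-op dummy, so the
	 * absent-clock path needs no special casing here. */
	return clk_prepare_enable(ipg);
}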


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-16 11:03 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-16 11:03 UTC (permalink / raw
  To: gentoo-commits

commit:     33ff57f825f3237ce6b555afb0a55a7bf9a10eeb
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 16 11:03:00 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 16 11:03:00 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=33ff57f8

Linux patch 5.14.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1004_linux-5.14.5.patch | 56 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/0000_README b/0000_README
index 79faaf3..3b101ac 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1003_linux-5.14.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.4
 
+Patch:  1004_linux-5.14.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.14.5.patch b/1004_linux-5.14.5.patch
new file mode 100644
index 0000000..5fbff40
--- /dev/null
+++ b/1004_linux-5.14.5.patch
@@ -0,0 +1,56 @@
+diff --git a/Makefile b/Makefile
+index e16a1a80074cd..0eaa5623f4060 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 81b9686a20799..5117cb5b56561 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,9 +25,7 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
+-#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
+-#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -126,13 +124,10 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow / underflow */
+-	if (ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow */
++	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
+-	if (ts->tv_sec <= KTIME_SEC_MIN)
+-		return KTIME_MIN;
+-
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index a002685f688d6..517be7fd175ef 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,6 +1346,8 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
++		if (!*newval)
++			return;
+ 		*newval += now;
+ 	}
+ 
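
For context on the time64.h hunk: the 5.14.5 change collapses the old two-sided KTIME_SEC_MIN/KTIME_SEC_MAX range check into a single unsigned comparison, because casting a negative tv_sec to unsigned long long yields a value far above KTIME_SEC_MAX, so one branch clamps both directions. A standalone userspace demonstration of the clamp follows — constants copied from the patch, but this is illustrative code, not the kernel source:

/* Standalone demo of the single-comparison clamp used by the time64.h
 * hunk above; plain userspace C, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC	1000000000LL
#define KTIME_MAX	((int64_t)~((uint64_t)1 << 63))
#define KTIME_SEC_MAX	(KTIME_MAX / NSEC_PER_SEC)

static int64_t demo_timespec64_to_ns(int64_t tv_sec, int64_t tv_nsec)
{
	/* A negative tv_sec wraps to a huge unsigned value, so this one
	 * test catches both overflow and underflow inputs, matching the
	 * hunk that removed the separate KTIME_SEC_MIN check. */
	if ((unsigned long long)tv_sec >= KTIME_SEC_MAX)
		return KTIME_MAX;
	return tv_sec * NSEC_PER_SEC + tv_nsec;
}

int main(void)
{
	printf("%lld\n", (long long)demo_timespec64_to_ns(2, 5));  /* 2000000005 */
	printf("%lld\n", (long long)demo_timespec64_to_ns(-1, 0)); /* clamped to KTIME_MAX */
	return 0;
}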


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-17 12:40 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-17 12:40 UTC (permalink / raw
  To: gentoo-commits

commit:     134deb6dddfa446971fcc458a2607aa15f3ff397
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:39:51 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:39:51 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=134deb6d

Update CPU Opt Patch 2021-09-14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 76 insertions(+), 82 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index e37528f..b9e8ebb 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
+From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Sun, 6 Jun 2021 09:41:36 -0400
-Subject: [PATCH] more uarches for kernel 5.8+
+Date: Tue, 14 Sep 2021 15:35:34 -0400
+Subject: [PATCH] more uarches for kernel 5.15+
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.8
+linux version >=5.15
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  47 ++++-
+ arch/x86/Makefile               |  40 +++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 428 insertions(+), 17 deletions(-)
+ 3 files changed, 424 insertions(+), 14 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..8acf6519d279 100644
+index 814fe0d349b0..61f0d7757499 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
+
+
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
- 
+
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
- 
+
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
- 
+
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
- 
+
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
- 
+
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
- 
+
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
- 
+
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +	  Enables -march=native
 +
  endchoice
- 
+
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
+
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
- 
+
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
- 
+
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
- 
+
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
- 
+
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
- 
+
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,65 +534,58 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
- 
+
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 78faf9c7e3ae..ee0cd507af8b 100644
+index 7488cfbbd2f6..01876b6fb8e1 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -114,11 +114,48 @@ else
+@@ -119,8 +119,44 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
--
--        cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-+        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
-+
-+        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
-+        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
-+        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
-+        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
-+        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
-+        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
-+        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
-+        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
-+        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
-+        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
-+        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
-+        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
-+        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
-+        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
-+        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
-+        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
-+        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
-+        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
-+        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
-+        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
-+        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
-+        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         cflags-$(CONFIG_MK8)		+= -march=k8
+         cflags-$(CONFIG_MPSC)		+= -march=nocona
+-        cflags-$(CONFIG_MCORE2)		+= -march=core2
+-        cflags-$(CONFIG_MATOM)		+= -march=atom
++        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
++        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
++        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
++        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
++        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
++        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
++        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
++        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
++        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
++        cflags-$(CONFIG_MZEN) 		+= -march=znver1
++        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
++        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
++        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
++        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
++        cflags-$(CONFIG_MCORE2) 	+= -march=core2
++        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
++        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
++        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
++        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
++        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
++        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
++        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
++        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
++        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
++        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
++        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
++        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
++        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
++        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
++        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
++        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
++        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
++        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
++        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
+         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
          KBUILD_CFLAGS += $(cflags-y)
- 
+
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -677,6 +670,7 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
--- 
-2.31.1
+--
+2.33.0
+
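
Both revisions of this patch lean on the same kbuild idiom: cflags-$(CONFIG_FOO) expands to cflags-y for enabled options and to the never-read variable cflags- otherwise, and only cflags-y is folded into KBUILD_CFLAGS. The 5.15+ variant also drops the $(call cc-option,...) probes, on the assumption that every supported toolchain accepts these -march values directly. A minimal standalone Makefile sketch of the accumulation — CONFIG values are hard-coded here for illustration, whereas kbuild normally imports them from .config:

# Standalone sketch of the cflags-$(CONFIG_*) idiom the patch extends
# in arch/x86/Makefile. Hypothetical hard-coded CONFIG values.
CONFIG_MZEN3 := y
CONFIG_MK8   :=

# Enabled options expand to "cflags-y += ..."; disabled ones expand to
# "cflags- += ..." and are never collected.
cflags-$(CONFIG_MZEN3) += -march=znver3
cflags-$(CONFIG_MK8)   += -march=k8

KBUILD_CFLAGS += $(cflags-y)

all:
	@echo "KBUILD_CFLAGS =$(KBUILD_CFLAGS)"

Running "make" on this sketch prints "KBUILD_CFLAGS = -march=znver3": the disabled MK8 flag never reaches the compiler.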
 


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-17 12:48 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-17 12:48 UTC (permalink / raw
  To: gentoo-commits

commit:     85d76d624355214078ba879920b222f449b5c858
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:48:16 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:48:16 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=85d76d62

Add correct CPU OPT Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 82 insertions(+), 76 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index b9e8ebb..d437e1a 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Tue, 14 Sep 2021 15:35:34 -0400
-Subject: [PATCH] more uarches for kernel 5.15+
+Date: Sun, 6 Jun 2021 09:41:36 -0400
+Subject: [PATCH] more uarches for kernel 5.8-5.14
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.15
+linux version 5.8-5.14
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  40 +++-
+ arch/x86/Makefile               |  47 ++++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 424 insertions(+), 14 deletions(-)
+ 3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..61f0d7757499 100644
+index 814fe0d349b0..8acf6519d279 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
-
-
+ 
+ 
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
-
+ 
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
-
+ 
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
-
+ 
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
+ 
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
-
+ 
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
-
+ 
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
-
+ 
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
-
+ 
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..61f0d7757499 100644
 +	  Enables -march=native
 +
  endchoice
-
+ 
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
+ 
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
-
+ 
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
-
+ 
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
-
+ 
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
-
+ 
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
-
+ 
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,58 +534,65 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
-
+ 
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 7488cfbbd2f6..01876b6fb8e1 100644
+index 78faf9c7e3ae..ee0cd507af8b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -119,8 +119,44 @@ else
+@@ -114,11 +114,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8)		+= -march=k8
-         cflags-$(CONFIG_MPSC)		+= -march=nocona
--        cflags-$(CONFIG_MCORE2)		+= -march=core2
--        cflags-$(CONFIG_MATOM)		+= -march=atom
-+        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
-+        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
-+        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
-+        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
-+        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
-+        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
-+        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
-+        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
-+        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
-+        cflags-$(CONFIG_MZEN) 		+= -march=znver1
-+        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
-+        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
-+        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
-+        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
-+        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
-+        cflags-$(CONFIG_MCORE2) 	+= -march=core2
-+        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
-+        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
-+        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
-+        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
-+        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
-+        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
-+        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
-+        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
-+        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
-+        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
-+        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
-+        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
-+        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
-+        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
-+        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
-+        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
-+        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
-+        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
-+        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
-+        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
-         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
++
++        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
++        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
++        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
++        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
++        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
++        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
++        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
++        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
-
+ 
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -670,7 +677,6 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
---
-2.33.0
-
+-- 
+2.31.1
 



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-18 16:06 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-18 16:06 UTC
  To: gentoo-commits

commit:     01156e7380d93bbadbe80280f8d16087e58272e3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 18 16:06:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 18 16:06:25 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=01156e73

Linux patch 5.14.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1005_linux-5.14.6.patch | 19126 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 19130 insertions(+)
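
0000_README indexes the series: each incremental release gets a Patch/From/Desc
stanza, and the numbered files are meant to apply in lexical order on top of
the base tree, which is why 1005_linux-5.14.6.patch slots in directly after
1004. A hypothetical helper (illustrative only, not shipped in linux-patches)
that dry-runs the series in that order:

SRC ?= linux-5.14
# $(sort ...) guarantees lexical order, matching the 0000_README listing.
PATCHES := $(sort $(wildcard [0-9]*.patch))

.PHONY: check
check:
	@for p in $(PATCHES); do \
		echo "checking $$p"; \
		patch -p1 -d $(SRC) --dry-run < $$p || exit 1; \
	done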

diff --git a/0000_README b/0000_README
index 3b101ac..df8a957 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1004_linux-5.14.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.5
 
+Patch:  1005_linux-5.14.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.14.6.patch b/1005_linux-5.14.6.patch
new file mode 100644
index 0000000..657c7b3
--- /dev/null
+++ b/1005_linux-5.14.6.patch
@@ -0,0 +1,19126 @@
+diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
+index 9c2be821c2254..922c23bb4372a 100644
+--- a/Documentation/admin-guide/devices.txt
++++ b/Documentation/admin-guide/devices.txt
+@@ -2993,10 +2993,10 @@
+ 		65 = /dev/infiniband/issm1     Second InfiniBand IsSM device
+ 		  ...
+ 		127 = /dev/infiniband/issm63    63rd InfiniBand IsSM device
+-		128 = /dev/infiniband/uverbs0   First InfiniBand verbs device
+-		129 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
++		192 = /dev/infiniband/uverbs0   First InfiniBand verbs device
++		193 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
+ 		  ...
+-		159 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
++		223 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
+ 
+  232 char	Biometric Devices
+ 		0 = /dev/biometric/sensor0/fingerprint	first fingerprint sensor on first device
+diff --git a/Documentation/devicetree/bindings/display/panel/samsung,lms397kf04.yaml b/Documentation/devicetree/bindings/display/panel/samsung,lms397kf04.yaml
+index 4cb75a5f2e3a2..cd62968426fb5 100644
+--- a/Documentation/devicetree/bindings/display/panel/samsung,lms397kf04.yaml
++++ b/Documentation/devicetree/bindings/display/panel/samsung,lms397kf04.yaml
+@@ -33,8 +33,11 @@ properties:
+ 
+   backlight: true
+ 
++  spi-cpha: true
++
++  spi-cpol: true
++
+   spi-max-frequency:
+-    $ref: /schemas/types.yaml#/definitions/uint32
+     description: inherited as a SPI client node, the datasheet specifies
+       maximum 300 ns minimum cycle which gives around 3 MHz max frequency
+     maximum: 3000000
+@@ -44,6 +47,9 @@ properties:
+ required:
+   - compatible
+   - reg
++  - spi-cpha
++  - spi-cpol
++  - port
+ 
+ additionalProperties: false
+ 
+@@ -52,15 +58,23 @@ examples:
+     #include <dt-bindings/gpio/gpio.h>
+ 
+     spi {
++      compatible = "spi-gpio";
++      sck-gpios = <&gpio 0 GPIO_ACTIVE_HIGH>;
++      miso-gpios = <&gpio 1 GPIO_ACTIVE_HIGH>;
++      mosi-gpios = <&gpio 2 GPIO_ACTIVE_HIGH>;
++      cs-gpios = <&gpio 3 GPIO_ACTIVE_HIGH>;
++      num-chipselects = <1>;
+       #address-cells = <1>;
+       #size-cells = <0>;
+       panel@0 {
+         compatible = "samsung,lms397kf04";
+         spi-max-frequency = <3000000>;
++        spi-cpha;
++        spi-cpol;
+         reg = <0>;
+         vci-supply = <&lcd_3v0_reg>;
+         vccio-supply = <&lcd_1v8_reg>;
+-        reset-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
++        reset-gpios = <&gpio 4 GPIO_ACTIVE_LOW>;
+         backlight = <&ktd259>;
+ 
+         port {
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+index 38dc56a577604..ecec514b31550 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+@@ -43,19 +43,19 @@ group emmc_nb
+ 
+ group pwm0
+  - pin 11 (GPIO1-11)
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm1
+  - pin 12
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm2
+  - pin 13
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm3
+  - pin 14
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pmic1
+  - pin 7
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index ff9e7cc97c65a..b5285599d9725 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -185,6 +185,7 @@ fault_type=%d		 Support configuring fault injection type, should be
+ 			 FAULT_KVMALLOC		  0x000000002
+ 			 FAULT_PAGE_ALLOC	  0x000000004
+ 			 FAULT_PAGE_GET		  0x000000008
++			 FAULT_ALLOC_BIO	  0x000000010 (obsolete)
+ 			 FAULT_ALLOC_NID	  0x000000020
+ 			 FAULT_ORPHAN		  0x000000040
+ 			 FAULT_BLOCK		  0x000000080
+diff --git a/Makefile b/Makefile
+index 0eaa5623f4060..f9c8bbf8cf71e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -404,6 +404,11 @@ ifeq ($(ARCH),sparc64)
+        SRCARCH := sparc
+ endif
+ 
++# Additional ARCH settings for parisc
++ifeq ($(ARCH),parisc64)
++       SRCARCH := parisc
++endif
++
+ export cross_compiling :=
+ ifneq ($(SRCARCH),$(SUBARCH))
+ cross_compiling := 1
+@@ -803,6 +808,8 @@ else
+ # Disabled for clang while comment to attribute conversion happens and
+ # https://github.com/ClangBuiltLinux/linux/issues/636 is discussed.
+ KBUILD_CFLAGS += $(call cc-option,-Wimplicit-fallthrough=5,)
++# gcc inanely warns about local variables called 'main'
++KBUILD_CFLAGS += -Wno-main
+ endif
+ 
+ # These warnings generated too much noise in a regular build.
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index 9d91ae1091b0b..91265e7ff672f 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -85,6 +85,8 @@ compress-$(CONFIG_KERNEL_LZ4)  = lz4
+ libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o
+ 
+ ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
++CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN}
++CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280
+ OBJS	+= $(libfdt_objs) atags_to_fdt.o
+ endif
+ ifeq ($(CONFIG_USE_OF),y)
+diff --git a/arch/arm/boot/dts/at91-kizbox3_common.dtsi b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+index c4b3750495da8..abe27adfa4d65 100644
+--- a/arch/arm/boot/dts/at91-kizbox3_common.dtsi
++++ b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+@@ -336,7 +336,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index ebbc9b23aef1c..b1068cca42287 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -662,7 +662,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	status = "okay";
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index a9e6fee55a2a8..8034e5dacc808 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -138,7 +138,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+index ff83967fd0082..c145c4e5ef582 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+@@ -205,7 +205,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index bd64721fa23ca..34faca597c352 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -693,7 +693,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index dfd150eb0fd86..3f972a4086c37 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -203,7 +203,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 
+ 				input@0 {
+ 					reg = <0>;
+diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+index 509c732a0d8b4..627b7bf88d83b 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+@@ -347,7 +347,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index 5a5fa6190a528..37d0cffea99c5 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -70,6 +70,12 @@
+ 		clock-frequency = <11289600>;
+ 	};
+ 
++	achc_24M: achc-clock {
++		compatible = "fixed-clock";
++		#clock-cells = <0>;
++		clock-frequency = <24000000>;
++	};
++
+ 	sgtlsound: sound {
+ 		compatible = "fsl,imx53-cpuvo-sgtl5000",
+ 			     "fsl,imx-audio-sgtl5000";
+@@ -314,16 +320,13 @@
+ 		    &gpio4 12 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+-	spidev0: spi@0 {
+-		compatible = "ge,achc";
+-		reg = <0>;
+-		spi-max-frequency = <1000000>;
+-	};
+-
+-	spidev1: spi@1 {
+-		compatible = "ge,achc";
+-		reg = <1>;
+-		spi-max-frequency = <1000000>;
++	spidev0: spi@1 {
++		compatible = "ge,achc", "nxp,kinetis-k20";
++		reg = <1>, <0>;
++		vdd-supply = <&reg_3v3>;
++		vdda-supply = <&reg_3v3>;
++		clocks = <&achc_24M>;
++		reset-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>;
+ 	};
+ 
+ 	gpioxra0: gpio@2 {
+diff --git a/arch/arm/boot/dts/intel-ixp42x-linksys-nslu2.dts b/arch/arm/boot/dts/intel-ixp42x-linksys-nslu2.dts
+index 5b8dcc19deeef..b9a5268fe7ad6 100644
+--- a/arch/arm/boot/dts/intel-ixp42x-linksys-nslu2.dts
++++ b/arch/arm/boot/dts/intel-ixp42x-linksys-nslu2.dts
+@@ -124,20 +124,20 @@
+ 			 */
+ 			interrupt-map =
+ 			/* IDSEL 1 */
+-			<0x0800 0 0 1 &gpio0 11 3>, /* INT A on slot 1 is irq 11 */
+-			<0x0800 0 0 2 &gpio0 10 3>, /* INT B on slot 1 is irq 10 */
+-			<0x0800 0 0 3 &gpio0 9  3>, /* INT C on slot 1 is irq 9 */
+-			<0x0800 0 0 4 &gpio0 8  3>, /* INT D on slot 1 is irq 8 */
++			<0x0800 0 0 1 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 1 is irq 11 */
++			<0x0800 0 0 2 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 1 is irq 10 */
++			<0x0800 0 0 3 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 1 is irq 9 */
++			<0x0800 0 0 4 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 1 is irq 8 */
+ 			/* IDSEL 2 */
+-			<0x1000 0 0 1 &gpio0 10 3>, /* INT A on slot 2 is irq 10 */
+-			<0x1000 0 0 2 &gpio0 9  3>, /* INT B on slot 2 is irq 9 */
+-			<0x1000 0 0 3 &gpio0 11 3>, /* INT C on slot 2 is irq 11 */
+-			<0x1000 0 0 4 &gpio0 8  3>, /* INT D on slot 2 is irq 8 */
++			<0x1000 0 0 1 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 2 is irq 10 */
++			<0x1000 0 0 2 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 2 is irq 9 */
++			<0x1000 0 0 3 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 2 is irq 11 */
++			<0x1000 0 0 4 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 2 is irq 8 */
+ 			/* IDSEL 3 */
+-			<0x1800 0 0 1 &gpio0 9  3>, /* INT A on slot 3 is irq 9 */
+-			<0x1800 0 0 2 &gpio0 11 3>, /* INT B on slot 3 is irq 11 */
+-			<0x1800 0 0 3 &gpio0 10 3>, /* INT C on slot 3 is irq 10 */
+-			<0x1800 0 0 4 &gpio0 8  3>; /* INT D on slot 3 is irq 8 */
++			<0x1800 0 0 1 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 3 is irq 9 */
++			<0x1800 0 0 2 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 3 is irq 11 */
++			<0x1800 0 0 3 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 3 is irq 10 */
++			<0x1800 0 0 4 &gpio0 8  IRQ_TYPE_LEVEL_LOW>; /* INT D on slot 3 is irq 8 */
+ 		};
+ 
+ 		ethernet@c8009000 {
+diff --git a/arch/arm/boot/dts/intel-ixp43x-gateworks-gw2358.dts b/arch/arm/boot/dts/intel-ixp43x-gateworks-gw2358.dts
+index 60a1228a970fc..f5fe309f7762d 100644
+--- a/arch/arm/boot/dts/intel-ixp43x-gateworks-gw2358.dts
++++ b/arch/arm/boot/dts/intel-ixp43x-gateworks-gw2358.dts
+@@ -108,35 +108,35 @@
+ 			 */
+ 			interrupt-map =
+ 			/* IDSEL 1 */
+-			<0x0800 0 0 1 &gpio0 11 3>, /* INT A on slot 1 is irq 11 */
+-			<0x0800 0 0 2 &gpio0 10 3>, /* INT B on slot 1 is irq 10 */
+-			<0x0800 0 0 3 &gpio0 9  3>, /* INT C on slot 1 is irq 9 */
+-			<0x0800 0 0 4 &gpio0 8  3>, /* INT D on slot 1 is irq 8 */
++			<0x0800 0 0 1 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 1 is irq 11 */
++			<0x0800 0 0 2 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 1 is irq 10 */
++			<0x0800 0 0 3 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 1 is irq 9 */
++			<0x0800 0 0 4 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 1 is irq 8 */
+ 			/* IDSEL 2 */
+-			<0x1000 0 0 1 &gpio0 10 3>, /* INT A on slot 2 is irq 10 */
+-			<0x1000 0 0 2 &gpio0 9  3>, /* INT B on slot 2 is irq 9 */
+-			<0x1000 0 0 3 &gpio0 8  3>, /* INT C on slot 2 is irq 8 */
+-			<0x1000 0 0 4 &gpio0 11 3>, /* INT D on slot 2 is irq 11 */
++			<0x1000 0 0 1 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 2 is irq 10 */
++			<0x1000 0 0 2 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 2 is irq 9 */
++			<0x1000 0 0 3 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 2 is irq 8 */
++			<0x1000 0 0 4 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 2 is irq 11 */
+ 			/* IDSEL 3 */
+-			<0x1800 0 0 1 &gpio0 9  3>, /* INT A on slot 3 is irq 9 */
+-			<0x1800 0 0 2 &gpio0 8  3>, /* INT B on slot 3 is irq 8 */
+-			<0x1800 0 0 3 &gpio0 11 3>, /* INT C on slot 3 is irq 11 */
+-			<0x1800 0 0 4 &gpio0 10 3>, /* INT D on slot 3 is irq 10 */
++			<0x1800 0 0 1 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 3 is irq 9 */
++			<0x1800 0 0 2 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 3 is irq 8 */
++			<0x1800 0 0 3 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 3 is irq 11 */
++			<0x1800 0 0 4 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 3 is irq 10 */
+ 			/* IDSEL 4 */
+-			<0x2000 0 0 1 &gpio0 8  3>, /* INT A on slot 3 is irq 8 */
+-			<0x2000 0 0 2 &gpio0 11 3>, /* INT B on slot 3 is irq 11 */
+-			<0x2000 0 0 3 &gpio0 10 3>, /* INT C on slot 3 is irq 10 */
+-			<0x2000 0 0 4 &gpio0 9  3>, /* INT D on slot 3 is irq 9 */
++			<0x2000 0 0 1 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 3 is irq 8 */
++			<0x2000 0 0 2 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 3 is irq 11 */
++			<0x2000 0 0 3 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 3 is irq 10 */
++			<0x2000 0 0 4 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 3 is irq 9 */
+ 			/* IDSEL 6 */
+-			<0x3000 0 0 1 &gpio0 10 3>, /* INT A on slot 3 is irq 10 */
+-			<0x3000 0 0 2 &gpio0 9  3>, /* INT B on slot 3 is irq 9 */
+-			<0x3000 0 0 3 &gpio0 8  3>, /* INT C on slot 3 is irq 8 */
+-			<0x3000 0 0 4 &gpio0 11 3>, /* INT D on slot 3 is irq 11 */
++			<0x3000 0 0 1 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 3 is irq 10 */
++			<0x3000 0 0 2 &gpio0 9  IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 3 is irq 9 */
++			<0x3000 0 0 3 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 3 is irq 8 */
++			<0x3000 0 0 4 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT D on slot 3 is irq 11 */
+ 			/* IDSEL 15 */
+-			<0x7800 0 0 1 &gpio0 8  3>, /* INT A on slot 3 is irq 8 */
+-			<0x7800 0 0 2 &gpio0 11 3>, /* INT B on slot 3 is irq 11 */
+-			<0x7800 0 0 3 &gpio0 10 3>, /* INT C on slot 3 is irq 10 */
+-			<0x7800 0 0 4 &gpio0 9  3>; /* INT D on slot 3 is irq 9 */
++			<0x7800 0 0 1 &gpio0 8  IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 3 is irq 8 */
++			<0x7800 0 0 2 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT B on slot 3 is irq 11 */
++			<0x7800 0 0 3 &gpio0 10 IRQ_TYPE_LEVEL_LOW>, /* INT C on slot 3 is irq 10 */
++			<0x7800 0 0 4 &gpio0 9  IRQ_TYPE_LEVEL_LOW>; /* INT D on slot 3 is irq 9 */
+ 		};
+ 
+ 		ethernet@c800a000 {
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 2687c4e890ba8..e36d590e83732 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1262,9 +1262,9 @@
+ 				<&mmcc DSI1_BYTE_CLK>,
+ 				<&mmcc DSI_PIXEL_CLK>,
+ 				<&mmcc DSI1_ESC_CLK>;
+-			clock-names = "iface_clk", "bus_clk", "core_mmss_clk",
+-					"src_clk", "byte_clk", "pixel_clk",
+-					"core_clk";
++			clock-names = "iface", "bus", "core_mmss",
++					"src", "byte", "pixel",
++					"core";
+ 
+ 			assigned-clocks = <&mmcc DSI1_BYTE_SRC>,
+ 					<&mmcc DSI1_ESC_SRC>,
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 6cf1c8b4c6e28..c9577ba2973d3 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -172,15 +172,15 @@
+ 			sgtl5000_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_tx_endpoint>;
++				bitclock-master = <&sgtl5000_tx_endpoint>;
+ 			};
+ 
+ 			sgtl5000_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_rx_endpoint>;
++				bitclock-master = <&sgtl5000_rx_endpoint>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 64dca5b7f748d..6885948f3024e 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -220,8 +220,8 @@
+ &i2c4 {
+ 	hdmi-transmitter@3d {
+ 		compatible = "adi,adv7513";
+-		reg = <0x3d>, <0x2d>, <0x4d>, <0x5d>;
+-		reg-names = "main", "cec", "edid", "packet";
++		reg = <0x3d>, <0x4d>, <0x2d>, <0x5d>;
++		reg-names = "main", "edid", "cec", "packet";
+ 		clocks = <&cec_clock>;
+ 		clock-names = "cec";
+ 
+@@ -239,8 +239,6 @@
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+-		adi,input-style = <1>;
+-		adi,input-justification = "evenly";
+ 
+ 		ports {
+ 			#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 59f18846cf5d0..586aac8a998c0 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -220,15 +220,15 @@
+ 			cs42l51_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_tx_endpoint>;
++				bitclock-master = <&cs42l51_tx_endpoint>;
+ 			};
+ 
+ 			cs42l51_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_rx_endpoint>;
++				bitclock-master = <&cs42l51_rx_endpoint>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 1976c383912aa..05bd0add258c6 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -719,7 +719,6 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus1>;
+ 	};
+ 
+ 	usb@c5008000 {
+@@ -731,7 +730,7 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus3>;
++		vbus-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+ 	brcm_wifi_pwrseq: wifi-pwrseq {
+@@ -991,28 +990,6 @@
+ 		vin-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+-	vdd_vbus1: regulator@4 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb1_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 0) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+-	vdd_vbus3: regulator@5 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb3_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 3) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+ 	sound {
+ 		compatible = "nvidia,tegra-audio-wm8903-picasso",
+ 			     "nvidia,tegra-audio-wm8903";
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index 95e6bccdb4f6e..dd4d506683de7 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -185,8 +185,9 @@
+ 				nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+ 					"cdev1", "cdev2", "dap1", "dtb", "gma",
+ 					"gmb", "gmc", "gmd", "gme", "gpu7",
+-					"gpv", "i2cp", "pta", "rm", "slxa",
+-					"slxk", "spia", "spib", "uac";
++					"gpv", "i2cp", "irrx", "irtx", "pta",
++					"rm", "slxa", "slxk", "spia", "spib",
++					"uac";
+ 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -211,7 +212,7 @@
+ 			conf_ddc {
+ 				nvidia,pins = "ddc", "dta", "dtd", "kbca",
+ 					"kbcb", "kbcc", "kbcd", "kbce", "kbcf",
+-					"sdc";
++					"sdc", "uad", "uca";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -221,10 +222,9 @@
+ 					"lvp0", "owc", "sdb";
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+-			conf_irrx {
+-				nvidia,pins = "irrx", "irtx", "sdd", "spic",
+-					"spie", "spih", "uaa", "uab", "uad",
+-					"uca", "ucb";
++			conf_sdd {
++				nvidia,pins = "sdd", "spic", "spie", "spih",
++					"uaa", "uab", "ucb";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+index be81330db14f6..02641191682e0 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+@@ -32,14 +32,14 @@
+ 		};
+ 	};
+ 
+-	reg_vcc3v3: vcc3v3 {
++	reg_vcc3v3: regulator-vcc3v3 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vcc3v3";
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+-	reg_vdd_cpu_gpu: vdd-cpu-gpu {
++	reg_vdd_cpu_gpu: regulator-vdd-cpu-gpu {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vdd-cpu-gpu";
+ 		regulator-min-microvolt = <1135000>;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+index db3d303093f61..6d22efbd645cb 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+@@ -83,15 +83,9 @@
+ 			};
+ 
+ 			eeprom@52 {
+-				compatible = "atmel,24c512";
++				compatible = "onnn,cat24c04", "atmel,24c04";
+ 				reg = <0x52>;
+ 			};
+-
+-			eeprom@53 {
+-				compatible = "atmel,24c512";
+-				reg = <0x53>;
+-			};
+-
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+index 60acdf0b689ee..7025aad8ae897 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+@@ -59,14 +59,9 @@
+ 	};
+ 
+ 	eeprom@52 {
+-		compatible = "atmel,24c512";
++		compatible = "onnn,cat24c05", "atmel,24c04";
+ 		reg = <0x52>;
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c512";
+-		reg = <0x53>;
+-	};
+ };
+ 
+ &i2c3 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
+index c769fadbd008f..00f86cada30d2 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
+@@ -278,70 +278,86 @@
+ 
+ 	pmic@69 {
+ 		compatible = "mps,mp5416";
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&pinctrl_pmic>;
+ 		reg = <0x69>;
+ 
+ 		regulators {
++			/* vdd_0p95: DRAM/GPU/VPU */
+ 			buck1 {
+-				regulator-name = "vdd_0p95";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck1";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <1000000>;
+-				regulator-max-microamp = <2500000>;
++				regulator-min-microamp  = <3800000>;
++				regulator-max-microamp  = <6800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_soc */
+ 			buck2 {
+-				regulator-name = "vdd_soc";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck2";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <900000>;
+-				regulator-max-microamp = <1000000>;
++				regulator-min-microamp  = <2200000>;
++				regulator-max-microamp  = <5200000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_arm */
+ 			buck3_reg: buck3 {
+-				regulator-name = "vdd_arm";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck3";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <1000000>;
+-				regulator-max-microamp = <2200000>;
+-				regulator-boot-on;
++				regulator-min-microamp  = <3800000>;
++				regulator-max-microamp  = <6800000>;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_1p8 */
+ 			buck4 {
+-				regulator-name = "vdd_1p8";
++				regulator-name = "buck4";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+-				regulator-max-microamp = <500000>;
++				regulator-min-microamp  = <2200000>;
++				regulator-max-microamp  = <5200000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* nvcc_snvs_1p8 */
+ 			ldo1 {
+-				regulator-name = "nvcc_snvs_1p8";
++				regulator-name = "ldo1";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+-				regulator-max-microamp = <300000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_snvs_0p8 */
+ 			ldo2 {
+-				regulator-name = "vdd_snvs_0p8";
++				regulator-name = "ldo2";
+ 				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_0p9 */
+ 			ldo3 {
+-				regulator-name = "vdd_0p95";
+-				regulator-min-microvolt = <800000>;
+-				regulator-max-microvolt = <800000>;
++				regulator-name = "ldo3";
++				regulator-min-microvolt = <900000>;
++				regulator-max-microvolt = <900000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_1p8 */
+ 			ldo4 {
+-				regulator-name = "vdd_1p8";
++				regulator-name = "ldo4";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 		};
+ 	};
+@@ -426,12 +442,6 @@
+ 		>;
+ 	};
+ 
+-	pinctrl_pmic: pmicgrp {
+-		fsl,pins = <
+-			MX8MM_IOMUXC_GPIO1_IO03_GPIO1_IO3	0x41
+-		>;
+-	};
+-
+ 	pinctrl_uart2: uart2grp {
+ 		fsl,pins = <
+ 			MX8MM_IOMUXC_UART2_RXD_UART2_DCE_RX	0x140
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+index 905b68a3daa5a..8e4a0ce99790b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+@@ -46,7 +46,7 @@
+ 		pinctrl-0 = <&pinctrl_reg_usb1_en>;
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "usb_otg1_vbus";
+-		gpio = <&gpio1 12 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+@@ -156,7 +156,8 @@
+ 
+ 	pinctrl_reg_usb1_en: regusb1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x41
++			MX8MM_IOMUXC_GPIO1_IO10_GPIO1_IO10	0x41
++			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x141
+ 			MX8MM_IOMUXC_GPIO1_IO13_USB1_OTG_OC	0x41
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+index 9928a87f593a5..b0bcda8cc51f4 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+@@ -1227,13 +1227,13 @@
+ 
+ 		cpu@0 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <0>;
+ 		};
+ 
+ 		cpu@1 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <1>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 5ba7a4519b956..c8250a3f7891f 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -2122,7 +2122,7 @@
+ 	};
+ 
+ 	pcie_ep@14160000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>;
+ 		reg = <0x00 0x14160000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x36040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2162,7 +2162,7 @@
+ 	};
+ 
+ 	pcie_ep@14180000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+ 		reg = <0x00 0x14180000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x38040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2202,7 +2202,7 @@
+ 	};
+ 
+ 	pcie_ep@141a0000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
+ 		reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 9fa5b028e4f39..23ee1bfa43189 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -151,7 +151,7 @@
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+-		rpm_msg_ram: memory@0x60000 {
++		rpm_msg_ram: memory@60000 {
+ 			reg = <0x0 0x60000 0x0 0x6000>;
+ 			no-map;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+index e8c37a1693d3b..cc08dc4eb56a5 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
++++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+@@ -20,7 +20,7 @@
+ 		stdout-path = "serial0";
+ 	};
+ 
+-	memory {
++	memory@40000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x40000000 0x0 0x20000000>;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index f39bc10cc5bd7..d64a6e81d1a55 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -583,10 +583,10 @@
+ 
+ 		pcie1: pci@10000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x10000000 0xf1d
+-				0x10000f20 0xa8
+-				0x00088000 0x2000
+-				0x10100000 0x1000>;
++			reg =  <0x10000000 0xf1d>,
++			       <0x10000f20 0xa8>,
++			       <0x00088000 0x2000>,
++			       <0x10100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <1>;
+@@ -645,10 +645,10 @@
+ 
+ 		pcie0: pci@20000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x20000000 0xf1d
+-				0x20000f20 0xa8
+-				0x00080000 0x2000
+-				0x20100000 0x1000>;
++			reg = <0x20000000 0xf1d>,
++			      <0x20000f20 0xa8>,
++			      <0x00080000 0x2000>,
++			      <0x20100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index f9f0b5aa6a266..87a3217e88efa 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -15,16 +15,18 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
++			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32768>;
++			clock-output-names = "sleep_clk";
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 78c55ca10ba9b..77bc233f83805 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -19,14 +19,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+diff --git a/arch/arm64/boot/dts/qcom/sa8155p-adp.dts b/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
+index 0da7a3b8d1bf3..5ae2ddc65f7e4 100644
+--- a/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
++++ b/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
+@@ -307,10 +307,6 @@
+ 	status = "okay";
+ };
+ 
+-&tlmm {
+-	gpio-reserved-ranges = <0 4>;
+-};
+-
+ &uart2 {
+ 	status = "okay";
+ };
+@@ -337,6 +333,16 @@
+ 	vdda-pll-max-microamp = <18300>;
+ };
+ 
++&usb_1 {
++	status = "okay";
++};
++
++&usb_1_dwc3 {
++	dr_mode = "host";
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&usb2phy_ac_en1_default>;
++};
+ 
+ &usb_1_hsphy {
+ 	status = "okay";
+@@ -346,15 +352,51 @@
+ };
+ 
+ &usb_1_qmpphy {
++	status = "disabled";
++};
++
++&usb_2 {
+ 	status = "okay";
+-	vdda-phy-supply = <&vreg_l8c_1p2>;
+-	vdda-pll-supply = <&vdda_usb_ss_dp_core_1>;
+ };
+ 
+-&usb_1 {
++&usb_2_dwc3 {
++	dr_mode = "host";
++
++	pinctrl-names = "default";
++	pinctrl-0 = <&usb2phy_ac_en2_default>;
++};
++
++&usb_2_hsphy {
+ 	status = "okay";
++	vdda-pll-supply = <&vdd_usb_hs_core>;
++	vdda33-supply = <&vdda_usb_hs_3p1>;
++	vdda18-supply = <&vdda_usb_hs_1p8>;
+ };
+ 
+-&usb_1_dwc3 {
+-	dr_mode = "peripheral";
++&usb_2_qmpphy {
++	status = "okay";
++	vdda-phy-supply = <&vreg_l8c_1p2>;
++	vdda-pll-supply = <&vdda_usb_ss_dp_core_1>;
++};
++
++&tlmm {
++	gpio-reserved-ranges = <0 4>;
++
++	usb2phy_ac_en1_default: usb2phy_ac_en1_default {
++		mux {
++			pins = "gpio113";
++			function = "usb2phy_ac";
++			bias-disable;
++			drive-strength = <2>;
++		};
++	};
++
++	usb2phy_ac_en2_default: usb2phy_ac_en2_default {
++		mux {
++			pins = "gpio123";
++			function = "usb2phy_ac";
++			bias-disable;
++			drive-strength = <2>;
++		};
++	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index f91a928466c3b..06a0ae773ad50 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -17,14 +17,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+@@ -343,10 +343,19 @@
+ 		};
+ 
+ 		qhee_code: qhee-code@85800000 {
+-			reg = <0x0 0x85800000 0x0 0x3700000>;
++			reg = <0x0 0x85800000 0x0 0x600000>;
+ 			no-map;
+ 		};
+ 
++		rmtfs_mem: memory@85e00000 {
++			compatible = "qcom,rmtfs-mem";
++			reg = <0x0 0x85e00000 0x0 0x200000>;
++			no-map;
++
++			qcom,client-id = <1>;
++			qcom,vmid = <15>;
++		};
++
+ 		smem_region: smem-mem@86000000 {
+ 			reg = <0 0x86000000 0 0x200000>;
+ 			no-map;
+@@ -357,58 +366,44 @@
+ 			no-map;
+ 		};
+ 
+-		modem_fw_mem: modem-fw-region@8ac00000 {
++		mpss_region: mpss@8ac00000 {
+ 			reg = <0x0 0x8ac00000 0x0 0x7e00000>;
+ 			no-map;
+ 		};
+ 
+-		adsp_fw_mem: adsp-fw-region@92a00000 {
++		adsp_region: adsp@92a00000 {
+ 			reg = <0x0 0x92a00000 0x0 0x1e00000>;
+ 			no-map;
+ 		};
+ 
+-		pil_mba_mem: pil-mba-region@94800000 {
++		mba_region: mba@94800000 {
+ 			reg = <0x0 0x94800000 0x0 0x200000>;
+ 			no-map;
+ 		};
+ 
+-		buffer_mem: buffer-region@94a00000 {
++		buffer_mem: tzbuffer@94a00000 {
+ 			reg = <0x0 0x94a00000 0x0 0x100000>;
+ 			no-map;
+ 		};
+ 
+-		venus_fw_mem: venus-fw-region@9f800000 {
++		venus_region: venus@9f800000 {
+ 			reg = <0x0 0x9f800000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		secure_region2: secure-region2@f7c00000 {
+-			reg = <0x0 0xf7c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+ 		adsp_mem: adsp-region@f6000000 {
+ 			reg = <0x0 0xf6000000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		qseecom_ta_mem: qseecom-ta-region@fec00000 {
+-			reg = <0x0 0xfec00000 0x0 0x1000000>;
+-			no-map;
+-		};
+-
+ 		qseecom_mem: qseecom-region@f6800000 {
+ 			reg = <0x0 0xf6800000 0x0 0x1400000>;
+ 			no-map;
+ 		};
+ 
+-		secure_display_memory: secure-region@f5c00000 {
+-			reg = <0x0 0xf5c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+-		cont_splash_mem: cont-splash-region@9d400000 {
+-			reg = <0x0 0x9d400000 0x0 0x23ff000>;
++		zap_shader_region: gpu@fed00000 {
++			compatible = "shared-dma-pool";
++			reg = <0x0 0xfed00000 0x0 0xa00000>;
+ 			no-map;
+ 		};
+ 	};
+@@ -527,14 +522,18 @@
+ 			reg = <0x01f40000 0x20000>;
+ 		};
+ 
+-		tlmm: pinctrl@3000000 {
++		tlmm: pinctrl@3100000 {
+ 			compatible = "qcom,sdm630-pinctrl";
+-			reg = <0x03000000 0xc00000>;
++			reg = <0x03100000 0x400000>,
++				  <0x03500000 0x400000>,
++				  <0x03900000 0x400000>;
++			reg-names = "south", "center", "north";
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-controller;
+-			#gpio-cells = <0x2>;
++			gpio-ranges = <&tlmm 0 0 114>;
++			#gpio-cells = <2>;
+ 			interrupt-controller;
+-			#interrupt-cells = <0x2>;
++			#interrupt-cells = <2>;
+ 
+ 			blsp1_uart1_default: blsp1-uart1-default {
+ 				pins = "gpio0", "gpio1", "gpio2", "gpio3";
+@@ -554,40 +553,48 @@
+ 				bias-disable;
+ 			};
+ 
+-			blsp2_uart1_tx_active: blsp2-uart1-tx-active {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
+-
+-			blsp2_uart1_tx_sleep: blsp2-uart1-tx-sleep {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-pull-up;
+-			};
++			blsp2_uart1_default: blsp2-uart1-active {
++				tx-rts {
++					pins = "gpio16", "gpio19";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-disable;
++				};
+ 
+-			blsp2_uart1_rxcts_active: blsp2-uart1-rxcts-active {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++				rx {
++					/*
++					 * Avoid garbage data while BT module
++					 * is powered off or not driving signal
++					 */
++					pins = "gpio17";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rxcts_sleep: blsp2-uart1-rxcts-sleep {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				cts {
++					/* Match the pull of the BT module */
++					pins = "gpio18";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-down;
++				};
+ 			};
+ 
+-			blsp2_uart1_rfr_active: blsp2-uart1-rfr-active {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++			blsp2_uart1_sleep: blsp2-uart1-sleep {
++				tx {
++					pins = "gpio16";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rfr_sleep: blsp2-uart1-rfr-sleep {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				rx-cts-rts {
++					pins = "gpio17", "gpio18", "gpio19";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-no-pull;
++				};
+ 			};
+ 
+ 			i2c1_default: i2c1-default {
+@@ -686,50 +693,106 @@
+ 				bias-pull-up;
+ 			};
+ 
+-			sdc1_clk_on: sdc1-clk-on {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <16>;
+-			};
++			sdc1_state_on: sdc1-on {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
+ 
+-			sdc1_clk_off: sdc1-clk-off {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <2>;
+-			};
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_on: sdc1-cmd-on {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <10>;
+-			};
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_off: sdc1-cmd-off {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_data_on: sdc1-data-on {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <8>;
+-			};
++			sdc1_state_off: sdc1-off {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 
+-			sdc1_data_off: sdc1-data-off {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_rclk_on: sdc1-rclk-on {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_on: sdc2-on {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
+ 			};
+ 
+-			sdc1_rclk_off: sdc1-rclk-off {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_off: sdc2-off {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 			};
+ 		};
+ 
+@@ -823,8 +886,8 @@
+ 			clock-names = "core", "iface", "xo", "ice";
+ 
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on &sdc1_rclk_on>;
+-			pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off &sdc1_rclk_off>;
++			pinctrl-0 = <&sdc1_state_on>;
++			pinctrl-1 = <&sdc1_state_off>;
+ 
+ 			bus-width = <8>;
+ 			non-removable;
+@@ -969,10 +1032,8 @@
+ 			dmas = <&blsp2_dma 0>, <&blsp2_dma 1>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&blsp2_uart1_tx_active &blsp2_uart1_rxcts_active
+-				&blsp2_uart1_rfr_active>;
+-			pinctrl-1 = <&blsp2_uart1_tx_sleep &blsp2_uart1_rxcts_sleep
+-				&blsp2_uart1_rfr_sleep>;
++			pinctrl-0 = <&blsp2_uart1_default>;
++			pinctrl-1 = <&blsp2_uart1_sleep>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 9a6eff1813a68..7f7c8f467bfc0 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -3955,7 +3955,7 @@
+ 			};
+ 		};
+ 
+-		epss_l3: interconnect@18591000 {
++		epss_l3: interconnect@18590000 {
+ 			compatible = "qcom,sm8250-epss-l3";
+ 			reg = <0 0x18590000 0 0x1000>;
+ 
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index b83fb24954b77..3198acb2aad8c 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -149,8 +149,17 @@
+ 	ubfx	x1, x1, #ID_AA64MMFR0_FGT_SHIFT, #4
+ 	cbz	x1, .Lskip_fgt_\@
+ 
+-	msr_s	SYS_HDFGRTR_EL2, xzr
+-	msr_s	SYS_HDFGWTR_EL2, xzr
++	mov	x0, xzr
++	mrs	x1, id_aa64dfr0_el1
++	ubfx	x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
++	cmp	x1, #3
++	b.lt	.Lset_fgt_\@
++	/* Disable PMSNEVFR_EL1 read and write traps */
++	orr	x0, x0, #(1 << 62)
++
++.Lset_fgt_\@:
++	msr_s	SYS_HDFGRTR_EL2, x0
++	msr_s	SYS_HDFGWTR_EL2, x0
+ 	msr_s	SYS_HFGRTR_EL2, xzr
+ 	msr_s	SYS_HFGWTR_EL2, xzr
+ 	msr_s	SYS_HFGITR_EL2, xzr
+diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
+index 3512184cfec17..96dc0f7da258d 100644
+--- a/arch/arm64/include/asm/kernel-pgtable.h
++++ b/arch/arm64/include/asm/kernel-pgtable.h
+@@ -65,8 +65,8 @@
+ #define EARLY_KASLR	(0)
+ #endif
+ 
+-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
+-					- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
++#define EARLY_ENTRIES(vstart, vend, shift) \
++	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+ 
+ #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
+ 
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index 75beffe2ee8a8..e9c30859f80cd 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -27,11 +27,32 @@ typedef struct {
+ } mm_context_t;
+ 
+ /*
+- * This macro is only used by the TLBI and low-level switch_mm() code,
+- * neither of which can race with an ASID change. We therefore don't
+- * need to reload the counter using atomic64_read().
++ * We use atomic64_read() here because the ASID for an 'mm_struct' can
++ * be reallocated when scheduling one of its threads following a
++ * rollover event (see new_context() and flush_context()). In this case,
++ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
++ * may use a stale ASID. This is fine in principle as the new ASID is
++ * guaranteed to be clean in the TLB, but the TLBI routines have to take
++ * care to handle the following race:
++ *
++ *    CPU 0                    CPU 1                          CPU 2
++ *
++ *    // ptep_clear_flush(mm)
++ *    xchg_relaxed(pte, 0)
++ *    DSB ISHST
++ *    old = ASID(mm)
++ *         |                                                  <rollover>
++ *         |                   new = new_context(mm)
++ *         \-----------------> atomic_set(mm->context.id, new)
++ *                             cpu_switch_mm(mm)
++ *                             // Hardware walk of pte using new ASID
++ *    TLBI(old)
++ *
++ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
++ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
++ * written by CPU 0.
+  */
+-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
++#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
+ 
+ static inline bool arm64_kernel_unmapped_at_el0(void)
+ {
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index cc3f5a33ff9c5..36f02892e1df8 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
+ 
+ static inline void flush_tlb_mm(struct mm_struct *mm)
+ {
+-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
++	unsigned long asid;
+ 
+ 	dsb(ishst);
++	asid = __TLBI_VADDR(0, ASID(mm));
+ 	__tlbi(aside1is, asid);
+ 	__tlbi_user(aside1is, asid);
+ 	dsb(ish);
+@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
+ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+ 					 unsigned long uaddr)
+ {
+-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
++	unsigned long addr;
+ 
+ 	dsb(ishst);
++	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+ 	__tlbi(vale1is, addr);
+ 	__tlbi_user(vale1is, addr);
+ }
+@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ {
+ 	int num = 0;
+ 	int scale = 0;
+-	unsigned long asid = ASID(vma->vm_mm);
+-	unsigned long addr;
+-	unsigned long pages;
++	unsigned long asid, addr, pages;
+ 
+ 	start = round_down(start, stride);
+ 	end = round_up(end, stride);
+@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ 	}
+ 
+ 	dsb(ishst);
++	asid = ASID(vma->vm_mm);
+ 
+ 	/*
+ 	 * When the CPU does not support TLB range operations, flush the TLB
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index c5c994a73a645..17962452e31de 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -177,7 +177,7 @@ SYM_CODE_END(preserve_boot_args)
+  * to be composed of multiple pages. (This effectively scales the end index).
+  *
+  *	vstart:	virtual address of start of range
+- *	vend:	virtual address of end of range
++ *	vend:	virtual address of end of range - we map [vstart, vend]
+  *	shift:	shift used to transform virtual address into index
+  *	ptrs:	number of entries in page table
+  *	istart:	index in table corresponding to vstart
+@@ -214,17 +214,18 @@ SYM_CODE_END(preserve_boot_args)
+  *
+  *	tbl:	location of page table
+  *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
+- *	vstart:	start address to map
+- *	vend:	end address to map - we map [vstart, vend]
++ *	vstart:	virtual address of start of range
++ *	vend:	virtual address of end of range - we map [vstart, vend - 1]
+  *	flags:	flags to use to map last level entries
+  *	phys:	physical address corresponding to vstart - physical memory is contiguous
+  *	pgds:	the number of pgd entries
+  *
+  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
+- * Preserves:	vstart, vend, flags
+- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
++ * Preserves:	vstart, flags
++ * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
+  */
+ 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
++	sub \vend, \vend, #1
+ 	add \rtbl, \tbl, #PAGE_SIZE
+ 	mov \sv, \rtbl
+ 	mov \count, #0
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 709d2c433c5e9..f6b1a88245db2 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -181,6 +181,8 @@ SECTIONS
+ 	/* everything from this point to __init_begin will be marked RO NX */
+ 	RO_DATA(PAGE_SIZE)
+ 
++	HYPERVISOR_DATA_SECTIONS
++
+ 	idmap_pg_dir = .;
+ 	. += IDMAP_DIR_SIZE;
+ 	idmap_pg_end = .;
+@@ -260,8 +262,6 @@ SECTIONS
+ 	_sdata = .;
+ 	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+ 
+-	HYPERVISOR_DATA_SECTIONS
+-
+ 	/*
+ 	 * Data written with the MMU off but read with the MMU on requires
+ 	 * cache lines to be invalidated, discarding up to a Cache Writeback
+diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus
+index f1be832e2b746..d1e93a39cd3bc 100644
+--- a/arch/m68k/Kconfig.bus
++++ b/arch/m68k/Kconfig.bus
+@@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig"
+ 
+ endif
+ 
+-if !MMU
++if COLDFIRE
+ 
+ config ISA_DMA_API
+ 	def_bool !M5272
+diff --git a/arch/mips/mti-malta/malta-dtshim.c b/arch/mips/mti-malta/malta-dtshim.c
+index 0ddf03df62688..f451268f6c384 100644
+--- a/arch/mips/mti-malta/malta-dtshim.c
++++ b/arch/mips/mti-malta/malta-dtshim.c
+@@ -22,7 +22,7 @@
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_SHIFT	8
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_MASK	(0xf << 8)
+ 
+-static unsigned char fdt_buf[16 << 10] __initdata;
++static unsigned char fdt_buf[16 << 10] __initdata __aligned(8);
+ 
+ /* determined physical memory size, not overridden by command line args	 */
+ extern unsigned long physical_memsize;
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index bc657e55c15f8..98e4f97db5159 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -547,6 +547,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 	l.bnf	1f			// ext irq enabled, all ok.
+ 	l.nop
+ 
++#ifdef CONFIG_PRINTK
+ 	l.addi  r1,r1,-0x8
+ 	l.movhi r3,hi(42f)
+ 	l.ori	r3,r3,lo(42f)
+@@ -560,6 +561,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 		.string "\n\rESR interrupt bug: in _external_irq_handler (ESR %x)\n\r"
+ 		.align 4
+ 	.previous
++#endif
+ 
+ 	l.ori	r4,r4,SPR_SR_IEE	// fix the bug
+ //	l.sw	PT_SR(r1),r4
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index aed8ea29268bb..2d019aa73b8f0 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -25,18 +25,18 @@ CHECKFLAGS	+= -D__hppa__=1
+ ifdef CONFIG_64BIT
+ UTS_MACHINE	:= parisc64
+ CHECKFLAGS	+= -D__LP64__=1
+-CC_ARCHES	= hppa64
+ LD_BFD		:= elf64-hppa-linux
+ else # 32-bit
+-CC_ARCHES	= hppa hppa2.0 hppa1.1
+ LD_BFD		:= elf32-hppa-linux
+ endif
+ 
+ # select defconfig based on actual architecture
+-ifeq ($(shell uname -m),parisc64)
++ifeq ($(ARCH),parisc64)
+ 	KBUILD_DEFCONFIG := generic-64bit_defconfig
++	CC_ARCHES := hppa64
+ else
+ 	KBUILD_DEFCONFIG := generic-32bit_defconfig
++	CC_ARCHES := hppa hppa2.0 hppa1.1
+ endif
+ 
+ export LD_BFD
+diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c
+index fb1e94a3982bc..db1a47cf424dd 100644
+--- a/arch/parisc/kernel/signal.c
++++ b/arch/parisc/kernel/signal.c
+@@ -237,6 +237,12 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs,
+ #endif
+ 	
+ 	usp = (regs->gr[30] & ~(0x01UL));
++#ifdef CONFIG_64BIT
++	if (is_compat_task()) {
++		/* The gcc alloca implementation leaves garbage in the upper 32 bits of sp */
++		usp = (compat_uint_t)usp;
++	}
++#endif
+ 	/*FIXME: frame_size parameter is unused, remove it. */
+ 	frame = get_sigframe(&ksig->ka, usp, sizeof(*frame));
+ 
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index d21f266cea9a5..cd08f9ed2c8dd 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -21,7 +21,6 @@ CONFIG_INET=y
+ CONFIG_IP_MULTICAST=y
+ CONFIG_IP_PNP=y
+ CONFIG_SYN_COOKIES=y
+-# CONFIG_IPV6 is not set
+ # CONFIG_FW_LOADER is not set
+ CONFIG_MTD=y
+ CONFIG_MTD_BLOCK=y
+@@ -34,6 +33,7 @@ CONFIG_MTD_CFI_GEOMETRY=y
+ # CONFIG_MTD_CFI_I2 is not set
+ CONFIG_MTD_CFI_I4=y
+ CONFIG_MTD_CFI_AMDSTD=y
++CONFIG_MTD_PHYSMAP=y
+ CONFIG_MTD_PHYSMAP_OF=y
+ # CONFIG_BLK_DEV is not set
+ CONFIG_NETDEVICES=y
+@@ -76,7 +76,6 @@ CONFIG_PERF_EVENTS=y
+ CONFIG_MATH_EMULATION=y
+ CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y
+ CONFIG_STRICT_KERNEL_RWX=y
+-CONFIG_IPV6=y
+ CONFIG_BPF_JIT=y
+ CONFIG_DEBUG_VM_PGTABLE=y
+ CONFIG_BDI_SWITCH=y
+diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
+index c6bbe9778d3cd..3c09109e708ef 100644
+--- a/arch/powerpc/include/asm/pmc.h
++++ b/arch/powerpc/include/asm/pmc.h
+@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
+ #endif
+ }
+ 
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++static inline int ppc_get_pmu_inuse(void)
++{
++	return get_paca()->pmcregs_in_use;
++}
++#endif
++
+ extern void power4_enable_pmcs(void);
+ 
+ #else /* CONFIG_PPC64 */
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 447b78a87c8f2..12c75b95646a5 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1085,7 +1085,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 	}
+ 
+ 	if (cpu_to_chip_id(boot_cpuid) != -1) {
+-		int idx = num_possible_cpus() / threads_per_core;
++		int idx = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
+ 
+ 		/*
+ 		 * All threads of a core will all belong to the same core,
+@@ -1503,6 +1503,7 @@ static void add_cpu_to_masks(int cpu)
+ 	 * add it to it's own thread sibling mask.
+ 	 */
+ 	cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));
++	cpumask_set_cpu(cpu, cpu_core_mask(cpu));
+ 
+ 	for (i = first_thread; i < first_thread + threads_per_core; i++)
+ 		if (cpu_online(i))
+@@ -1520,11 +1521,6 @@ static void add_cpu_to_masks(int cpu)
+ 	if (chip_id_lookup_table && ret)
+ 		chip_id = cpu_to_chip_id(cpu);
+ 
+-	if (chip_id == -1) {
+-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+-		goto out;
+-	}
+-
+ 	if (shared_caches)
+ 		submask_fn = cpu_l2_cache_mask;
+ 
+@@ -1534,6 +1530,10 @@ static void add_cpu_to_masks(int cpu)
+ 	/* Skip all CPUs already part of current CPU core mask */
+ 	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
+ 
++	/* If chip_id is -1; limit the cpu_core_mask to within DIE*/
++	if (chip_id == -1)
++		cpumask_and(mask, mask, cpu_cpu_mask(cpu));
++
+ 	for_each_cpu(i, mask) {
+ 		if (chip_id == cpu_to_chip_id(i)) {
+ 			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
+@@ -1543,7 +1543,6 @@ static void add_cpu_to_masks(int cpu)
+ 		}
+ 	}
+ 
+-out:
+ 	free_cpumask_var(mask);
+ }
+ 
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index 2b0d04a1b7d2d..9e4a4a7af380c 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -8,6 +8,7 @@
+  * Copyright 2018 Nick Piggin, Michael Ellerman, IBM Corp.
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/export.h>
+ #include <linux/kallsyms.h>
+ #include <linux/module.h>
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index b5905ae4377c2..44eb7b1ef289e 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -65,10 +65,12 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+ 	}
+ 	isync();
+ 
++	pagefault_disable();
+ 	if (is_load)
+-		ret = copy_from_user_nofault(to, (const void __user *)from, n);
++		ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
+ 	else
+-		ret = copy_to_user_nofault((void __user *)to, from, n);
++		ret = __copy_to_user_inatomic((void __user *)to, from, n);
++	pagefault_enable();
+ 
+ 	/* switch the pid first to avoid running host with unallocated pid */
+ 	if (quadrant == 1 && pid != old_pid)
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index dc6591548f0cf..636c6ae0939b4 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -173,10 +173,13 @@ static void kvmppc_rm_tce_put(struct kvmppc_spapr_tce_table *stt,
+ 	idx -= stt->offset;
+ 	page = stt->pages[idx / TCES_PER_PAGE];
+ 	/*
+-	 * page must not be NULL in real mode,
+-	 * kvmppc_rm_ioba_validate() must have taken care of this.
++	 * kvmppc_rm_ioba_validate() allows pages not be allocated if TCE is
++	 * being cleared, otherwise it returns H_TOO_HARD and we skip this.
+ 	 */
+-	WARN_ON_ONCE_RM(!page);
++	if (!page) {
++		WARN_ON_ONCE_RM(tce != 0);
++		return;
++	}
+ 	tbl = kvmppc_page_address(page);
+ 
+ 	tbl[idx % TCES_PER_PAGE] = tce;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 085fb8ecbf688..af822f09785ff 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -59,6 +59,7 @@
+ #include <asm/kvm_book3s.h>
+ #include <asm/mmu_context.h>
+ #include <asm/lppaca.h>
++#include <asm/pmc.h>
+ #include <asm/processor.h>
+ #include <asm/cputhreads.h>
+ #include <asm/page.h>
+@@ -3852,6 +3853,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ 		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ 
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		if (vcpu->arch.vpa.pinned_addr) {
++			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
++			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
++		} else {
++			get_lppaca()->pmcregs_in_use = 1;
++		}
++		barrier();
++	}
++#endif
+ 	kvmhv_load_guest_pmu(vcpu);
+ 
+ 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
+@@ -3986,6 +3999,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	save_pmu |= nesting_enabled(vcpu->kvm);
+ 
+ 	kvmhv_save_guest_pmu(vcpu, save_pmu);
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
++		barrier();
++	}
++#endif
+ 
+ 	vc->entry_exit_map = 0x101;
+ 	vc->in_guest = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index f2bf98bdcea28..094a1076fd1fe 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -893,7 +893,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+ static void __init find_possible_nodes(void)
+ {
+ 	struct device_node *rtas;
+-	const __be32 *domains;
++	const __be32 *domains = NULL;
+ 	int prop_length, max_nodes;
+ 	u32 i;
+ 
+@@ -909,9 +909,14 @@ static void __init find_possible_nodes(void)
+ 	 * it doesn't exist, then fallback on ibm,max-associativity-domains.
+ 	 * Current denotes what the platform can support compared to max
+ 	 * which denotes what the Hypervisor can support.
++	 *
++	 * If the LPAR is migratable, new nodes might be activated after a LPM,
++	 * so we should consider the max number in that case.
+ 	 */
+-	domains = of_get_property(rtas, "ibm,current-associativity-domains",
+-					&prop_length);
++	if (!of_get_property(of_root, "ibm,migratable-partition", NULL))
++		domains = of_get_property(rtas,
++					  "ibm,current-associativity-domains",
++					  &prop_length);
+ 	if (!domains) {
+ 		domains = of_get_property(rtas, "ibm,max-associativity-domains",
+ 					&prop_length);
+@@ -920,6 +925,8 @@ static void __init find_possible_nodes(void)
+ 	}
+ 
+ 	max_nodes = of_read_number(&domains[min_common_depth], 1);
++	pr_info("Partition configured for %d NUMA nodes.\n", max_nodes);
++
+ 	for (i = 0; i < max_nodes; i++) {
+ 		if (!node_possible(i))
+ 			node_set(i, node_possible_map);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index bb0ee716de912..b0a5894090391 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -2251,18 +2251,10 @@ unsigned long perf_misc_flags(struct pt_regs *regs)
+  */
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	bool use_siar = regs_use_siar(regs);
+ 	unsigned long siar = mfspr(SPRN_SIAR);
+ 
+-	if (ppmu && (ppmu->flags & PPMU_P10_DD1)) {
+-		if (siar)
+-			return siar;
+-		else
+-			return regs->nip;
+-	} else if (use_siar && siar_valid(regs))
+-		return mfspr(SPRN_SIAR) + perf_ip_adjust(regs);
+-	else if (use_siar)
+-		return 0;		// no valid instruction pointer
++	if (regs_use_siar(regs) && siar_valid(regs) && siar)
++		return siar + perf_ip_adjust(regs);
+ 	else
+ 		return regs->nip;
+ }
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index d48413e28c39e..c756228a081fb 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -175,7 +175,7 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index,
+ 	 */
+ 	count = 0;
+ 	for (i = offset; i < offset + length; i++)
+-		count |= arg->bytes[i] << (i - offset);
++		count |= (u64)(arg->bytes[i]) << ((length - 1 - (i - offset)) * 8);
+ 
+ 	*value = count;
+ out:
+diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h
+index 3a77aa96d0925..bdb0c77bcfd9f 100644
+--- a/arch/s390/include/asm/setup.h
++++ b/arch/s390/include/asm/setup.h
+@@ -36,6 +36,7 @@
+ #define MACHINE_FLAG_NX		BIT(15)
+ #define MACHINE_FLAG_GS		BIT(16)
+ #define MACHINE_FLAG_SCC	BIT(17)
++#define MACHINE_FLAG_PCI_MIO	BIT(18)
+ 
+ #define LPP_MAGIC		BIT(31)
+ #define LPP_PID_MASK		_AC(0xffffffff, UL)
+@@ -110,6 +111,7 @@ extern unsigned long mio_wb_bit_mask;
+ #define MACHINE_HAS_NX		(S390_lowcore.machine_flags & MACHINE_FLAG_NX)
+ #define MACHINE_HAS_GS		(S390_lowcore.machine_flags & MACHINE_FLAG_GS)
+ #define MACHINE_HAS_SCC		(S390_lowcore.machine_flags & MACHINE_FLAG_SCC)
++#define MACHINE_HAS_PCI_MIO	(S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO)
+ 
+ /*
+  * Console mode. Override with conmode=
+diff --git a/arch/s390/include/asm/smp.h b/arch/s390/include/asm/smp.h
+index e317fd4866c15..f16f4d054ae25 100644
+--- a/arch/s390/include/asm/smp.h
++++ b/arch/s390/include/asm/smp.h
+@@ -18,6 +18,7 @@ extern struct mutex smp_cpu_state_mutex;
+ extern unsigned int smp_cpu_mt_shift;
+ extern unsigned int smp_cpu_mtid;
+ extern __vector128 __initdata boot_cpu_vector_save_area[__NUM_VXRS];
++extern cpumask_t cpu_setup_mask;
+ 
+ extern int __cpu_up(unsigned int cpu, struct task_struct *tidle);
+ 
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index fb84e3fc1686d..9857cb0467268 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -236,6 +236,10 @@ static __init void detect_machine_facilities(void)
+ 		clock_comparator_max = -1ULL >> 1;
+ 		__ctl_set_bit(0, 53);
+ 	}
++	if (IS_ENABLED(CONFIG_PCI) && test_facility(153)) {
++		S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO;
++		/* the control bit is set during PCI initialization */
++	}
+ }
+ 
+ static inline void save_vector_registers(void)
+diff --git a/arch/s390/kernel/jump_label.c b/arch/s390/kernel/jump_label.c
+index ab584e8e35275..9156653b56f69 100644
+--- a/arch/s390/kernel/jump_label.c
++++ b/arch/s390/kernel/jump_label.c
+@@ -36,7 +36,7 @@ static void jump_label_bug(struct jump_entry *entry, struct insn *expected,
+ 	unsigned char *ipe = (unsigned char *)expected;
+ 	unsigned char *ipn = (unsigned char *)new;
+ 
+-	pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc);
++	pr_emerg("Jump label code mismatch at %pS [%px]\n", ipc, ipc);
+ 	pr_emerg("Found:    %6ph\n", ipc);
+ 	pr_emerg("Expected: %6ph\n", ipe);
+ 	pr_emerg("New:      %6ph\n", ipn);
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 8e8ace899407c..1909ec99d47d7 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -95,6 +95,7 @@ __vector128 __initdata boot_cpu_vector_save_area[__NUM_VXRS];
+ #endif
+ 
+ static unsigned int smp_max_threads __initdata = -1U;
++cpumask_t cpu_setup_mask;
+ 
+ static int __init early_nosmt(char *s)
+ {
+@@ -894,13 +895,14 @@ static void smp_init_secondary(void)
+ 	vtime_init();
+ 	vdso_getcpu_init();
+ 	pfault_init();
++	cpumask_set_cpu(cpu, &cpu_setup_mask);
++	update_cpu_masks();
+ 	notify_cpu_starting(cpu);
+ 	if (topology_cpu_dedicated(cpu))
+ 		set_cpu_flag(CIF_DEDICATED_CPU);
+ 	else
+ 		clear_cpu_flag(CIF_DEDICATED_CPU);
+ 	set_cpu_online(cpu, true);
+-	update_cpu_masks();
+ 	inc_irq_stat(CPU_RST);
+ 	local_irq_enable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+@@ -955,10 +957,13 @@ early_param("possible_cpus", _setup_possible_cpus);
+ int __cpu_disable(void)
+ {
+ 	unsigned long cregs[16];
++	int cpu;
+ 
+ 	/* Handle possible pending IPIs */
+ 	smp_handle_ext_call();
+-	set_cpu_online(smp_processor_id(), false);
++	cpu = smp_processor_id();
++	set_cpu_online(cpu, false);
++	cpumask_clear_cpu(cpu, &cpu_setup_mask);
+ 	update_cpu_masks();
+ 	/* Disable pseudo page faults on this cpu. */
+ 	pfault_fini();
+diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
+index 26aa2614ee352..eb4047c9da9a3 100644
+--- a/arch/s390/kernel/topology.c
++++ b/arch/s390/kernel/topology.c
+@@ -67,7 +67,7 @@ static void cpu_group_map(cpumask_t *dst, struct mask_info *info, unsigned int c
+ 	static cpumask_t mask;
+ 
+ 	cpumask_clear(&mask);
+-	if (!cpu_online(cpu))
++	if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
+ 		goto out;
+ 	cpumask_set_cpu(cpu, &mask);
+ 	switch (topology_mode) {
+@@ -88,7 +88,7 @@ static void cpu_group_map(cpumask_t *dst, struct mask_info *info, unsigned int c
+ 	case TOPOLOGY_MODE_SINGLE:
+ 		break;
+ 	}
+-	cpumask_and(&mask, &mask, cpu_online_mask);
++	cpumask_and(&mask, &mask, &cpu_setup_mask);
+ out:
+ 	cpumask_copy(dst, &mask);
+ }
+@@ -99,16 +99,16 @@ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
+ 	int i;
+ 
+ 	cpumask_clear(&mask);
+-	if (!cpu_online(cpu))
++	if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
+ 		goto out;
+ 	cpumask_set_cpu(cpu, &mask);
+ 	if (topology_mode != TOPOLOGY_MODE_HW)
+ 		goto out;
+ 	cpu -= cpu % (smp_cpu_mtid + 1);
+-	for (i = 0; i <= smp_cpu_mtid; i++)
+-		if (cpu_present(cpu + i))
++	for (i = 0; i <= smp_cpu_mtid; i++) {
++		if (cpumask_test_cpu(cpu + i, &cpu_setup_mask))
+ 			cpumask_set_cpu(cpu + i, &mask);
+-	cpumask_and(&mask, &mask, cpu_online_mask);
++	}
+ out:
+ 	cpumask_copy(dst, &mask);
+ }
+@@ -569,6 +569,7 @@ void __init topology_init_early(void)
+ 	alloc_masks(info, &book_info, 2);
+ 	alloc_masks(info, &drawer_info, 3);
+ out:
++	cpumask_set_cpu(0, &cpu_setup_mask);
+ 	__arch_update_cpu_topology();
+ 	__arch_update_dedicated_flag(NULL);
+ }
+diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
+index 8ac710de1ab1b..07bbee9b7320d 100644
+--- a/arch/s390/mm/init.c
++++ b/arch/s390/mm/init.c
+@@ -186,9 +186,9 @@ static void pv_init(void)
+ 		return;
+ 
+ 	/* make sure bounce buffers are shared */
++	swiotlb_force = SWIOTLB_FORCE;
+ 	swiotlb_init(1);
+ 	swiotlb_update_mem_attributes();
+-	swiotlb_force = SWIOTLB_FORCE;
+ }
+ 
+ void __init mem_init(void)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 77cd965cffefa..34839bad33e4d 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -893,7 +893,6 @@ static void zpci_mem_exit(void)
+ }
+ 
+ static unsigned int s390_pci_probe __initdata = 1;
+-static unsigned int s390_pci_no_mio __initdata;
+ unsigned int s390_pci_force_floating __initdata;
+ static unsigned int s390_pci_initialized;
+ 
+@@ -904,7 +903,7 @@ char * __init pcibios_setup(char *str)
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "nomio")) {
+-		s390_pci_no_mio = 1;
++		S390_lowcore.machine_flags &= ~MACHINE_FLAG_PCI_MIO;
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "force_floating")) {
+@@ -935,7 +934,7 @@ static int __init pci_base_init(void)
+ 		return 0;
+ 	}
+ 
+-	if (test_facility(153) && !s390_pci_no_mio) {
++	if (MACHINE_HAS_PCI_MIO) {
+ 		static_branch_enable(&have_mio);
+ 		ctl_set_bit(2, 5);
+ 	}
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index c890d67a64ad0..ba54c44a64e2e 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -375,8 +375,6 @@ static void __init ms_hyperv_init_platform(void)
+ 	if (ms_hyperv.features & HV_ACCESS_TSC_INVARIANT) {
+ 		wrmsrl(HV_X64_MSR_TSC_INVARIANT_CONTROL, 0x1);
+ 		setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+-	} else {
+-		mark_tsc_unstable("running on Hyper-V");
+ 	}
+ 
+ 	/*
+@@ -437,6 +435,13 @@ static void __init ms_hyperv_init_platform(void)
+ 	/* Register Hyper-V specific clocksource */
+ 	hv_init_clocksource();
+ #endif
++	/*
++	 * TSC should be marked as unstable only after Hyper-V
++	 * clocksource has been initialized. This ensures that the
++	 * stability of the sched_clock is not altered.
++	 */
++	if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT))
++		mark_tsc_unstable("running on Hyper-V");
+ }
+ 
+ static bool __init ms_hyperv_x2apic_available(void)
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index ac06ca32e9ef7..5e6e236977c75 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -618,8 +618,8 @@ int xen_alloc_p2m_entry(unsigned long pfn)
+ 	}
+ 
+ 	/* Expanded the p2m? */
+-	if (pfn > xen_p2m_last_pfn) {
+-		xen_p2m_last_pfn = pfn;
++	if (pfn >= xen_p2m_last_pfn) {
++		xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
+ 		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
+ 	}
+ 
+diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
+index 21184488c277f..0108504dfb454 100644
+--- a/arch/xtensa/platforms/iss/console.c
++++ b/arch/xtensa/platforms/iss/console.c
+@@ -136,9 +136,13 @@ static const struct tty_operations serial_ops = {
+ 
+ static int __init rs_init(void)
+ {
+-	tty_port_init(&serial_port);
++	int ret;
+ 
+ 	serial_driver = alloc_tty_driver(SERIAL_MAX_NUM_LINES);
++	if (!serial_driver)
++		return -ENOMEM;
++
++	tty_port_init(&serial_port);
+ 
+ 	/* Initialize the tty_driver structure */
+ 
+@@ -156,8 +160,15 @@ static int __init rs_init(void)
+ 	tty_set_operations(serial_driver, &serial_ops);
+ 	tty_port_link_device(&serial_port, serial_driver, 0);
+ 
+-	if (tty_register_driver(serial_driver))
+-		panic("Couldn't register serial driver\n");
++	ret = tty_register_driver(serial_driver);
++	if (ret) {
++		pr_err("Couldn't register serial driver\n");
++		tty_driver_kref_put(serial_driver);
++		tty_port_destroy(&serial_port);
++
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 673a634eadd9f..9360c65169ff4 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5296,7 +5296,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 	if (bfqq->new_ioprio >= IOPRIO_BE_NR) {
+ 		pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
+ 			bfqq->new_ioprio);
+-		bfqq->new_ioprio = IOPRIO_BE_NR;
++		bfqq->new_ioprio = IOPRIO_BE_NR - 1;
+ 	}
+ 
+ 	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 86fce751bb173..1d0c76c18fc52 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -360,9 +360,6 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
+ 		return -EFAULT;
+ 
+@@ -421,9 +418,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (!(mode & FMODE_WRITE))
+ 		return -EBADF;
+ 
+diff --git a/block/bsg.c b/block/bsg.c
+index 1f196563ae6ca..79b42c5cafeb8 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -373,10 +373,13 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	case SG_GET_RESERVED_SIZE:
+ 	case SG_SET_RESERVED_SIZE:
+ 	case SG_EMULATED_HOST:
+-	case SCSI_IOCTL_SEND_COMMAND:
+ 		return scsi_cmd_ioctl(bd->queue, NULL, file->f_mode, cmd, uarg);
+ 	case SG_IO:
+ 		return bsg_sg_io(bd->queue, file->f_mode, uarg);
++	case SCSI_IOCTL_SEND_COMMAND:
++		pr_warn_ratelimited("%s: calling unsupported SCSI_IOCTL_SEND_COMMAND\n",
++				current->comm);
++		return -EINVAL;
+ 	default:
+ 		return -ENOTTY;
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 44f434acfce08..0e6e73b8023fc 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3950,6 +3950,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 860*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 870*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "FCCT*M500*",			NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index f0ef844428bb4..338c2e50f7591 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -1259,24 +1259,20 @@ static int sata_dwc_probe(struct platform_device *ofdev)
+ 	irq = irq_of_parse_and_map(np, 0);
+ 	if (irq == NO_IRQ) {
+ 		dev_err(&ofdev->dev, "no SATA DMA irq\n");
+-		err = -ENODEV;
+-		goto error_out;
++		return -ENODEV;
+ 	}
+ 
+ #ifdef CONFIG_SATA_DWC_OLD_DMA
+ 	if (!of_find_property(np, "dmas", NULL)) {
+ 		err = sata_dwc_dma_init_old(ofdev, hsdev);
+ 		if (err)
+-			goto error_out;
++			return err;
+ 	}
+ #endif
+ 
+ 	hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy");
+-	if (IS_ERR(hsdev->phy)) {
+-		err = PTR_ERR(hsdev->phy);
+-		hsdev->phy = NULL;
+-		goto error_out;
+-	}
++	if (IS_ERR(hsdev->phy))
++		return PTR_ERR(hsdev->phy);
+ 
+ 	err = phy_init(hsdev->phy);
+ 	if (err)
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 6c0ef9d55a343..8c77e14987d4b 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -886,6 +886,8 @@ static void device_link_put_kref(struct device_link *link)
+ {
+ 	if (link->flags & DL_FLAG_STATELESS)
+ 		kref_put(&link->kref, __device_link_del);
++	else if (!device_is_registered(link->consumer))
++		__device_link_del(&link->kref);
+ 	else
+ 		WARN(1, "Unable to drop a managed device link reference\n");
+ }
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 09c8ab5e0959e..32b2b6d9bde0b 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -68,6 +68,8 @@ struct fsl_mc_addr_translation_range {
+ #define MC_FAPR_PL	BIT(18)
+ #define MC_FAPR_BMT	BIT(17)
+ 
++static phys_addr_t mc_portal_base_phys_addr;
++
+ /**
+  * fsl_mc_bus_match - device to driver matching callback
+  * @dev: the fsl-mc device to match against
+@@ -220,7 +222,7 @@ static int scan_fsl_mc_bus(struct device *dev, void *data)
+ 	root_mc_dev = to_fsl_mc_device(dev);
+ 	root_mc_bus = to_fsl_mc_bus(root_mc_dev);
+ 	mutex_lock(&root_mc_bus->scan_mutex);
+-	dprc_scan_objects(root_mc_dev, NULL);
++	dprc_scan_objects(root_mc_dev, false);
+ 	mutex_unlock(&root_mc_bus->scan_mutex);
+ 
+ exit:
+@@ -703,14 +705,30 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
+ 		 * If base address is in the region_desc use it otherwise
+ 		 * revert to old mechanism
+ 		 */
+-		if (region_desc.base_address)
++		if (region_desc.base_address) {
+ 			regions[i].start = region_desc.base_address +
+ 						region_desc.base_offset;
+-		else
++		} else {
+ 			error = translate_mc_addr(mc_dev, mc_region_type,
+ 					  region_desc.base_offset,
+ 					  &regions[i].start);
+ 
++			/*
++			 * Some versions of the MC firmware wrongly report
++			 * 0 for register base address of the DPMCP associated
++			 * with child DPRC objects thus rendering them unusable.
++			 * This is particularly troublesome in ACPI boot
++			 * scenarios where the legacy way of extracting this
++			 * base address from the device tree does not apply.
++			 * Given that DPMCPs share the same base address,
++			 * workaround this by using the base address extracted
++			 * from the root DPRC container.
++			 */
++			if (is_fsl_mc_bus_dprc(mc_dev) &&
++			    regions[i].start == region_desc.base_offset)
++				regions[i].start += mc_portal_base_phys_addr;
++		}
++
+ 		if (error < 0) {
+ 			dev_err(parent_dev,
+ 				"Invalid MC offset: %#x (for %s.%d\'s region %d)\n",
+@@ -1126,6 +1144,8 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
+ 	plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	mc_portal_phys_addr = plat_res->start;
+ 	mc_portal_size = resource_size(plat_res);
++	mc_portal_base_phys_addr = mc_portal_phys_addr & ~0x3ffffff;
++
+ 	error = fsl_create_mc_io(&pdev->dev, mc_portal_phys_addr,
+ 				 mc_portal_size, NULL,
+ 				 FSL_MC_IO_ATOMIC_CONTEXT_PORTAL, &mc_io);
+diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c
+index b4fc8d71daf20..b656d25a97678 100644
+--- a/drivers/clk/at91/clk-generated.c
++++ b/drivers/clk/at91/clk-generated.c
+@@ -128,6 +128,12 @@ static int clk_generated_determine_rate(struct clk_hw *hw,
+ 	int i;
+ 	u32 div;
+ 
++	/* do not look for a rate that is outside of our range */
++	if (gck->range.max && req->rate > gck->range.max)
++		req->rate = gck->range.max;
++	if (gck->range.min && req->rate < gck->range.min)
++		req->rate = gck->range.min;
++
+ 	for (i = 0; i < clk_hw_get_num_parents(hw); i++) {
+ 		if (gck->chg_pid == i)
+ 			continue;
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 2c309e3dc8e34..04e728538cefe 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -216,7 +216,8 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		div->width = PCG_PREDIV_WIDTH;
+ 		divider_ops = &imx8m_clk_composite_divider_ops;
+ 		mux_ops = &clk_mux_ops;
+-		flags |= CLK_SET_PARENT_GATE;
++		if (!(composite_flags & IMX_COMPOSITE_FW_MANAGED))
++			flags |= CLK_SET_PARENT_GATE;
+ 	}
+ 
+ 	div->lock = &imx_ccm_lock;
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index f1919fafb1247..e92621fa8b9cd 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -407,10 +407,10 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MM_SYS_PLL2_500M] = imx_clk_hw_fixed_factor("sys_pll2_500m", "sys_pll2_500m_cg", 1, 2);
+ 	hws[IMX8MM_SYS_PLL2_1000M] = imx_clk_hw_fixed_factor("sys_pll2_1000m", "sys_pll2_out", 1, 1);
+ 
+-	hws[IMX8MM_CLK_CLKOUT1_SEL] = imx_clk_hw_mux("clkout1_sel", base + 0x128, 4, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
++	hws[IMX8MM_CLK_CLKOUT1_SEL] = imx_clk_hw_mux2("clkout1_sel", base + 0x128, 4, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
+ 	hws[IMX8MM_CLK_CLKOUT1_DIV] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", base + 0x128, 0, 4);
+ 	hws[IMX8MM_CLK_CLKOUT1] = imx_clk_hw_gate("clkout1", "clkout1_div", base + 0x128, 8);
+-	hws[IMX8MM_CLK_CLKOUT2_SEL] = imx_clk_hw_mux("clkout2_sel", base + 0x128, 20, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
++	hws[IMX8MM_CLK_CLKOUT2_SEL] = imx_clk_hw_mux2("clkout2_sel", base + 0x128, 20, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
+ 	hws[IMX8MM_CLK_CLKOUT2_DIV] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", base + 0x128, 16, 4);
+ 	hws[IMX8MM_CLK_CLKOUT2] = imx_clk_hw_gate("clkout2", "clkout2_div", base + 0x128, 24);
+ 
+@@ -470,10 +470,11 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+-	hws[IMX8MM_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MM_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mm_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MM_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000);
++	hws[IMX8MM_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mm_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MM_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mm_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 88f6630cd472f..0a76f969b28b3 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -453,10 +453,11 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+-	hws[IMX8MN_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MN_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mn_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MN_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000);
++	hws[IMX8MN_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mn_dram_apb_sels, base + 0xa080);
+ 
+ 	hws[IMX8MN_CLK_DISP_PIXEL] = imx8m_clk_hw_composite("disp_pixel", imx8mn_disp_pixel_sels, base + 0xa500);
+ 	hws[IMX8MN_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mn_sai2_sels, base + 0xa600);
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index c491bc9c61ce7..83cc2b1c32947 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -449,11 +449,12 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as div value should always be read from hardware
+ 	 */
+ 	hws[IMX8MQ_CLK_DRAM_CORE] = imx_clk_hw_mux2_flags("dram_core_clk", base + 0x9800, 24, 1, imx8mq_dram_core_sels, ARRAY_SIZE(imx8mq_dram_core_sels), CLK_IS_CRITICAL);
+-	hws[IMX8MQ_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MQ_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mq_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MQ_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000);
++	hws[IMX8MQ_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mq_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MQ_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mq_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index 7571603bee23b..e144f983fd8ce 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -530,8 +530,9 @@ struct clk_hw *imx_clk_hw_cpu(const char *name, const char *parent_name,
+ 		struct clk *div, struct clk *mux, struct clk *pll,
+ 		struct clk *step);
+ 
+-#define IMX_COMPOSITE_CORE	BIT(0)
+-#define IMX_COMPOSITE_BUS	BIT(1)
++#define IMX_COMPOSITE_CORE		BIT(0)
++#define IMX_COMPOSITE_BUS		BIT(1)
++#define IMX_COMPOSITE_FW_MANAGED	BIT(2)
+ 
+ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 					    const char * const *parent_names,
+@@ -567,6 +568,17 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		ARRAY_SIZE(parent_names), reg, 0, \
+ 		flags | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
+ 
++#define __imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, flags) \
++	imx8m_clk_hw_composite_flags(name, parent_names, \
++		ARRAY_SIZE(parent_names), reg, IMX_COMPOSITE_FW_MANAGED, \
++		flags | CLK_GET_RATE_NOCACHE | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
++
++#define imx8m_clk_hw_fw_managed_composite(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, 0)
++
++#define imx8m_clk_hw_fw_managed_composite_critical(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, CLK_IS_CRITICAL)
++
+ #define __imx8m_clk_composite(name, parent_names, reg, flags) \
+ 	to_clk(__imx8m_clk_hw_composite(name, parent_names, reg, flags))
+ 
+diff --git a/drivers/clk/ralink/clk-mt7621.c b/drivers/clk/ralink/clk-mt7621.c
+index 857da1e274be9..a2c045390f008 100644
+--- a/drivers/clk/ralink/clk-mt7621.c
++++ b/drivers/clk/ralink/clk-mt7621.c
+@@ -131,14 +131,7 @@ static int mt7621_gate_ops_init(struct device *dev,
+ 				struct mt7621_gate *sclk)
+ {
+ 	struct clk_init_data init = {
+-		/*
+-		 * Until now no clock driver existed so
+-		 * these SoC drivers are not prepared
+-		 * yet for the clock. We don't want kernel to
+-		 * disable anything so we add CLK_IS_CRITICAL
+-		 * flag here.
+-		 */
+-		.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++		.flags = CLK_SET_RATE_PARENT,
+ 		.num_parents = 1,
+ 		.parent_names = &sclk->parent_name,
+ 		.ops = &mt7621_gate_ops,
+diff --git a/drivers/clk/renesas/renesas-rzg2l-cpg.c b/drivers/clk/renesas/renesas-rzg2l-cpg.c
+index e7c59af2a1d85..f894a210de902 100644
+--- a/drivers/clk/renesas/renesas-rzg2l-cpg.c
++++ b/drivers/clk/renesas/renesas-rzg2l-cpg.c
+@@ -229,7 +229,7 @@ static struct clk
+ 
+ 	case CPG_MOD:
+ 		type = "module";
+-		if (clkidx > priv->num_mod_clks) {
++		if (clkidx >= priv->num_mod_clks) {
+ 			dev_err(dev, "Invalid %s clock index %u\n", type,
+ 				clkidx);
+ 			return ERR_PTR(-EINVAL);
+diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
+index fe937bcdb4876..f7827b3b7fc1c 100644
+--- a/drivers/clk/rockchip/clk-pll.c
++++ b/drivers/clk/rockchip/clk-pll.c
+@@ -940,7 +940,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ 	switch (pll_type) {
+ 	case pll_rk3036:
+ 	case pll_rk3328:
+-		if (!pll->rate_table || IS_ERR(ctx->grf))
++		if (!pll->rate_table)
+ 			init.ops = &rockchip_rk3036_pll_clk_norate_ops;
+ 		else
+ 			init.ops = &rockchip_rk3036_pll_clk_ops;
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 1cb21ea79c640..242e94c0cf8a3 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -107,10 +107,10 @@ static const struct clk_parent_data gpio_db_free_mux[] = {
+ };
+ 
+ static const struct clk_parent_data psi_ref_free_mux[] = {
+-	{ .fw_name = "main_pll_c3",
+-	  .name = "main_pll_c3", },
+-	{ .fw_name = "peri_pll_c3",
+-	  .name = "peri_pll_c3", },
++	{ .fw_name = "main_pll_c2",
++	  .name = "main_pll_c2", },
++	{ .fw_name = "peri_pll_c2",
++	  .name = "peri_pll_c2", },
+ 	{ .fw_name = "osc1",
+ 	  .name = "osc1", },
+ 	{ .fw_name = "cb-intosc-hs-div2-clk",
+@@ -195,6 +195,13 @@ static const struct clk_parent_data sdmmc_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
++static const struct clk_parent_data s2f_user0_mux[] = {
++	{ .fw_name = "s2f_user0_free_clk",
++	  .name = "s2f_user0_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ static const struct clk_parent_data s2f_user1_mux[] = {
+ 	{ .fw_name = "s2f_user1_free_clk",
+ 	  .name = "s2f_user1_free_clk", },
+@@ -273,7 +280,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+ 	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
+ 	{ AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
+-	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
++	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0x30, 2},
+ 	{ AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
+ 	  ARRAY_SIZE(s2f_usr1_free_mux), 0, 0xEC, 0, 0x88, 5},
+ 	{ AGILEX_PSI_REF_FREE_CLK, "psi_ref_free_clk", NULL, psi_ref_free_mux,
+@@ -319,6 +326,8 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  4, 0x98, 0, 16, 0x88, 3, 0},
+ 	{ AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
+ 	  5, 0, 0, 0, 0x88, 4, 4},
++	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_user0_mux, ARRAY_SIZE(s2f_user0_mux), 0, 0x24,
++	  6, 0, 0, 0, 0x30, 2, 0},
+ 	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
+ 	  6, 0, 0, 0, 0x88, 5, 0},
+ 	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index 005600cef2730..6fbb46b2f6dac 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -36,6 +36,7 @@
+ #define MAX_PSTATE_SHIFT	32
+ #define LPSTATE_SHIFT		48
+ #define GPSTATE_SHIFT		56
++#define MAX_NR_CHIPS		32
+ 
+ #define MAX_RAMP_DOWN_TIME				5120
+ /*
+@@ -1046,12 +1047,20 @@ static int init_chip_info(void)
+ 	unsigned int *chip;
+ 	unsigned int cpu, i;
+ 	unsigned int prev_chip_id = UINT_MAX;
++	cpumask_t *chip_cpu_mask;
+ 	int ret = 0;
+ 
+ 	chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL);
+ 	if (!chip)
+ 		return -ENOMEM;
+ 
++	/* Allocate a chip cpu mask large enough to fit mask for all chips */
++	chip_cpu_mask = kcalloc(MAX_NR_CHIPS, sizeof(cpumask_t), GFP_KERNEL);
++	if (!chip_cpu_mask) {
++		ret = -ENOMEM;
++		goto free_and_return;
++	}
++
+ 	for_each_possible_cpu(cpu) {
+ 		unsigned int id = cpu_to_chip_id(cpu);
+ 
+@@ -1059,22 +1068,25 @@ static int init_chip_info(void)
+ 			prev_chip_id = id;
+ 			chip[nr_chips++] = id;
+ 		}
++		cpumask_set_cpu(cpu, &chip_cpu_mask[nr_chips-1]);
+ 	}
+ 
+ 	chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL);
+ 	if (!chips) {
+ 		ret = -ENOMEM;
+-		goto free_and_return;
++		goto out_free_chip_cpu_mask;
+ 	}
+ 
+ 	for (i = 0; i < nr_chips; i++) {
+ 		chips[i].id = chip[i];
+-		cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
++		cpumask_copy(&chips[i].mask, &chip_cpu_mask[i]);
+ 		INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
+ 		for_each_cpu(cpu, &chips[i].mask)
+ 			per_cpu(chip_info, cpu) =  &chips[i];
+ 	}
+ 
++out_free_chip_cpu_mask:
++	kfree(chip_cpu_mask);
+ free_and_return:
+ 	kfree(chip);
+ 	return ret;
+diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
+index a2b5c6f60cf0e..ff164dec8422e 100644
+--- a/drivers/cpuidle/cpuidle-pseries.c
++++ b/drivers/cpuidle/cpuidle-pseries.c
+@@ -402,7 +402,7 @@ static void __init fixup_cede0_latency(void)
+  * pseries_idle_probe()
+  * Choose state table for shared versus dedicated partition
+  */
+-static int pseries_idle_probe(void)
++static int __init pseries_idle_probe(void)
+ {
+ 
+ 	if (cpuidle_disable != IDLE_NO_OVERRIDE)
+@@ -419,7 +419,21 @@ static int pseries_idle_probe(void)
+ 			cpuidle_state_table = shared_states;
+ 			max_idle_state = ARRAY_SIZE(shared_states);
+ 		} else {
+-			fixup_cede0_latency();
++			/*
++			 * Use firmware provided latency values
++			 * starting with POWER10 platforms. In the
++			 * case that we are running on a POWER10
++			 * platform but in an earlier compat mode, we
++			 * can still use the firmware provided values.
++			 *
++			 * However, on platforms prior to POWER10, we
++			 * cannot rely on the accuracy of the firmware
++			 * provided latency values. On such platforms,
++			 * go with the conservative default estimate
++			 * of 10us.
++			 */
++			if (cpu_has_feature(CPU_FTR_ARCH_31) || pvr_version_is(PVR_POWER10))
++				fixup_cede0_latency();
+ 			cpuidle_state_table = dedicated_states;
+ 			max_idle_state = NR_DEDICATED_STATES;
+ 		}
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 91808402e0bf2..2ecb0e1f65d8d 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -300,6 +300,9 @@ static int __sev_platform_shutdown_locked(int *error)
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	int ret;
+ 
++	if (sev->state == SEV_STATE_UNINIT)
++		return 0;
++
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+ 	if (ret)
+ 		return ret;
+@@ -1019,6 +1022,20 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sev_firmware_shutdown(struct sev_device *sev)
++{
++	sev_platform_shutdown(NULL);
++
++	if (sev_es_tmr) {
++		/* The TMR area was encrypted, flush it from the cache */
++		wbinvd_on_all_cpus();
++
++		free_pages((unsigned long)sev_es_tmr,
++			   get_order(SEV_ES_TMR_SIZE));
++		sev_es_tmr = NULL;
++	}
++}
++
+ void sev_dev_destroy(struct psp_device *psp)
+ {
+ 	struct sev_device *sev = psp->sev_data;
+@@ -1026,6 +1043,8 @@ void sev_dev_destroy(struct psp_device *psp)
+ 	if (!sev)
+ 		return;
+ 
++	sev_firmware_shutdown(sev);
++
+ 	if (sev->misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+@@ -1056,21 +1075,6 @@ void sev_pci_init(void)
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+-	/*
+-	 * If platform is not in UNINIT state then firmware upgrade and/or
+-	 * platform INIT command will fail. These command require UNINIT state.
+-	 *
+-	 * In a normal boot we should never run into case where the firmware
+-	 * is not in UNINIT state on boot. But in case of kexec boot, a reboot
+-	 * may not go through a typical shutdown sequence and may leave the
+-	 * firmware in INIT or WORKING state.
+-	 */
+-
+-	if (sev->state != SEV_STATE_UNINIT) {
+-		sev_platform_shutdown(NULL);
+-		sev->state = SEV_STATE_UNINIT;
+-	}
+-
+ 	if (sev_version_greater_or_equal(0, 15) &&
+ 	    sev_update_firmware(sev->dev) == 0)
+ 		sev_get_api_version();
+@@ -1115,17 +1119,10 @@ err:
+ 
+ void sev_pci_exit(void)
+ {
+-	if (!psp_master->sev_data)
+-		return;
+-
+-	sev_platform_shutdown(NULL);
++	struct sev_device *sev = psp_master->sev_data;
+ 
+-	if (sev_es_tmr) {
+-		/* The TMR area was encrypted, flush it from the cache */
+-		wbinvd_on_all_cpus();
++	if (!sev)
++		return;
+ 
+-		free_pages((unsigned long)sev_es_tmr,
+-			   get_order(SEV_ES_TMR_SIZE));
+-		sev_es_tmr = NULL;
+-	}
++	sev_firmware_shutdown(sev);
+ }
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 6fb6ba35f89d4..9bcc1884c06a1 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -241,6 +241,17 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sp_pci_shutdown(struct pci_dev *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct sp_device *sp = dev_get_drvdata(dev);
++
++	if (!sp)
++		return;
++
++	sp_destroy(sp);
++}
++
+ static void sp_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -371,6 +382,7 @@ static struct pci_driver sp_pci_driver = {
+ 	.id_table = sp_pci_table,
+ 	.probe = sp_pci_probe,
+ 	.remove = sp_pci_remove,
++	.shutdown = sp_pci_shutdown,
+ 	.driver.pm = &sp_pci_pm_ops,
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index f397cc5bf1021..d19e5ffb5104b 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -300,21 +300,20 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+-	const int nents = sg_nents(req->src);
++	int dst_nents = sg_nents(dst);
+ 
+ 	const int out_off = DCP_BUF_SZ;
+ 	uint8_t *in_buf = sdcp->coh->aes_in_buf;
+ 	uint8_t *out_buf = sdcp->coh->aes_out_buf;
+ 
+-	uint8_t *out_tmp, *src_buf, *dst_buf = NULL;
+ 	uint32_t dst_off = 0;
++	uint8_t *src_buf = NULL;
+ 	uint32_t last_out_len = 0;
+ 
+ 	uint8_t *key = sdcp->coh->aes_key;
+ 
+ 	int ret = 0;
+-	int split = 0;
+-	unsigned int i, len, clen, rem = 0, tlen = 0;
++	unsigned int i, len, clen, tlen = 0;
+ 	int init = 0;
+ 	bool limit_hit = false;
+ 
+@@ -332,7 +331,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 		memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ 	}
+ 
+-	for_each_sg(req->src, src, nents, i) {
++	for_each_sg(req->src, src, sg_nents(src), i) {
+ 		src_buf = sg_virt(src);
+ 		len = sg_dma_len(src);
+ 		tlen += len;
+@@ -357,34 +356,17 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 			 * submit the buffer.
+ 			 */
+ 			if (actx->fill == out_off || sg_is_last(src) ||
+-				limit_hit) {
++			    limit_hit) {
+ 				ret = mxs_dcp_run_aes(actx, req, init);
+ 				if (ret)
+ 					return ret;
+ 				init = 0;
+ 
+-				out_tmp = out_buf;
++				sg_pcopy_from_buffer(dst, dst_nents, out_buf,
++						     actx->fill, dst_off);
++				dst_off += actx->fill;
+ 				last_out_len = actx->fill;
+-				while (dst && actx->fill) {
+-					if (!split) {
+-						dst_buf = sg_virt(dst);
+-						dst_off = 0;
+-					}
+-					rem = min(sg_dma_len(dst) - dst_off,
+-						  actx->fill);
+-
+-					memcpy(dst_buf + dst_off, out_tmp, rem);
+-					out_tmp += rem;
+-					dst_off += rem;
+-					actx->fill -= rem;
+-
+-					if (dst_off == sg_dma_len(dst)) {
+-						dst = sg_next(dst);
+-						split = 0;
+-					} else {
+-						split = 1;
+-					}
+-				}
++				actx->fill = 0;
+ 			}
+ 		} while (len);
+ 
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index f26c71747d43a..e744fd87c63c8 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -615,25 +615,21 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
+  */
+ bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
+ {
+-	unsigned int seq, shared_count;
++	struct dma_fence *fence;
++	unsigned int seq;
+ 	int ret;
+ 
+ 	rcu_read_lock();
+ retry:
+ 	ret = true;
+-	shared_count = 0;
+ 	seq = read_seqcount_begin(&obj->seq);
+ 
+ 	if (test_all) {
+ 		struct dma_resv_list *fobj = dma_resv_shared_list(obj);
+-		unsigned int i;
+-
+-		if (fobj)
+-			shared_count = fobj->shared_count;
++		unsigned int i, shared_count;
+ 
++		shared_count = fobj ? fobj->shared_count : 0;
+ 		for (i = 0; i < shared_count; ++i) {
+-			struct dma_fence *fence;
+-
+ 			fence = rcu_dereference(fobj->shared[i]);
+ 			ret = dma_resv_test_signaled_single(fence);
+ 			if (ret < 0)
+@@ -641,24 +637,19 @@ retry:
+ 			else if (!ret)
+ 				break;
+ 		}
+-
+-		if (read_seqcount_retry(&obj->seq, seq))
+-			goto retry;
+ 	}
+ 
+-	if (!shared_count) {
+-		struct dma_fence *fence_excl = dma_resv_excl_fence(obj);
+-
+-		if (fence_excl) {
+-			ret = dma_resv_test_signaled_single(fence_excl);
+-			if (ret < 0)
+-				goto retry;
++	fence = dma_resv_excl_fence(obj);
++	if (ret && fence) {
++		ret = dma_resv_test_signaled_single(fence);
++		if (ret < 0)
++			goto retry;
+ 
+-			if (read_seqcount_retry(&obj->seq, seq))
+-				goto retry;
+-		}
+ 	}
+ 
++	if (read_seqcount_retry(&obj->seq, seq))
++		goto retry;
++
+ 	rcu_read_unlock();
+ 	return ret;
+ }
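
The dma-resv rewrite above hoists the read_seqcount_retry() check out of the
two branches so a single retry covers both the shared-fence loop and the
exclusive fence. The underlying seqcount idiom -- read the counter, read the
data, retry if the counter moved or was odd -- looks roughly like this
user-space sketch (C11 atomics standing in for the kernel primitives; the
plain read of shared_data is formally racy and only illustrative):

    #include <stdatomic.h>

    static atomic_uint seq;         /* even: stable; odd: writer active */
    static int shared_data;         /* toy payload guarded by the counter */

    static void writer_update(int v)
    {
            atomic_fetch_add(&seq, 1);      /* odd: readers will retry */
            shared_data = v;
            atomic_fetch_add(&seq, 1);      /* even again: stable */
    }

    static int reader_snapshot(void)
    {
            unsigned int s;
            int v;

            for (;;) {
                    s = atomic_load(&seq);
                    if (s & 1)
                            continue;       /* writer active, try again */
                    v = shared_data;
                    if (atomic_load(&seq) == s)
                            return v;       /* no writer raced us */
            }
    }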
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 8070fd664bfc6..665ccbf2b8be8 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -433,7 +433,6 @@ struct sdma_channel {
+ 	unsigned long			watermark_level;
+ 	u32				shp_addr, per_addr;
+ 	enum dma_status			status;
+-	bool				context_loaded;
+ 	struct imx_dma_data		data;
+ 	struct work_struct		terminate_worker;
+ };
+@@ -1008,9 +1007,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 	int ret;
+ 	unsigned long flags;
+ 
+-	if (sdmac->context_loaded)
+-		return 0;
+-
+ 	if (sdmac->direction == DMA_DEV_TO_MEM)
+ 		load_address = sdmac->pc_from_device;
+ 	else if (sdmac->direction == DMA_DEV_TO_DEV)
+@@ -1053,8 +1049,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 
+ 	spin_unlock_irqrestore(&sdma->channel_0_lock, flags);
+ 
+-	sdmac->context_loaded = true;
+-
+ 	return ret;
+ }
+ 
+@@ -1093,7 +1087,6 @@ static void sdma_channel_terminate_work(struct work_struct *work)
+ 	vchan_get_all_descriptors(&sdmac->vc, &head);
+ 	spin_unlock_irqrestore(&sdmac->vc.lock, flags);
+ 	vchan_dma_desc_free_list(&sdmac->vc, &head);
+-	sdmac->context_loaded = false;
+ }
+ 
+ static int sdma_terminate_all(struct dma_chan *chan)
+@@ -1168,7 +1161,6 @@ static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac)
+ static int sdma_config_channel(struct dma_chan *chan)
+ {
+ 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+-	int ret;
+ 
+ 	sdma_disable_channel(chan);
+ 
+@@ -1208,9 +1200,7 @@ static int sdma_config_channel(struct dma_chan *chan)
+ 		sdmac->watermark_level = 0; /* FIXME: M3_BASE_ADDRESS */
+ 	}
+ 
+-	ret = sdma_load_context(sdmac);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int sdma_set_channel_priority(struct sdma_channel *sdmac,
+@@ -1361,7 +1351,6 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
+ 
+ 	sdmac->event_id0 = 0;
+ 	sdmac->event_id1 = 0;
+-	sdmac->context_loaded = false;
+ 
+ 	sdma_set_channel_priority(sdmac, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 8e5a7ac8c36fc..7a73167319116 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -522,6 +522,7 @@ uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,
+ 			break;
+ 		case CHIP_RENOIR:
+ 		case CHIP_VANGOGH:
++		case CHIP_YELLOW_CARP:
+ 			domain |= AMDGPU_GEM_DOMAIN_GTT;
+ 			break;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 854fc497844b8..9a67746c10edd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -341,21 +341,18 @@ retry:
+ 	r = amdgpu_gem_object_create(adev, size, args->in.alignment,
+ 				     initial_domain,
+ 				     flags, ttm_bo_type_device, resv, &gobj);
+-	if (r) {
+-		if (r != -ERESTARTSYS) {
+-			if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
+-				flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+-				goto retry;
+-			}
++	if (r && r != -ERESTARTSYS) {
++		if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
++			flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
++			goto retry;
++		}
+ 
+-			if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
+-				initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
+-				goto retry;
+-			}
+-			DRM_DEBUG("Failed to allocate GEM object (%llu, %d, %llu, %d)\n",
+-				  size, initial_domain, args->in.alignment, r);
++		if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
++			initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
++			goto retry;
+ 		}
+-		return r;
++		DRM_DEBUG("Failed to allocate GEM object (%llu, %d, %llu, %d)\n",
++				size, initial_domain, args->in.alignment, r);
+ 	}
+ 
+ 	if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+index bca4dddd5a15b..82608df433964 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+@@ -339,7 +339,7 @@ static void amdgpu_i2c_put_byte(struct amdgpu_i2c_chan *i2c_bus,
+ void
+ amdgpu_i2c_router_select_ddc_port(const struct amdgpu_connector *amdgpu_connector)
+ {
+-	u8 val;
++	u8 val = 0;
+ 
+ 	if (!amdgpu_connector->router.ddc_valid)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 92c8e6e7f346b..def812f6231aa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -196,7 +196,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ 		c++;
+ 	}
+ 
+-	BUG_ON(c >= AMDGPU_BO_MAX_PLACEMENTS);
++	BUG_ON(c > AMDGPU_BO_MAX_PLACEMENTS);
+ 
+ 	placement->num_placement = c;
+ 	placement->placement = places;
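
The one-character change above (>= becomes >) matters because the placement
loop post-increments c, so c legitimately equals AMDGPU_BO_MAX_PLACEMENTS
once every slot is used; only a value strictly greater than the limit means
the array overflowed. A tiny standalone illustration of the off-by-one:

    #include <assert.h>

    #define MAX_PLACEMENTS 3

    int main(void)
    {
            int places[MAX_PLACEMENTS];
            int c = 0;

            for (int i = 0; i < MAX_PLACEMENTS; i++)
                    places[c++] = i;        /* full array leaves c == 3 */

            /* asserting !(c >= MAX_PLACEMENTS) would fire on this legal
             * state; c > MAX_PLACEMENTS is the real overflow condition */
            assert(!(c > MAX_PLACEMENTS));
            return places[0];               /* 0 */
    }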
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index fc66aca285944..95d5842385b32 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1966,11 +1966,20 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
+ 	bool exc_err_limit = false;
+ 	int ret;
+ 
+-	if (adev->ras_enabled && con)
+-		data = &con->eh_data;
+-	else
++	if (!con)
++		return 0;
++
++	/* Allow access to RAS EEPROM via debugfs, when the ASIC
++	 * supports RAS and debugfs is enabled, but when
++	 * adev->ras_enabled is unset, i.e. when "ras_enable"
++	 * module parameter is set to 0.
++	 */
++	con->adev = adev;
++
++	if (!adev->ras_enabled)
+ 		return 0;
+ 
++	data = &con->eh_data;
+ 	*data = kmalloc(sizeof(**data), GFP_KERNEL | __GFP_ZERO);
+ 	if (!*data) {
+ 		ret = -ENOMEM;
+@@ -1980,7 +1989,6 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
+ 	mutex_init(&con->recovery_lock);
+ 	INIT_WORK(&con->recovery_work, amdgpu_ras_do_recovery);
+ 	atomic_set(&con->in_recovery, 0);
+-	con->adev = adev;
+ 
+ 	max_eeprom_records_len = amdgpu_ras_eeprom_get_record_max_length();
+ 	amdgpu_ras_validate_threshold(adev, max_eeprom_records_len);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index 38222de921d15..8dd151c9e4591 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -325,7 +325,7 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control,
+ 		return ret;
+ 	}
+ 
+-	__decode_table_header_from_buff(hdr, &buff[2]);
++	__decode_table_header_from_buff(hdr, buff);
+ 
+ 	if (hdr->header == EEPROM_TABLE_HDR_VAL) {
+ 		control->num_recs = (hdr->tbl_size - EEPROM_TABLE_HEADER_SIZE) /
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 284bb42d6c866..121ee9f2b8d16 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -119,7 +119,7 @@ static int vcn_v1_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index 8af567c546dbc..f4686e918e0d1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -122,7 +122,7 @@ static int vcn_v2_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index 888b17d84691c..e0c0c3734432e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -152,7 +152,7 @@ static int vcn_v2_5_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 47d4f04cbd69e..2f017560948eb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -160,7 +160,7 @@ static int vcn_v3_0_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 88813dad731fa..c021519af8106 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -98,36 +98,78 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 		uint32_t *se_mask)
+ {
+ 	struct kfd_cu_info cu_info;
+-	uint32_t cu_per_se[KFD_MAX_NUM_SE] = {0};
+-	int i, se, sh, cu = 0;
+-
++	uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0};
++	int i, se, sh, cu;
+ 	amdgpu_amdkfd_get_cu_info(mm->dev->kgd, &cu_info);
+ 
+ 	if (cu_mask_count > cu_info.cu_active_number)
+ 		cu_mask_count = cu_info.cu_active_number;
+ 
++	/* Exceeding these bounds corrupts the stack and indicates a coding error.
++	 * Returning with no CUs enabled will hang the queue, which should be
++	 * attention-grabbing.
++	 */
++	if (cu_info.num_shader_engines > KFD_MAX_NUM_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SE, chip reports %d\n", cu_info.num_shader_engines);
++		return;
++	}
++	if (cu_info.num_shader_arrays_per_engine > KFD_MAX_NUM_SH_PER_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SH, chip reports %d\n",
++			cu_info.num_shader_arrays_per_engine * cu_info.num_shader_engines);
++		return;
++	}
++	/* Count active CUs per SH.
++	 *
++	 * Some CUs in an SH may be disabled. HW expects disabled CUs to be
++	 * represented in the high bits of each SH's enable mask (the upper and lower
++	 * 16 bits of se_mask) and will take care of the actual distribution of
++	 * disabled CUs within each SH automatically.
++	 * Each half of se_mask must be filled only on bits 0-cu_per_sh[se][sh]-1.
++	 *
++	 * See note on Arcturus cu_bitmap layout in gfx_v9_0_get_cu_info.
++	 */
+ 	for (se = 0; se < cu_info.num_shader_engines; se++)
+ 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++)
+-			cu_per_se[se] += hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
+-
+-	/* Symmetrically map cu_mask to all SEs:
+-	 * cu_mask[0] bit0 -> se_mask[0] bit0;
+-	 * cu_mask[0] bit1 -> se_mask[1] bit0;
+-	 * ... (if # SE is 4)
+-	 * cu_mask[0] bit4 -> se_mask[0] bit1;
++			cu_per_sh[se][sh] = hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
++
++	/* Symmetrically map cu_mask to all SEs & SHs:
++	 * se_mask programs up to 2 SH in the upper and lower 16 bits.
++	 *
++	 * Examples
++	 * Assuming 1 SH/SE, 4 SEs:
++	 * cu_mask[0] bit0 -> se_mask[0] bit0
++	 * cu_mask[0] bit1 -> se_mask[1] bit0
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit1
++	 * ...
++	 *
++	 * Assuming 2 SH/SE, 4 SEs
++	 * cu_mask[0] bit0 -> se_mask[0] bit0 (SE0,SH0,CU0)
++	 * cu_mask[0] bit1 -> se_mask[1] bit0 (SE1,SH0,CU0)
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit16 (SE0,SH1,CU0)
++	 * cu_mask[0] bit5 -> se_mask[1] bit16 (SE1,SH1,CU0)
++	 * ...
++	 * cu_mask[0] bit8 -> se_mask[0] bit1 (SE0,SH0,CU1)
+ 	 * ...
++	 *
++	 * First ensure all CUs are disabled, then enable user specified CUs.
+ 	 */
+-	se = 0;
+-	for (i = 0; i < cu_mask_count; i++) {
+-		if (cu_mask[i / 32] & (1 << (i % 32)))
+-			se_mask[se] |= 1 << cu;
+-
+-		do {
+-			se++;
+-			if (se == cu_info.num_shader_engines) {
+-				se = 0;
+-				cu++;
++	for (i = 0; i < cu_info.num_shader_engines; i++)
++		se_mask[i] = 0;
++
++	i = 0;
++	for (cu = 0; cu < 16; cu++) {
++		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) {
++			for (se = 0; se < cu_info.num_shader_engines; se++) {
++				if (cu_per_sh[se][sh] > cu) {
++					if (cu_mask[i / 32] & (1 << (i % 32)))
++						se_mask[se] |= 1 << (cu + sh * 16);
++					i++;
++					if (i == cu_mask_count)
++						return;
++				}
+ 			}
+-		} while (cu >= cu_per_se[se] && cu < 32);
++		}
+ 	}
+ }
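
The rewritten mapping above walks CU index, then SH, then SE, so enabled bits
spread round-robin across engines first, and SH1's CUs land in the upper 16
bits of each se_mask word. A standalone sketch of that distribution for a
hypothetical 2-SE / 2-SH part with 4 CUs per SH (it prints 0x00010003 for
both engines, matching the comment's examples):

    #include <stdio.h>
    #include <stdint.h>

    #define NSE 2
    #define NSH 2

    int main(void)
    {
            uint32_t cu_per_sh[NSE][NSH] = { { 4, 4 }, { 4, 4 } };
            uint32_t cu_mask = 0x3f;        /* enable the first 6 CU bits */
            uint32_t se_mask[NSE] = { 0 };
            unsigned int i = 0;

            for (int cu = 0; cu < 16; cu++)
                    for (int sh = 0; sh < NSH; sh++)
                            for (int se = 0; se < NSE; se++)
                                    if (cu_per_sh[se][sh] > (uint32_t)cu) {
                                            if (cu_mask & (1u << i))
                                                    se_mask[se] |= 1u << (cu + sh * 16);
                                            i++;
                                    }

            for (int se = 0; se < NSE; se++)
                    printf("se_mask[%d] = 0x%08x\n", se, (unsigned)se_mask[se]);
            return 0;
    }

The real function additionally stops once i reaches cu_mask_count; that is
omitted here since the toy mask is a single word.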
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+index b5e2ea7550d41..6e6918ccedfdb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+@@ -27,6 +27,7 @@
+ #include "kfd_priv.h"
+ 
+ #define KFD_MAX_NUM_SE 8
++#define KFD_MAX_NUM_SH_PER_SE 2
+ 
+ /**
+  * struct mqd_manager
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index e883731c3f8ff..0f7f1e5621ea4 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -2426,7 +2426,8 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ 	}
+ 	if (!p->xnack_enabled) {
+ 		pr_debug("XNACK not enabled for pasid 0x%x\n", pasid);
+-		return -EFAULT;
++		r = -EFAULT;
++		goto out;
+ 	}
+ 	svms = &p->svms;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index afa96c8f721b7..3f913e4abd49e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1202,7 +1202,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	dc_hardware_init(adev->dm.dc);
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+-	if (adev->apu_flags) {
++	if ((adev->flags & AMD_IS_APU) && (adev->asic_type >= CHIP_CARRIZO)) {
+ 		struct dc_phy_addr_space_config pa_config;
+ 
+ 		mmhub_read_system_context(adev, &pa_config);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index f1145086a4688..1d15a9af99560 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -197,29 +197,29 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf,
+ 
+ 	rd_buf_ptr = rd_buf;
+ 
+-	str_len = strlen("Current:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Current:  %d  %d  %d  ",
++	str_len = strlen("Current:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Current:  %d  0x%x  %d  ",
+ 			link->cur_link_settings.lane_count,
+ 			link->cur_link_settings.link_rate,
+ 			link->cur_link_settings.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Verified:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Verified:  %d  %d  %d  ",
++	str_len = strlen("Verified:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Verified:  %d  0x%x  %d  ",
+ 			link->verified_link_cap.lane_count,
+ 			link->verified_link_cap.link_rate,
+ 			link->verified_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Reported:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Reported:  %d  %d  %d  ",
++	str_len = strlen("Reported:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Reported:  %d  0x%x  %d  ",
+ 			link->reported_link_cap.lane_count,
+ 			link->reported_link_cap.link_rate,
+ 			link->reported_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Preferred:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  %d  %d\n",
++	str_len = strlen("Preferred:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  0x%x  %d\n",
+ 			link->preferred_link_setting.lane_count,
+ 			link->preferred_link_setting.link_rate,
+ 			link->preferred_link_setting.link_spread);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+index 10d42ae0cffef..3428334c6c575 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+@@ -207,7 +207,7 @@ static void dmub_psr_set_level(struct dmub_psr *dmub, uint16_t psr_level, uint8_
+ 	cmd.psr_set_level.header.sub_type = DMUB_CMD__PSR_SET_LEVEL;
+ 	cmd.psr_set_level.header.payload_bytes = sizeof(struct dmub_cmd_psr_set_level_data);
+ 	cmd.psr_set_level.psr_set_level_data.psr_level = psr_level;
+-	cmd.psr_set_level.psr_set_level_data.cmd_version = PSR_VERSION_1;
++	cmd.psr_set_level.psr_set_level_data.cmd_version = DMUB_CMD_PSR_CONTROL_VERSION_1;
+ 	cmd.psr_set_level.psr_set_level_data.panel_inst = panel_inst;
+ 	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
+ 	dc_dmub_srv_cmd_execute(dc->dmub_srv);
+@@ -293,7 +293,7 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
+ 	copy_settings_data->debug.bitfields.use_hw_lock_mgr		= 1;
+ 	copy_settings_data->fec_enable_status = (link->fec_state == dc_link_fec_enabled);
+ 	copy_settings_data->fec_enable_delay_in100us = link->dc->debug.fec_enable_delay_in100us;
+-	copy_settings_data->cmd_version =  PSR_VERSION_1;
++	copy_settings_data->cmd_version =  DMUB_CMD_PSR_CONTROL_VERSION_1;
+ 	copy_settings_data->panel_inst = panel_inst;
+ 
+ 	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index c545eddabdcca..75fa4adcf5f40 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1502,25 +1502,22 @@ void dcn10_init_hw(struct dc *dc)
+ void dcn10_power_down_on_boot(struct dc *dc)
+ {
+ 	struct dc_link *edp_links[MAX_NUM_EDP];
+-	struct dc_link *edp_link;
++	struct dc_link *edp_link = NULL;
+ 	int edp_num;
+ 	int i = 0;
+ 
+ 	get_edp_links(dc, edp_links, &edp_num);
+-
+-	if (edp_num) {
+-		for (i = 0; i < edp_num; i++) {
+-			edp_link = edp_links[i];
+-			if (edp_link->link_enc->funcs->is_dig_enabled &&
+-					edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc) &&
+-					dc->hwseq->funcs.edp_backlight_control &&
+-					dc->hwss.power_down &&
+-					dc->hwss.edp_power_control) {
+-				dc->hwseq->funcs.edp_backlight_control(edp_link, false);
+-				dc->hwss.power_down(dc);
+-				dc->hwss.edp_power_control(edp_link, false);
+-			}
+-		}
++	if (edp_num)
++		edp_link = edp_links[0];
++
++	if (edp_link && edp_link->link_enc->funcs->is_dig_enabled &&
++			edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc) &&
++			dc->hwseq->funcs.edp_backlight_control &&
++			dc->hwss.power_down &&
++			dc->hwss.edp_power_control) {
++		dc->hwseq->funcs.edp_backlight_control(edp_link, false);
++		dc->hwss.power_down(dc);
++		dc->hwss.edp_power_control(edp_link, false);
+ 	} else {
+ 		for (i = 0; i < dc->link_count; i++) {
+ 			struct dc_link *link = dc->links[i];
+@@ -3631,13 +3628,12 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	struct dc_clock_config clock_cfg = {0};
+ 	struct dc_clocks *current_clocks = &context->bw_ctx.bw.dcn.clk;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->get_clock)
+-				dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
+-						context, clock_type, &clock_cfg);
+-
+-	if (!dc->clk_mgr->funcs->get_clock)
++	if (!dc->clk_mgr || !dc->clk_mgr->funcs->get_clock)
+ 		return DC_FAIL_UNSUPPORTED_1;
+ 
++	dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
++		context, clock_type, &clock_cfg);
++
+ 	if (clk_khz > clock_cfg.max_clock_khz)
+ 		return DC_FAIL_CLK_EXCEED_MAX;
+ 
+@@ -3655,7 +3651,7 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	else
+ 		return DC_ERROR_UNEXPECTED;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->update_clocks)
++	if (dc->clk_mgr->funcs->update_clocks)
+ 				dc->clk_mgr->funcs->update_clocks(dc->clk_mgr,
+ 				context, true);
+ 	return DC_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 5c2853654ccad..a47ba1d45be92 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1723,13 +1723,15 @@ void dcn20_program_front_end_for_ctx(
+ 
+ 				pipe = pipe->bottom_pipe;
+ 			}
+-			/* Program secondary blending tree and writeback pipes */
+-			pipe = &context->res_ctx.pipe_ctx[i];
+-			if (!pipe->prev_odm_pipe && pipe->stream->num_wb_info > 0
+-					&& (pipe->update_flags.raw || pipe->plane_state->update_flags.raw || pipe->stream->update_flags.raw)
+-					&& hws->funcs.program_all_writeback_pipes_in_tree)
+-				hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 		}
++		/* Program secondary blending tree and writeback pipes */
++		pipe = &context->res_ctx.pipe_ctx[i];
++		if (!pipe->top_pipe && !pipe->prev_odm_pipe
++				&& pipe->stream && pipe->stream->num_wb_info > 0
++				&& (pipe->update_flags.raw || (pipe->plane_state && pipe->plane_state->update_flags.raw)
++					|| pipe->stream->update_flags.raw)
++				&& hws->funcs.program_all_writeback_pipes_in_tree)
++			hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index b173fa3653b55..c78933a9d31c1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2462,7 +2462,7 @@ void dcn20_set_mcif_arb_params(
+ 				wb_arb_params->cli_watermark[k] = get_wm_writeback_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 				wb_arb_params->pstate_watermark[k] = get_wm_writeback_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 			}
+-			wb_arb_params->time_per_pixel = 16.0 / context->res_ctx.pipe_ctx[i].stream->phy_pix_clk; /* 4 bit fraction, ms */
++			wb_arb_params->time_per_pixel = 16.0 * 1000 / (context->res_ctx.pipe_ctx[i].stream->phy_pix_clk / 1000); /* 4 bit fraction, ms */
+ 			wb_arb_params->slice_lines = 32;
+ 			wb_arb_params->arbitration_slice = 2;
+ 			wb_arb_params->max_scaled_time = dcn20_calc_max_scaled_time(wb_arb_params->time_per_pixel,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+index 3fe9e41e4dbd7..6a3d3a0ec0a36 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+@@ -49,6 +49,11 @@
+ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	struct dcn3_xfer_func_reg *reg)
+ {
++	reg->shifts.field_region_start_base = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->masks.field_region_start_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
++	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
++
+ 	reg->shifts.exp_region0_lut_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->masks.exp_region0_lut_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->shifts.exp_region0_num_segments = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_NUM_SEGMENTS;
+@@ -66,8 +71,6 @@ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	reg->masks.field_region_end_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_END_BASE_B;
+ 	reg->shifts.field_region_linear_slope = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+ 	reg->masks.field_region_linear_slope = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+-	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
+-	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
+ 	reg->shifts.exp_region_start = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->masks.exp_region_start = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->shifts.exp_resion_start_segment = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SEGMENT_B;
+@@ -147,18 +150,19 @@ static enum dc_lut_mode dwb3_get_ogam_current(
+ 	uint32_t state_mode;
+ 	uint32_t ram_select;
+ 
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_MODE, &state_mode);
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, &ram_select);
++	REG_GET_2(DWB_OGAM_CONTROL,
++		DWB_OGAM_MODE_CURRENT, &state_mode,
++		DWB_OGAM_SELECT_CURRENT, &ram_select);
+ 
+ 	if (state_mode == 0) {
+ 		mode = LUT_BYPASS;
+ 	} else if (state_mode == 2) {
+ 		if (ram_select == 0)
+ 			mode = LUT_RAM_A;
+-		else
++		else if (ram_select == 1)
+ 			mode = LUT_RAM_B;
++		else
++			mode = LUT_BYPASS;
+ 	} else {
+ 		// Reserved value
+ 		mode = LUT_BYPASS;
+@@ -172,10 +176,10 @@ static void dwb3_configure_ogam_lut(
+ 	struct dcn30_dwbc *dwbc30,
+ 	bool is_ram_a)
+ {
+-	REG_UPDATE(DWB_OGAM_LUT_CONTROL,
+-		DWB_OGAM_LUT_READ_COLOR_SEL, 7);
+-	REG_UPDATE(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, is_ram_a == true ? 0 : 1);
++	REG_UPDATE_2(DWB_OGAM_LUT_CONTROL,
++		DWB_OGAM_LUT_WRITE_COLOR_MASK, 7,
++		DWB_OGAM_LUT_HOST_SEL, (is_ram_a == true) ? 0 : 1);
++
+ 	REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
+ }
+ 
+@@ -185,17 +189,45 @@ static void dwb3_program_ogam_pwl(struct dcn30_dwbc *dwbc30,
+ {
+ 	uint32_t i;
+ 
+-    // triple base implementation
+-	for (i = 0; i < num/2; i++) {
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].blue_reg);
++	uint32_t last_base_value_red = rgb[num-1].red_reg + rgb[num-1].delta_red_reg;
++	uint32_t last_base_value_green = rgb[num-1].green_reg + rgb[num-1].delta_green_reg;
++	uint32_t last_base_value_blue = rgb[num-1].blue_reg + rgb[num-1].delta_blue_reg;
++
++	if (is_rgb_equal(rgb,  num)) {
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++	} else {
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 4);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 2);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].green_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_green);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 1);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].blue_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_blue);
+ 	}
+ }
+ 
+@@ -211,6 +243,8 @@ static bool dwb3_program_ogam_lut(
+ 		return false;
+ 	}
+ 
++	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
++
+ 	current_mode = dwb3_get_ogam_current(dwbc30);
+ 	if (current_mode == LUT_BYPASS || current_mode == LUT_RAM_A)
+ 		next_mode = LUT_RAM_B;
+@@ -227,8 +261,7 @@ static bool dwb3_program_ogam_lut(
+ 	dwb3_program_ogam_pwl(
+ 		dwbc30, params->rgb_resulted, params->hw_points_num);
+ 
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
++	REG_UPDATE(DWB_OGAM_CONTROL, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
+ 
+ 	return true;
+ }
+@@ -271,14 +304,19 @@ static void dwb3_program_gamut_remap(
+ 
+ 	struct color_matrices_reg gam_regs;
+ 
+-	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
+-
+ 	if (regval == NULL || select == CM_GAMUT_REMAP_MODE_BYPASS) {
+ 		REG_SET(DWB_GAMUT_REMAP_MODE, 0,
+ 				DWB_GAMUT_REMAP_MODE, 0);
+ 		return;
+ 	}
+ 
++	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
++
++	gam_regs.shifts.csc_c11 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C11;
++	gam_regs.masks.csc_c11  = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C11;
++	gam_regs.shifts.csc_c12 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C12;
++	gam_regs.masks.csc_c12 = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C12;
++
+ 	switch (select) {
+ 	case CM_GAMUT_REMAP_MODE_RAMA_COEFF:
+ 		gam_regs.csc_c11_c12 = REG(DWB_GAMUT_REMAPA_C11_C12);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index c68e3a708a335..fafed1e4a998d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -398,12 +398,22 @@ void dcn30_program_all_writeback_pipes_in_tree(
+ 			for (i_pipe = 0; i_pipe < dc->res_pool->pipe_count; i_pipe++) {
+ 				struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i_pipe];
+ 
++				if (!pipe_ctx->plane_state)
++					continue;
++
+ 				if (pipe_ctx->plane_state == wb_info.writeback_source_plane) {
+ 					wb_info.mpcc_inst = pipe_ctx->plane_res.mpcc_inst;
+ 					break;
+ 				}
+ 			}
+-			ASSERT(wb_info.mpcc_inst != -1);
++
++			if (wb_info.mpcc_inst == -1) {
++				/* Disable writeback pipe and disconnect from MPCC
++				 * if source plane has been removed
++				 */
++				dc->hwss.disable_writeback(dc, wb_info.dwb_pipe_inst);
++				continue;
++			}
+ 
+ 			ASSERT(wb_info.dwb_pipe_inst < dc->res_pool->res_cap->num_dwb);
+ 			dwb = dc->res_pool->dwbc[wb_info.dwb_pipe_inst];
+@@ -580,22 +590,19 @@ void dcn30_init_hw(struct dc *dc)
+ 	 */
+ 	if (dc->config.power_down_display_on_boot) {
+ 		struct dc_link *edp_links[MAX_NUM_EDP];
+-		struct dc_link *edp_link;
++		struct dc_link *edp_link = NULL;
+ 
+ 		get_edp_links(dc, edp_links, &edp_num);
+-		if (edp_num) {
+-			for (i = 0; i < edp_num; i++) {
+-				edp_link = edp_links[i];
+-				if (edp_link->link_enc->funcs->is_dig_enabled &&
+-						edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc) &&
+-						dc->hwss.edp_backlight_control &&
+-						dc->hwss.power_down &&
+-						dc->hwss.edp_power_control) {
+-					dc->hwss.edp_backlight_control(edp_link, false);
+-					dc->hwss.power_down(dc);
+-					dc->hwss.edp_power_control(edp_link, false);
+-				}
+-			}
++		if (edp_num)
++			edp_link = edp_links[0];
++		if (edp_link && edp_link->link_enc->funcs->is_dig_enabled &&
++				edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc) &&
++				dc->hwss.edp_backlight_control &&
++				dc->hwss.power_down &&
++				dc->hwss.edp_power_control) {
++			dc->hwss.edp_backlight_control(edp_link, false);
++			dc->hwss.power_down(dc);
++			dc->hwss.edp_power_control(edp_link, false);
+ 		} else {
+ 			for (i = 0; i < dc->link_count; i++) {
+ 				struct dc_link *link = dc->links[i];
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 28e15ebf2f431..23a246b62d5d7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -2398,16 +2398,37 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 	dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+ 
+ 	if (bw_params->clk_table.entries[0].memclk_mhz) {
++		int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0, max_phyclk_mhz = 0;
++
++		for (i = 0; i < MAX_NUM_DPM_LVL; i++) {
++			if (bw_params->clk_table.entries[i].dcfclk_mhz > max_dcfclk_mhz)
++				max_dcfclk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
++			if (bw_params->clk_table.entries[i].dispclk_mhz > max_dispclk_mhz)
++				max_dispclk_mhz = bw_params->clk_table.entries[i].dispclk_mhz;
++			if (bw_params->clk_table.entries[i].dppclk_mhz > max_dppclk_mhz)
++				max_dppclk_mhz = bw_params->clk_table.entries[i].dppclk_mhz;
++			if (bw_params->clk_table.entries[i].phyclk_mhz > max_phyclk_mhz)
++				max_phyclk_mhz = bw_params->clk_table.entries[i].phyclk_mhz;
++		}
++
++		if (!max_dcfclk_mhz)
++			max_dcfclk_mhz = dcn3_0_soc.clock_limits[0].dcfclk_mhz;
++		if (!max_dispclk_mhz)
++			max_dispclk_mhz = dcn3_0_soc.clock_limits[0].dispclk_mhz;
++		if (!max_dppclk_mhz)
++			max_dppclk_mhz = dcn3_0_soc.clock_limits[0].dppclk_mhz;
++		if (!max_phyclk_mhz)
++			max_phyclk_mhz = dcn3_0_soc.clock_limits[0].phyclk_mhz;
+ 
+-		if (bw_params->clk_table.entries[1].dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		if (max_dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is greater than the max DCFCLK STA target, insert into the DCFCLK STA target array
+-			dcfclk_sta_targets[num_dcfclk_sta_targets] = bw_params->clk_table.entries[1].dcfclk_mhz;
++			dcfclk_sta_targets[num_dcfclk_sta_targets] = max_dcfclk_mhz;
+ 			num_dcfclk_sta_targets++;
+-		} else if (bw_params->clk_table.entries[1].dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		} else if (max_dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is less than the max DCFCLK STA target, cap values and remove duplicates
+ 			for (i = 0; i < num_dcfclk_sta_targets; i++) {
+-				if (dcfclk_sta_targets[i] > bw_params->clk_table.entries[1].dcfclk_mhz) {
+-					dcfclk_sta_targets[i] = bw_params->clk_table.entries[1].dcfclk_mhz;
++				if (dcfclk_sta_targets[i] > max_dcfclk_mhz) {
++					dcfclk_sta_targets[i] = max_dcfclk_mhz;
+ 					break;
+ 				}
+ 			}
+@@ -2447,7 +2468,7 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 				dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+ 				dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+ 			} else {
+-				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 					dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 					dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 				} else {
+@@ -2462,11 +2483,12 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 		}
+ 
+ 		while (j < num_uclk_states && num_states < DC__VOLTAGE_STATES &&
+-				optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 			dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		dcn3_0_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_0_soc.num_states; i++) {
+ 			dcn3_0_soc.clock_limits[i].state = i;
+ 			dcn3_0_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
+@@ -2474,9 +2496,9 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 			dcn3_0_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+ 
+ 			/* Fill all states with max values of all other clocks */
+-			dcn3_0_soc.clock_limits[i].dispclk_mhz = bw_params->clk_table.entries[1].dispclk_mhz;
+-			dcn3_0_soc.clock_limits[i].dppclk_mhz  = bw_params->clk_table.entries[1].dppclk_mhz;
+-			dcn3_0_soc.clock_limits[i].phyclk_mhz  = bw_params->clk_table.entries[1].phyclk_mhz;
++			dcn3_0_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
++			dcn3_0_soc.clock_limits[i].dppclk_mhz  = max_dppclk_mhz;
++			dcn3_0_soc.clock_limits[i].phyclk_mhz  = max_phyclk_mhz;
+ 			dcn3_0_soc.clock_limits[i].dtbclk_mhz = dcn3_0_soc.clock_limits[0].dtbclk_mhz;
+ 			/* These clocks cannot come from bw_params, always fill from dcn3_0_soc[1] */
+ 			/* FCLK, PHYCLK_D18, SOCCLK, DSCCLK */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+index 8a2119d8ca0de..8189606537c5a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+@@ -226,6 +226,7 @@ void dcn31_init_hw(struct dc *dc)
+ 	if (dc->config.power_down_display_on_boot) {
+ 		struct dc_link *edp_links[MAX_NUM_EDP];
+ 		struct dc_link *edp_link;
++		bool power_down = false;
+ 
+ 		get_edp_links(dc, edp_links, &edp_num);
+ 		if (edp_num) {
+@@ -239,9 +240,11 @@ void dcn31_init_hw(struct dc *dc)
+ 					dc->hwss.edp_backlight_control(edp_link, false);
+ 					dc->hwss.power_down(dc);
+ 					dc->hwss.edp_power_control(edp_link, false);
++					power_down = true;
+ 				}
+ 			}
+-		} else {
++		}
++		if (!power_down) {
+ 			for (i = 0; i < dc->link_count; i++) {
+ 				struct dc_link *link = dc->links[i];
+ 
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 911f9f4147741..39ca338eb80b3 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -337,6 +337,11 @@ int ast_mode_config_init(struct ast_private *ast);
+ #define AST_DP501_LINKRATE	0xf014
+ #define AST_DP501_EDID_DATA	0xf020
+ 
++/* Define for SoC scratch reg */
++#define AST_VRAM_INIT_STATUS_MASK	GENMASK(7, 6)
++//#define AST_VRAM_INIT_BY_BMC		BIT(7)
++//#define AST_VRAM_INIT_READY		BIT(6)
++
+ int ast_mm_init(struct ast_private *ast);
+ 
+ /* ast post */
+@@ -346,6 +351,7 @@ bool ast_is_vga_enabled(struct drm_device *dev);
+ void ast_post_gpu(struct drm_device *dev);
+ u32 ast_mindwm(struct ast_private *ast, u32 r);
+ void ast_moutdwm(struct ast_private *ast, u32 r, u32 v);
++void ast_patch_ahb_2500(struct ast_private *ast);
+ /* ast dp501 */
+ void ast_set_dp501_video_output(struct drm_device *dev, u8 mode);
+ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size);
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 2aff2e6cf450c..79a3618679554 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -97,6 +97,11 @@ static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
+ 	jregd0 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
+ 	jregd1 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
+ 	if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
++		/* Patch AST2500 */
++		if (((pdev->revision & 0xF0) == 0x40)
++			&& ((jregd0 & AST_VRAM_INIT_STATUS_MASK) == 0))
++			ast_patch_ahb_2500(ast);
++
+ 		/* Double check it's actually working */
+ 		data = ast_read32(ast, 0xf004);
+ 		if ((data != 0xFFFFFFFF) && (data != 0x00)) {
+diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c
+index 0607658dde51b..b5d92f652fd85 100644
+--- a/drivers/gpu/drm/ast/ast_post.c
++++ b/drivers/gpu/drm/ast/ast_post.c
+@@ -2028,6 +2028,40 @@ static bool ast_dram_init_2500(struct ast_private *ast)
+ 	return true;
+ }
+ 
++void ast_patch_ahb_2500(struct ast_private *ast)
++{
++	u32	data;
++
++	/* Clear bus lock condition */
++	ast_moutdwm(ast, 0x1e600000, 0xAEED1A03);
++	ast_moutdwm(ast, 0x1e600084, 0x00010000);
++	ast_moutdwm(ast, 0x1e600088, 0x00000000);
++	ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
++	data = ast_mindwm(ast, 0x1e6e2070);
++	if (data & 0x08000000) {					/* check fast reset */
++		/*
++		 * If "Fast restet" is enabled for ARM-ICE debugger,
++		 * then WDT needs to enable, that
++		 * WDT04 is WDT#1 Reload reg.
++		 * WDT08 is WDT#1 counter restart reg to avoid system deadlock
++		 * WDT0C is WDT#1 control reg
++		 *	[6:5]:= 01:Full chip
++		 *	[4]:= 1:1MHz clock source
++		 *	[1]:= 1:WDT will be cleared and disabled after timeout occurs
++		 *	[0]:= 1:WDT enable
++		 */
++		ast_moutdwm(ast, 0x1E785004, 0x00000010);
++		ast_moutdwm(ast, 0x1E785008, 0x00004755);
++		ast_moutdwm(ast, 0x1E78500c, 0x00000033);
++		udelay(1000);
++	}
++	do {
++		ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
++		data = ast_mindwm(ast, 0x1e6e2000);
++	} while (data != 1);
++	ast_moutdwm(ast, 0x1e6e207c, 0x08000000);	/* clear fast reset */
++}
++
+ void ast_post_chip_2500(struct drm_device *dev)
+ {
+ 	struct ast_private *ast = to_ast_private(dev);
+@@ -2035,39 +2069,44 @@ void ast_post_chip_2500(struct drm_device *dev)
+ 	u8 reg;
+ 
+ 	reg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
+-	if ((reg & 0x80) == 0) {/* vga only */
++	if ((reg & AST_VRAM_INIT_STATUS_MASK) == 0) {/* vga only */
+ 		/* Clear bus lock condition */
+-		ast_moutdwm(ast, 0x1e600000, 0xAEED1A03);
+-		ast_moutdwm(ast, 0x1e600084, 0x00010000);
+-		ast_moutdwm(ast, 0x1e600088, 0x00000000);
+-		ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
+-		ast_write32(ast, 0xf004, 0x1e6e0000);
+-		ast_write32(ast, 0xf000, 0x1);
+-		ast_write32(ast, 0x12000, 0x1688a8a8);
+-		while (ast_read32(ast, 0x12000) != 0x1)
+-			;
+-
+-		ast_write32(ast, 0x10000, 0xfc600309);
+-		while (ast_read32(ast, 0x10000) != 0x1)
+-			;
++		ast_patch_ahb_2500(ast);
++
++		/* Disable watchdog */
++		ast_moutdwm(ast, 0x1E78502C, 0x00000000);
++		ast_moutdwm(ast, 0x1E78504C, 0x00000000);
++
++		/*
++		 * Reset USB port to patch USB unknown device issue
++		 * SCU90 is Multi-function Pin Control #5
++	 *	[29]:= 1:Enable USB2.0 Host port#1 (the mutually shared USB2.0 Hub
++		 *				port).
++		 * SCU94 is Multi-function Pin Control #6
++		 *	[14:13]:= 1x:USB2.0 Host2 controller
++		 * SCU70 is Hardware Strap reg
++		 *	[23]:= 1:CLKIN is 25MHz and USBCK1 = 24/48 MHz (determined by
++		 *				[18]: 0(24)/1(48) MHz)
++		 * SCU7C is Write clear reg to SCU70
++	 *	[23]:= write 1 and then SCU70[23] will be cleared to 0b.
++		 */
++		ast_moutdwm(ast, 0x1E6E2090, 0x20000000);
++		ast_moutdwm(ast, 0x1E6E2094, 0x00004000);
++		if (ast_mindwm(ast, 0x1E6E2070) & 0x00800000) {
++			ast_moutdwm(ast, 0x1E6E207C, 0x00800000);
++			mdelay(100);
++			ast_moutdwm(ast, 0x1E6E2070, 0x00800000);
++		}
++		/* Modify eSPI reset pin */
++		temp = ast_mindwm(ast, 0x1E6E2070);
++		if (temp & 0x02000000)
++			ast_moutdwm(ast, 0x1E6E207C, 0x00004000);
+ 
+ 		/* Slow down CPU/AHB CLK in VGA only mode */
+ 		temp = ast_read32(ast, 0x12008);
+ 		temp |= 0x73;
+ 		ast_write32(ast, 0x12008, temp);
+ 
+-		/* Reset USB port to patch USB unknown device issue */
+-		ast_moutdwm(ast, 0x1e6e2090, 0x20000000);
+-		temp  = ast_mindwm(ast, 0x1e6e2094);
+-		temp |= 0x00004000;
+-		ast_moutdwm(ast, 0x1e6e2094, temp);
+-		temp  = ast_mindwm(ast, 0x1e6e2070);
+-		if (temp & 0x00800000) {
+-			ast_moutdwm(ast, 0x1e6e207c, 0x00800000);
+-			mdelay(100);
+-			ast_moutdwm(ast, 0x1e6e2070, 0x00800000);
+-		}
+-
+ 		if (!ast_dram_init_2500(ast))
+ 			drm_err(dev, "DRAM init failed !\n");
+ 
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 873995f0a7416..6002404ffcb9d 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -196,7 +196,7 @@ static u32 ps2bc(struct nwl_dsi *dsi, unsigned long long ps)
+ 	u32 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+ 
+ 	return DIV64_U64_ROUND_UP(ps * dsi->mode.clock * bpp,
+-				  dsi->lanes * 8 * NSEC_PER_SEC);
++				  dsi->lanes * 8ULL * NSEC_PER_SEC);
+ }
+ 
+ /*
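
The one-token fix above (8 becomes 8ULL) forces dsi->lanes * 8 * NSEC_PER_SEC
to be evaluated in 64-bit arithmetic; with only 32-bit operands the product
wraps long before DIV64_U64_ROUND_UP sees it. A minimal demonstration,
assuming the usual 32-bit unsigned int:

    #include <stdio.h>

    int main(void)
    {
            unsigned int lanes = 4;

            /* 32-bit math: 32e9 wraps modulo 2^32 to 1935228928 */
            unsigned long long wrapped = lanes * 8 * 1000000000u;
            /* 8ULL widens the whole expression: 32000000000 */
            unsigned long long correct = lanes * 8ULL * 1000000000u;

            printf("wrapped=%llu correct=%llu\n", wrapped, correct);
            return 0;
    }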
+diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
+index b59b26a71ad5d..3a298df00901d 100644
+--- a/drivers/gpu/drm/drm_auth.c
++++ b/drivers/gpu/drm/drm_auth.c
+@@ -135,16 +135,18 @@ static void drm_set_master(struct drm_device *dev, struct drm_file *fpriv,
+ static int drm_new_set_master(struct drm_device *dev, struct drm_file *fpriv)
+ {
+ 	struct drm_master *old_master;
++	struct drm_master *new_master;
+ 
+ 	lockdep_assert_held_once(&dev->master_mutex);
+ 
+ 	WARN_ON(fpriv->is_master);
+ 	old_master = fpriv->master;
+-	fpriv->master = drm_master_create(dev);
+-	if (!fpriv->master) {
+-		fpriv->master = old_master;
++	new_master = drm_master_create(dev);
++	if (!new_master)
+ 		return -ENOMEM;
+-	}
++	spin_lock(&fpriv->master_lookup_lock);
++	fpriv->master = new_master;
++	spin_unlock(&fpriv->master_lookup_lock);
+ 
+ 	fpriv->is_master = 1;
+ 	fpriv->authenticated = 1;
+@@ -303,10 +305,13 @@ int drm_master_open(struct drm_file *file_priv)
+ 	 * any master object for render clients
+ 	 */
+ 	mutex_lock(&dev->master_mutex);
+-	if (!dev->master)
++	if (!dev->master) {
+ 		ret = drm_new_set_master(dev, file_priv);
+-	else
++	} else {
++		spin_lock(&file_priv->master_lookup_lock);
+ 		file_priv->master = drm_master_get(dev->master);
++		spin_unlock(&file_priv->master_lookup_lock);
++	}
+ 	mutex_unlock(&dev->master_mutex);
+ 
+ 	return ret;
+@@ -372,6 +377,31 @@ struct drm_master *drm_master_get(struct drm_master *master)
+ }
+ EXPORT_SYMBOL(drm_master_get);
+ 
++/**
++ * drm_file_get_master - reference &drm_file.master of @file_priv
++ * @file_priv: DRM file private
++ *
++ * Increments the reference count of @file_priv's &drm_file.master and returns
++ * the &drm_file.master. If @file_priv has no &drm_file.master, returns NULL.
++ *
++ * Master pointers returned from this function should be unreferenced using
++ * drm_master_put().
++ */
++struct drm_master *drm_file_get_master(struct drm_file *file_priv)
++{
++	struct drm_master *master = NULL;
++
++	spin_lock(&file_priv->master_lookup_lock);
++	if (!file_priv->master)
++		goto unlock;
++	master = drm_master_get(file_priv->master);
++
++unlock:
++	spin_unlock(&file_priv->master_lookup_lock);
++	return master;
++}
++EXPORT_SYMBOL(drm_file_get_master);
++
+ static void drm_master_destroy(struct kref *kref)
+ {
+ 	struct drm_master *master = container_of(kref, struct drm_master, refcount);
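
The new drm_file_get_master() takes master_lookup_lock around the lookup and
returns a counted reference, so callers no longer dereference
file_priv->master while drm_new_set_master() may be swapping it underneath
them. A sketch of the calling convention (drm_file_get_master() and
drm_master_put() are from this patch; the surrounding function is
hypothetical):

    static int example_query(struct drm_file *file_priv)
    {
            struct drm_master *master = drm_file_get_master(file_priv);

            if (!master)
                    return -ENOENT;         /* file has no master */

            /* ... master->lessee_id etc. are safe behind the reference ... */

            drm_master_put(&master);        /* drop the reference */
            return 0;
    }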
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 3d7182001004d..b0a8264894885 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -91,6 +91,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 	mutex_lock(&dev->filelist_mutex);
+ 	list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
+ 		struct task_struct *task;
++		bool is_current_master = drm_is_current_master(priv);
+ 
+ 		rcu_read_lock(); /* locks pid_task()->comm */
+ 		task = pid_task(priv->pid, PIDTYPE_PID);
+@@ -99,7 +100,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 			   task ? task->comm : "<unknown>",
+ 			   pid_vnr(priv->pid),
+ 			   priv->minor->index,
+-			   drm_is_current_master(priv) ? 'y' : 'n',
++			   is_current_master ? 'y' : 'n',
+ 			   priv->authenticated ? 'y' : 'n',
+ 			   from_kuid_munged(seq_user_ns(m), uid),
+ 			   priv->magic);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index ad0795afc21cf..86d13d6bc4631 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2872,11 +2872,13 @@ static int process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,
+ 	idx += tosend + 1;
+ 
+ 	ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx);
+-	if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)) {
+-		struct drm_printer p = drm_debug_printer(DBG_PREFIX);
++	if (ret) {
++		if (drm_debug_enabled(DRM_UT_DP)) {
++			struct drm_printer p = drm_debug_printer(DBG_PREFIX);
+ 
+-		drm_printf(&p, "sideband msg failed to send\n");
+-		drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++			drm_printf(&p, "sideband msg failed to send\n");
++			drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++		}
+ 		return ret;
+ 	}
+ 
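
The drm_dp_mst change above untangles error propagation from the logging
predicate: previously the return sat inside
if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)), so a send failure was
reported as success whenever debugging was off. The general shape of the bug
and its fix, as a compilable sketch with stand-in helpers:

    #include <stdbool.h>
    #include <stdio.h>

    static bool debug_enabled(void) { return false; }   /* stand-in */

    /* buggy: the error escapes only when logging is enabled */
    static int send_buggy(int ret)
    {
            if (ret && debug_enabled()) {
                    fprintf(stderr, "send failed: %d\n", ret);
                    return ret;
            }
            return 0;       /* failure silently becomes success */
    }

    /* fixed: always propagate; log only when asked to */
    static int send_fixed(int ret)
    {
            if (ret) {
                    if (debug_enabled())
                            fprintf(stderr, "send failed: %d\n", ret);
                    return ret;
            }
            return 0;
    }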
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index d4f0bac6f8f8a..ceb1a9723855f 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -176,6 +176,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ 	init_waitqueue_head(&file->event_wait);
+ 	file->event_space = 4096; /* set aside 4k for event buffer */
+ 
++	spin_lock_init(&file->master_lookup_lock);
+ 	mutex_init(&file->event_read_lock);
+ 
+ 	if (drm_core_check_feature(dev, DRIVER_GEM))
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index 00fb433bcef1a..92eac73d9001f 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -106,10 +106,19 @@ static bool _drm_has_leased(struct drm_master *master, int id)
+  */
+ bool _drm_lease_held(struct drm_file *file_priv, int id)
+ {
+-	if (!file_priv || !file_priv->master)
++	bool ret;
++	struct drm_master *master;
++
++	if (!file_priv)
+ 		return true;
+ 
+-	return _drm_lease_held_master(file_priv->master, id);
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	ret = _drm_lease_held_master(master, id);
++	drm_master_put(&master);
++
++	return ret;
+ }
+ 
+ /**
+@@ -128,13 +137,22 @@ bool drm_lease_held(struct drm_file *file_priv, int id)
+ 	struct drm_master *master;
+ 	bool ret;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return true;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	if (!master->lessor) {
++		ret = true;
++		goto out;
++	}
+ 	mutex_lock(&master->dev->mode_config.idr_mutex);
+ 	ret = _drm_lease_held_master(master, id);
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return ret;
+ }
+ 
+@@ -154,10 +172,16 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 	int count_in, count_out;
+ 	uint32_t crtcs_out = 0;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return crtcs_in;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return crtcs_in;
++	if (!master->lessor) {
++		crtcs_out = crtcs_in;
++		goto out;
++	}
+ 	dev = master->dev;
+ 
+ 	count_in = count_out = 0;
+@@ -176,6 +200,9 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 		count_in++;
+ 	}
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return crtcs_out;
+ }
+ 
+@@ -489,7 +516,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	size_t object_count;
+ 	int ret = 0;
+ 	struct idr leases;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee = NULL;
+ 	struct file *lessee_file = NULL;
+ 	struct file *lessor_file = lessor_priv->filp;
+@@ -501,12 +528,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
+-	/* Do not allow sub-leases */
+-	if (lessor->lessor) {
+-		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* need some objects */
+ 	if (cl->object_count == 0) {
+ 		DRM_DEBUG_LEASE("no objects in lease\n");
+@@ -518,12 +539,22 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 		return -EINVAL;
+ 	}
+ 
++	lessor = drm_file_get_master(lessor_priv);
++	/* Do not allow sub-leases */
++	if (lessor->lessor) {
++		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
++		ret = -EINVAL;
++		goto out_lessor;
++	}
++
+ 	object_count = cl->object_count;
+ 
+ 	object_ids = memdup_user(u64_to_user_ptr(cl->object_ids),
+ 			array_size(object_count, sizeof(__u32)));
+-	if (IS_ERR(object_ids))
+-		return PTR_ERR(object_ids);
++	if (IS_ERR(object_ids)) {
++		ret = PTR_ERR(object_ids);
++		goto out_lessor;
++	}
+ 
+ 	idr_init(&leases);
+ 
+@@ -534,14 +565,15 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (ret) {
+ 		DRM_DEBUG_LEASE("lease object lookup failed: %i\n", ret);
+ 		idr_destroy(&leases);
+-		return ret;
++		goto out_lessor;
+ 	}
+ 
+ 	/* Allocate a file descriptor for the lease */
+ 	fd = get_unused_fd_flags(cl->flags & (O_CLOEXEC | O_NONBLOCK));
+ 	if (fd < 0) {
+ 		idr_destroy(&leases);
+-		return fd;
++		ret = fd;
++		goto out_lessor;
+ 	}
+ 
+ 	DRM_DEBUG_LEASE("Creating lease\n");
+@@ -577,6 +609,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	/* Hook up the fd */
+ 	fd_install(fd, lessee_file);
+ 
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+@@ -586,6 +619,8 @@ out_lessee:
+ out_leases:
+ 	put_unused_fd(fd);
+ 
++out_lessor:
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret);
+ 	return ret;
+ }
+@@ -608,7 +643,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	struct drm_mode_list_lessees *arg = data;
+ 	__u32 __user *lessee_ids = (__u32 __user *) (uintptr_t) (arg->lessees_ptr);
+ 	__u32 count_lessees = arg->count_lessees;
+-	struct drm_master *lessor = lessor_priv->master, *lessee;
++	struct drm_master *lessor, *lessee;
+ 	int count;
+ 	int ret = 0;
+ 
+@@ -619,6 +654,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	DRM_DEBUG_LEASE("List lessees for %d\n", lessor->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -642,6 +678,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 		arg->count_lessees = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
+@@ -661,7 +698,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	struct drm_mode_get_lease *arg = data;
+ 	__u32 __user *object_ids = (__u32 __user *) (uintptr_t) (arg->objects_ptr);
+ 	__u32 count_objects = arg->count_objects;
+-	struct drm_master *lessee = lessee_priv->master;
++	struct drm_master *lessee;
+ 	struct idr *object_idr;
+ 	int count;
+ 	void *entry;
+@@ -675,6 +712,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessee = drm_file_get_master(lessee_priv);
+ 	DRM_DEBUG_LEASE("get lease for %d\n", lessee->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -702,6 +740,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 		arg->count_objects = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessee);
+ 
+ 	return ret;
+ }
+@@ -720,7 +759,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 				void *data, struct drm_file *lessor_priv)
+ {
+ 	struct drm_mode_revoke_lease *arg = data;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee;
+ 	int ret = 0;
+ 
+@@ -730,6 +769,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+ 
+ 	lessee = _drm_find_lessee(lessor, arg->lessee_id);
+@@ -750,6 +790,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 
+ fail:
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
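
Aside: the drm_lease changes above all follow one discipline -- take a
reference with drm_file_get_master(), use the master, then drm_master_put()
on every exit path. A minimal standalone sketch of that get/use/put shape,
using hypothetical toy_* names rather than the real DRM API:

    #include <stdbool.h>
    #include <stdlib.h>

    struct toy_master {
        int refcount;              /* stands in for struct kref */
        bool lessor;
    };

    struct toy_file {
        struct toy_master *master;
    };

    /* models drm_file_get_master(): take a reference before use */
    static struct toy_master *toy_get_master(struct toy_file *f)
    {
        if (!f->master)
            return NULL;
        f->master->refcount++;
        return f->master;
    }

    /* models drm_master_put(): drop the reference, clear the pointer */
    static void toy_master_put(struct toy_master **m)
    {
        if (--(*m)->refcount == 0)
            free(*m);
        *m = NULL;
    }

    /* every path between get and put must reach put, as in the patch */
    static bool toy_lease_held(struct toy_file *f)
    {
        struct toy_master *m = toy_get_master(f);
        bool ret;

        if (!m)
            return true;
        ret = !m->lessor;          /* placeholder for the real lookup */
        toy_master_put(&m);
        return ret;
    }
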
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+index 0644936afee26..bf33c3084cb41 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+@@ -115,6 +115,8 @@ int exynos_drm_register_dma(struct drm_device *drm, struct device *dev,
+ 				EXYNOS_DEV_ADDR_START, EXYNOS_DEV_ADDR_SIZE);
+ 		else if (IS_ENABLED(CONFIG_IOMMU_DMA))
+ 			mapping = iommu_get_domain_for_dev(priv->dma_dev);
++		else
++			mapping = ERR_PTR(-ENODEV);
+ 
+ 		if (IS_ERR(mapping))
+ 			return PTR_ERR(mapping);
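
Aside: the exynos hunk closes an uninitialized-variable hole -- when neither
IOMMU option is enabled, mapping now carries ERR_PTR(-ENODEV) so the IS_ERR()
check below it fires. A sketch of the error-pointer idiom itself (a userspace
re-implementation for illustration, not the kernel's err.h):

    #include <errno.h>
    #include <stdint.h>

    #define MAX_ERRNO 4095

    static inline void *err_ptr(long err)      { return (void *)err; }
    static inline long  ptr_err(const void *p) { return (long)p; }
    static inline int   is_err(const void *p)
    {
        return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
    }

    /* with the fix, every branch assigns the pointer, so is_err()
     * is a reliable test; the old code left one branch undefined */
    static void *get_mapping(int have_iommu, void *domain)
    {
        void *mapping;

        if (have_iommu)
            mapping = domain;            /* real lookup in the driver */
        else
            mapping = err_ptr(-ENODEV);  /* the added else branch */

        return mapping;
    }
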
+diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
+index 749a075fe9e4c..d1b51c133e27a 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
++++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
+@@ -43,6 +43,22 @@
+ #define ATTR_INDEX 0x1fc0
+ #define ATTR_DATA 0x1fc1
+ 
++#define WREG_MISC(v)						\
++	WREG8(MGA_MISC_OUT, v)
++
++#define RREG_MISC(v)						\
++	((v) = RREG8(MGA_MISC_IN))
++
++#define WREG_MISC_MASKED(v, mask)				\
++	do {							\
++		u8 misc_;					\
++		u8 mask_ = (mask);				\
++		RREG_MISC(misc_);				\
++		misc_ &= ~mask_;				\
++		misc_ |= ((v) & mask_);				\
++		WREG_MISC(misc_);				\
++	} while (0)
++
+ #define WREG_ATTR(reg, v)					\
+ 	do {							\
+ 		RREG8(0x1fda);					\
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index 9d576240faedd..555e3181e52b0 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -174,6 +174,8 @@ static int mgag200_g200_set_plls(struct mga_device *mdev, long clock)
+ 	drm_dbg_kms(dev, "clock: %ld vco: %ld m: %d n: %d p: %d s: %d\n",
+ 		    clock, f_vco, m, n, p, s);
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, (p | (s << 3)));
+@@ -289,6 +291,8 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
+ 		return 1;
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, p);
+@@ -385,6 +389,8 @@ static int mga_g200wb_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		if (i > 0) {
+ 			WREG8(MGAREG_CRTC_INDEX, 0x1e);
+@@ -522,6 +528,8 @@ static int mga_g200ev_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -654,6 +662,9 @@ static int mga_g200eh_set_plls(struct mga_device *mdev, long clock)
+ 			}
+ 		}
+ 	}
++
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 		tmp = RREG8(DAC_DATA);
+@@ -754,6 +765,8 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -787,8 +800,6 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 
+ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ {
+-	u8 misc;
+-
+ 	switch(mdev->type) {
+ 	case G200_PCI:
+ 	case G200_AGP:
+@@ -808,11 +819,6 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ 		return mga_g200er_set_plls(mdev, clock);
+ 	}
+ 
+-	misc = RREG8(MGA_MISC_IN);
+-	misc &= ~MGAREG_MISC_CLK_SEL_MASK;
+-	misc |= MGAREG_MISC_CLK_SEL_MGA_MSK;
+-	WREG8(MGA_MISC_OUT, misc);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mgag200/mgag200_reg.h b/drivers/gpu/drm/mgag200/mgag200_reg.h
+index 977be0565c061..60e705283fe84 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_reg.h
++++ b/drivers/gpu/drm/mgag200/mgag200_reg.h
+@@ -222,11 +222,10 @@
+ 
+ #define MGAREG_MISC_IOADSEL	(0x1 << 0)
+ #define MGAREG_MISC_RAMMAPEN	(0x1 << 1)
+-#define MGAREG_MISC_CLK_SEL_MASK	GENMASK(3, 2)
+-#define MGAREG_MISC_CLK_SEL_VGA25	(0x0 << 2)
+-#define MGAREG_MISC_CLK_SEL_VGA28	(0x1 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_PIX	(0x2 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_MSK	(0x3 << 2)
++#define MGAREG_MISC_CLKSEL_MASK		GENMASK(3, 2)
++#define MGAREG_MISC_CLKSEL_VGA25	(0x0 << 2)
++#define MGAREG_MISC_CLKSEL_VGA28	(0x1 << 2)
++#define MGAREG_MISC_CLKSEL_MGA		(0x3 << 2)
+ #define MGAREG_MISC_VIDEO_DIS	(0x1 << 4)
+ #define MGAREG_MISC_HIGH_PG_SEL	(0x1 << 5)
+ #define MGAREG_MISC_HSYNCPOL		BIT(6)
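
Aside: WREG_MISC_MASKED introduced above is a plain read-modify-write, and
with the register rename its callers update only the CLKSEL field (bits 3:2)
of MISC. The same logic as a standalone function, with hypothetical
reg_read()/reg_write() accessors standing in for RREG8/WREG8:

    #include <stdint.h>

    static uint8_t misc_shadow;                 /* stands in for the register */
    static uint8_t reg_read(void)       { return misc_shadow; }
    static void    reg_write(uint8_t v) { misc_shadow = v; }

    /* update only the bits selected by mask, as WREG_MISC_MASKED does */
    static void write_masked(uint8_t v, uint8_t mask)
    {
        uint8_t cur = reg_read();   /* read current MISC value */
        cur &= ~mask;               /* clear the field */
        cur |= v & mask;            /* merge the new field value */
        reg_write(cur);             /* write the result back */
    }

    /* e.g. write_masked(0x3 << 2, 0x3 << 2) selects the MGA pixel PLL
     * while leaving sync polarity and the other MISC bits untouched */
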
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 9c5e4618aa0ae..183b9f9c1b315 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1383,13 +1383,13 @@ static void a6xx_llc_activate(struct a6xx_gpu *a6xx_gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ 	struct msm_gpu *gpu = &adreno_gpu->base;
+-	u32 cntl1_regval = 0;
++	u32 gpu_scid, cntl1_regval = 0;
+ 
+ 	if (IS_ERR(a6xx_gpu->llc_mmio))
+ 		return;
+ 
+ 	if (!llcc_slice_activate(a6xx_gpu->llc_slice)) {
+-		u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice);
++		gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice);
+ 
+ 		gpu_scid &= 0x1f;
+ 		cntl1_regval = (gpu_scid << 0) | (gpu_scid << 5) | (gpu_scid << 10) |
+@@ -1409,26 +1409,34 @@ static void a6xx_llc_activate(struct a6xx_gpu *a6xx_gpu)
+ 		}
+ 	}
+ 
+-	if (cntl1_regval) {
++	if (!cntl1_regval)
++		return;
++
++	/*
++	 * Program the slice IDs for the various GPU blocks and GPU MMU
++	 * pagetables
++	 */
++	if (!a6xx_gpu->have_mmu500) {
++		a6xx_llc_write(a6xx_gpu,
++			REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_1, cntl1_regval);
++
+ 		/*
+-		 * Program the slice IDs for the various GPU blocks and GPU MMU
+-		 * pagetables
++		 * Program cacheability overrides to not allocate cache
++		 * lines on a write miss
+ 		 */
+-		if (a6xx_gpu->have_mmu500)
+-			gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0),
+-				cntl1_regval);
+-		else {
+-			a6xx_llc_write(a6xx_gpu,
+-				REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_1, cntl1_regval);
+-
+-			/*
+-			 * Program cacheability overrides to not allocate cache
+-			 * lines on a write miss
+-			 */
+-			a6xx_llc_rmw(a6xx_gpu,
+-				REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_0, 0xF, 0x03);
+-		}
++		a6xx_llc_rmw(a6xx_gpu,
++			REG_A6XX_CX_MISC_SYSTEM_CACHE_CNTL_0, 0xF, 0x03);
++		return;
+ 	}
++
++	gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0), cntl1_regval);
++
++	/* On A660, the SCID programming for UCHE traffic is done in
++	 * A6XX_GBIF_SCACHE_CNTL0[14:10]
++	 */
++	if (adreno_is_a660(adreno_gpu))
++		gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, (0x1f << 10) |
++			(1 << 8), (gpu_scid << 10) | (1 << 8));
+ }
+ 
+ static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu)
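
Aside: the a6xx hunk hoists gpu_scid out of the inner block because the new
A660 branch reuses it after the early-return restructuring. The packing it
performs -- one 5-bit LLC slice ID replicated into five adjacent fields of
CNTL1 -- is easy to model in isolation:

    #include <stdint.h>

    /* replicate a 5-bit LLC slice ID into five consecutive 5-bit
     * fields, as a6xx_llc_activate() builds cntl1_regval */
    static uint32_t pack_scid(uint32_t scid)
    {
        scid &= 0x1f;                          /* slice IDs are 5 bits */
        return (scid << 0)  | (scid << 5)  | (scid << 10) |
               (scid << 15) | (scid << 20);
    }

    /* e.g. pack_scid(3) == 0x318c63, i.e. 00011 repeated five times */
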
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 704dace895cbe..b131fd376192b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -974,6 +974,7 @@ static const struct dpu_perf_cfg sdm845_perf_data = {
+ 	.amortizable_threshold = 25,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sdm845_qos_linear),
+ 		.entries = sdm845_qos_linear
+@@ -1001,6 +1002,7 @@ static const struct dpu_perf_cfg sc7180_perf_data = {
+ 	.min_dram_ib = 1600000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xff, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+ 		.entries = sc7180_qos_linear
+@@ -1028,6 +1030,7 @@ static const struct dpu_perf_cfg sm8150_perf_data = {
+ 	.min_dram_ib = 800000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff8, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sm8150_qos_linear),
+ 		.entries = sm8150_qos_linear
+@@ -1056,6 +1059,7 @@ static const struct dpu_perf_cfg sm8250_perf_data = {
+ 	.min_dram_ib = 800000,
+ 	.min_prefill_lines = 35,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+ 		.entries = sc7180_qos_linear
+@@ -1084,6 +1088,7 @@ static const struct dpu_perf_cfg sc7280_perf_data = {
+ 	.min_dram_ib = 1600000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xffff, 0xffff, 0x0},
++	.safe_lut_tbl = {0xff00, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_macrotile),
+ 		.entries = sc7180_qos_macrotile
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 0712752742f4f..cdcaf470f1480 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -89,13 +89,6 @@ static void mdp4_disable_commit(struct msm_kms *kms)
+ 
+ static void mdp4_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state)
+ {
+-	int i;
+-	struct drm_crtc *crtc;
+-	struct drm_crtc_state *crtc_state;
+-
+-	/* see 119ecb7fd */
+-	for_each_new_crtc_in_state(state, crtc, crtc_state, i)
+-		drm_crtc_vblank_get(crtc);
+ }
+ 
+ static void mdp4_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
+@@ -114,12 +107,6 @@ static void mdp4_wait_flush(struct msm_kms *kms, unsigned crtc_mask)
+ 
+ static void mdp4_complete_commit(struct msm_kms *kms, unsigned crtc_mask)
+ {
+-	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+-	struct drm_crtc *crtc;
+-
+-	/* see 119ecb7fd */
+-	for_each_crtc_mask(mdp4_kms->dev, crtc, crtc_mask)
+-		drm_crtc_vblank_put(crtc);
+ }
+ 
+ static long mdp4_round_pixclk(struct msm_kms *kms, unsigned long rate,
+@@ -412,6 +399,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	struct mdp4_platform_config *config = mdp4_get_config(pdev);
++	struct msm_drm_private *priv = dev->dev_private;
+ 	struct mdp4_kms *mdp4_kms;
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+@@ -431,7 +419,8 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 		goto fail;
+ 	}
+ 
+-	kms = &mdp4_kms->base.base;
++	priv->kms = &mdp4_kms->base.base;
++	kms = priv->kms;
+ 
+ 	mdp4_kms->dev = dev;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
+index 4a3293b590b05..eb40d8413bca9 100644
+--- a/drivers/gpu/drm/msm/dp/dp_aux.c
++++ b/drivers/gpu/drm/msm/dp/dp_aux.c
+@@ -353,6 +353,9 @@ static ssize_t dp_aux_transfer(struct drm_dp_aux *dp_aux,
+ 			if (!(aux->retry_cnt % MAX_AUX_RETRIES))
+ 				dp_catalog_aux_update_cfg(aux->catalog);
+ 		}
++		/* reset aux if link is in connected state */
++		if (dp_catalog_link_is_connected(aux->catalog))
++			dp_catalog_aux_reset(aux->catalog);
+ 	} else {
+ 		aux->retry_cnt = 0;
+ 		switch (aux->aux_error_num) {
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index eaddfd7398850..6f5e45d54b268 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -81,13 +81,6 @@ struct dp_ctrl_private {
+ 	struct completion video_comp;
+ };
+ 
+-struct dp_cr_status {
+-	u8 lane_0_1;
+-	u8 lane_2_3;
+-};
+-
+-#define DP_LANE0_1_CR_DONE	0x11
+-
+ static int dp_aux_link_configure(struct drm_dp_aux *aux,
+ 					struct dp_link_info *link)
+ {
+@@ -1078,7 +1071,7 @@ static int dp_ctrl_read_link_status(struct dp_ctrl_private *ctrl,
+ }
+ 
+ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int tries, old_v_level, ret = 0;
+ 	u8 link_status[DP_LINK_STATUS_SIZE];
+@@ -1107,9 +1100,6 @@ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl,
+ 		if (ret)
+ 			return ret;
+ 
+-		cr->lane_0_1 = link_status[0];
+-		cr->lane_2_3 = link_status[1];
+-
+ 		if (drm_dp_clock_recovery_ok(link_status,
+ 			ctrl->link->link_params.num_lanes)) {
+ 			return 0;
+@@ -1186,7 +1176,7 @@ static void dp_ctrl_clear_training_pattern(struct dp_ctrl_private *ctrl)
+ }
+ 
+ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int tries = 0, ret = 0;
+ 	char pattern;
+@@ -1202,10 +1192,6 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ 	else
+ 		pattern = DP_TRAINING_PATTERN_2;
+ 
+-	ret = dp_ctrl_update_vx_px(ctrl);
+-	if (ret)
+-		return ret;
+-
+ 	ret = dp_catalog_ctrl_set_pattern(ctrl->catalog, pattern);
+ 	if (ret)
+ 		return ret;
+@@ -1218,8 +1204,6 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ 		ret = dp_ctrl_read_link_status(ctrl, link_status);
+ 		if (ret)
+ 			return ret;
+-		cr->lane_0_1 = link_status[0];
+-		cr->lane_2_3 = link_status[1];
+ 
+ 		if (drm_dp_channel_eq_ok(link_status,
+ 			ctrl->link->link_params.num_lanes)) {
+@@ -1239,7 +1223,7 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ static int dp_ctrl_reinitialize_mainlink(struct dp_ctrl_private *ctrl);
+ 
+ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int ret = 0;
+ 	u8 encoding = DP_SET_ANSI_8B10B;
+@@ -1255,7 +1239,7 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
+ 				&encoding, 1);
+ 
+-	ret = dp_ctrl_link_train_1(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train_1(ctrl, training_step);
+ 	if (ret) {
+ 		DRM_ERROR("link training #1 failed. ret=%d\n", ret);
+ 		goto end;
+@@ -1264,7 +1248,7 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	/* print success info as this is a result of user initiated action */
+ 	DRM_DEBUG_DP("link training #1 successful\n");
+ 
+-	ret = dp_ctrl_link_train_2(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train_2(ctrl, training_step);
+ 	if (ret) {
+ 		DRM_ERROR("link training #2 failed. ret=%d\n", ret);
+ 		goto end;
+@@ -1280,7 +1264,7 @@ end:
+ }
+ 
+ static int dp_ctrl_setup_main_link(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int ret = 0;
+ 
+@@ -1295,7 +1279,7 @@ static int dp_ctrl_setup_main_link(struct dp_ctrl_private *ctrl,
+ 	 * a link training pattern, we have to first do soft reset.
+ 	 */
+ 
+-	ret = dp_ctrl_link_train(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train(ctrl, training_step);
+ 
+ 	return ret;
+ }
+@@ -1492,14 +1476,16 @@ static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
+ static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
+ {
+ 	int ret = 0;
+-	struct dp_cr_status cr;
+ 	int training_step = DP_TRAINING_NONE;
+ 
+ 	dp_ctrl_push_idle(&ctrl->dp_ctrl);
+ 
++	ctrl->link->phy_params.p_level = 0;
++	ctrl->link->phy_params.v_level = 0;
++
+ 	ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
+ 
+-	ret = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
++	ret = dp_ctrl_setup_main_link(ctrl, &training_step);
+ 	if (ret)
+ 		goto end;
+ 
+@@ -1630,6 +1616,35 @@ void dp_ctrl_handle_sink_request(struct dp_ctrl *dp_ctrl)
+ 	}
+ }
+ 
++static bool dp_ctrl_clock_recovery_any_ok(
++			const u8 link_status[DP_LINK_STATUS_SIZE],
++			int lane_count)
++{
++	int reduced_cnt;
++
++	if (lane_count <= 1)
++		return false;
++
++	/*
++	 * only interested in the lane number after reduced
++	 * lane_count = 4, then only interested in 2 lanes
++	 * lane_count = 2, then only interested in 1 lane
++	 */
++	reduced_cnt = lane_count >> 1;
++
++	return drm_dp_clock_recovery_ok(link_status, reduced_cnt);
++}
++
++static bool dp_ctrl_channel_eq_ok(struct dp_ctrl_private *ctrl)
++{
++	u8 link_status[DP_LINK_STATUS_SIZE];
++	int num_lanes = ctrl->link->link_params.num_lanes;
++
++	dp_ctrl_read_link_status(ctrl, link_status);
++
++	return drm_dp_channel_eq_ok(link_status, num_lanes);
++}
++
+ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ {
+ 	int rc = 0;
+@@ -1637,7 +1652,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	u32 rate = 0;
+ 	int link_train_max_retries = 5;
+ 	u32 const phy_cts_pixel_clk_khz = 148500;
+-	struct dp_cr_status cr;
++	u8 link_status[DP_LINK_STATUS_SIZE];
+ 	unsigned int training_step;
+ 
+ 	if (!dp_ctrl)
+@@ -1664,6 +1679,9 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		ctrl->link->link_params.rate,
+ 		ctrl->link->link_params.num_lanes, ctrl->dp_ctrl.pixel_rate);
+ 
++	ctrl->link->phy_params.p_level = 0;
++	ctrl->link->phy_params.v_level = 0;
++
+ 	rc = dp_ctrl_enable_mainlink_clocks(ctrl);
+ 	if (rc)
+ 		return rc;
+@@ -1677,19 +1695,21 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		}
+ 
+ 		training_step = DP_TRAINING_NONE;
+-		rc = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
++		rc = dp_ctrl_setup_main_link(ctrl, &training_step);
+ 		if (rc == 0) {
+ 			/* training completed successfully */
+ 			break;
+ 		} else if (training_step == DP_TRAINING_1) {
+ 			/* link train_1 failed */
+-			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++			if (!dp_catalog_link_is_connected(ctrl->catalog))
+ 				break;
+-			}
++
++			dp_ctrl_read_link_status(ctrl, link_status);
+ 
+ 			rc = dp_ctrl_link_rate_down_shift(ctrl);
+ 			if (rc < 0) { /* already in RBR = 1.6G */
+-				if (cr.lane_0_1 & DP_LANE0_1_CR_DONE) {
++				if (dp_ctrl_clock_recovery_any_ok(link_status,
++					ctrl->link->link_params.num_lanes)) {
+ 					/*
+ 					 * some lanes are ready,
+ 					 * reduce lane number
+@@ -1705,12 +1725,18 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 				}
+ 			}
+ 		} else if (training_step == DP_TRAINING_2) {
+-			/* link train_2 failed, lower lane rate */
+-			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++			/* link train_2 failed */
++			if (!dp_catalog_link_is_connected(ctrl->catalog))
+ 				break;
+-			}
+ 
+-			rc = dp_ctrl_link_lane_down_shift(ctrl);
++			dp_ctrl_read_link_status(ctrl, link_status);
++
++			if (!drm_dp_clock_recovery_ok(link_status,
++					ctrl->link->link_params.num_lanes))
++				rc = dp_ctrl_link_rate_down_shift(ctrl);
++			else
++				rc = dp_ctrl_link_lane_down_shift(ctrl);
++
+ 			if (rc < 0) {
+ 				/* end with failure */
+ 				break; /* lane == 1 already */
+@@ -1721,17 +1747,19 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN)
+ 		return rc;
+ 
+-	/* stop txing train pattern */
+-	dp_ctrl_clear_training_pattern(ctrl);
++	if (rc == 0) {  /* link train successfully */
++		/*
++		 * do not stop train pattern here
++		 * stop link training at on_stream
++		 * to pass compliance test
++		 */
++	} else  {
++		/*
++		 * link training failed
++		 * end txing train pattern here
++		 */
++		dp_ctrl_clear_training_pattern(ctrl);
+ 
+-	/*
+-	 * keep transmitting idle pattern until video ready
+-	 * to avoid main link from loss of sync
+-	 */
+-	if (rc == 0)  /* link train successfully */
+-		dp_ctrl_push_idle(dp_ctrl);
+-	else  {
+-		/* link training failed */
+ 		dp_ctrl_deinitialize_mainlink(ctrl);
+ 		rc = -ECONNRESET;
+ 	}
+@@ -1739,9 +1767,15 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	return rc;
+ }
+ 
++static int dp_ctrl_link_retrain(struct dp_ctrl_private *ctrl)
++{
++	int training_step = DP_TRAINING_NONE;
++
++	return dp_ctrl_setup_main_link(ctrl, &training_step);
++}
++
+ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ {
+-	u32 rate = 0;
+ 	int ret = 0;
+ 	bool mainlink_ready = false;
+ 	struct dp_ctrl_private *ctrl;
+@@ -1751,10 +1785,6 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 
+ 	ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+ 
+-	rate = ctrl->panel->link_info.rate;
+-
+-	ctrl->link->link_params.rate = rate;
+-	ctrl->link->link_params.num_lanes = ctrl->panel->link_info.num_lanes;
+ 	ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
+ 
+ 	DRM_DEBUG_DP("rate=%d, num_lanes=%d, pixel_rate=%d\n",
+@@ -1769,6 +1799,12 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 		}
+ 	}
+ 
++	if (!dp_ctrl_channel_eq_ok(ctrl))
++		dp_ctrl_link_retrain(ctrl);
++
++	/* stop txing train pattern to end link training */
++	dp_ctrl_clear_training_pattern(ctrl);
++
+ 	ret = dp_ctrl_enable_stream_clocks(ctrl);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
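
Aside: dp_ctrl_clock_recovery_any_ok() above encodes the fallback rule "if
full-width training failed at the lowest rate, keep going only when the lanes
that would survive a halving have clock recovery". The lane arithmetic,
modeled with a hypothetical per-lane flag array instead of raw DPCD link
status:

    #include <stdbool.h>

    /* true when the first 'lanes' lanes all completed clock recovery;
     * the kernel derives this from DPCD via drm_dp_clock_recovery_ok() */
    static bool cr_ok(const bool cr_done[4], int lanes)
    {
        for (int i = 0; i < lanes; i++)
            if (!cr_done[i])
                return false;
        return true;
    }

    static bool cr_any_ok(const bool cr_done[4], int lane_count)
    {
        if (lane_count <= 1)
            return false;          /* no narrower config to fall back to */
        /* 4 lanes -> check lanes 0-1; 2 lanes -> check lane 0 */
        return cr_ok(cr_done, lane_count >> 1);
    }
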
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 440b327534302..2181b60e1d1d8 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -271,7 +271,7 @@ static u8 dp_panel_get_edid_checksum(struct edid *edid)
+ {
+ 	struct edid *last_block;
+ 	u8 *raw_edid;
+-	bool is_edid_corrupt;
++	bool is_edid_corrupt = false;
+ 
+ 	if (!edid) {
+ 		DRM_ERROR("invalid edid input\n");
+@@ -303,7 +303,12 @@ void dp_panel_handle_sink_request(struct dp_panel *dp_panel)
+ 	panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+ 
+ 	if (panel->link->sink_request & DP_TEST_LINK_EDID_READ) {
+-		u8 checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		u8 checksum;
++
++		if (dp_panel->edid)
++			checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		else
++			checksum = dp_panel->connector->real_edid_checksum;
+ 
+ 		dp_link_send_edid_checksum(panel->link, checksum);
+ 		dp_link_send_test_response(panel->link);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index f3f1c03c7db95..763f127e46213 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -154,7 +154,6 @@ static const struct msm_dsi_config sdm660_dsi_cfg = {
+ 	.reg_cfg = {
+ 		.num = 2,
+ 		.regs = {
+-			{"vdd", 73400, 32 },	/* 0.9 V */
+ 			{"vdda", 12560, 4 },	/* 1.2 V */
+ 		},
+ 	},
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index a34cf151c5170..bb31230721bdd 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -1050,7 +1050,7 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
+ 	.reg_cfg = {
+ 		.num = 1,
+ 		.regs = {
+-			{"vcca", 17000, 32},
++			{"vcca", 73400, 32},
+ 		},
+ 	},
+ 	.ops = {
+diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c
+index 801da917507d5..512af976b7e90 100644
+--- a/drivers/gpu/drm/omapdrm/omap_plane.c
++++ b/drivers/gpu/drm/omapdrm/omap_plane.c
+@@ -6,6 +6,7 @@
+ 
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
++#include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_plane_helper.h>
+ 
+ #include "omap_dmm_tiler.h"
+@@ -29,6 +30,8 @@ static int omap_plane_prepare_fb(struct drm_plane *plane,
+ 	if (!new_state->fb)
+ 		return 0;
+ 
++	drm_gem_plane_helper_prepare_fb(plane, new_state);
++
+ 	return omap_framebuffer_pin(new_state->fb);
+ }
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index f614e98771e49..8b2cdb8c701d8 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -121,8 +121,12 @@ struct panfrost_device {
+ };
+ 
+ struct panfrost_mmu {
++	struct panfrost_device *pfdev;
++	struct kref refcount;
+ 	struct io_pgtable_cfg pgtbl_cfg;
+ 	struct io_pgtable_ops *pgtbl_ops;
++	struct drm_mm mm;
++	spinlock_t mm_lock;
+ 	int as;
+ 	atomic_t as_count;
+ 	struct list_head list;
+@@ -133,9 +137,7 @@ struct panfrost_file_priv {
+ 
+ 	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
+ 
+-	struct panfrost_mmu mmu;
+-	struct drm_mm mm;
+-	spinlock_t mm_lock;
++	struct panfrost_mmu *mmu;
+ };
+ 
+ static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 075ec0ef746cf..945133db1857f 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -417,7 +417,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
+ 		 * anyway, so let's not bother.
+ 		 */
+ 		if (!list_is_singular(&bo->mappings.list) ||
+-		    WARN_ON_ONCE(first->mmu != &priv->mmu)) {
++		    WARN_ON_ONCE(first->mmu != priv->mmu)) {
+ 			ret = -EINVAL;
+ 			goto out_unlock_mappings;
+ 		}
+@@ -449,32 +449,6 @@ int panfrost_unstable_ioctl_check(void)
+ 	return 0;
+ }
+ 
+-#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
+-#define PFN_4G_MASK	(PFN_4G - 1)
+-#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
+-
+-static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
+-					 unsigned long color,
+-					 u64 *start, u64 *end)
+-{
+-	/* Executable buffers can't start or end on a 4GB boundary */
+-	if (!(color & PANFROST_BO_NOEXEC)) {
+-		u64 next_seg;
+-
+-		if ((*start & PFN_4G_MASK) == 0)
+-			(*start)++;
+-
+-		if ((*end & PFN_4G_MASK) == 0)
+-			(*end)--;
+-
+-		next_seg = ALIGN(*start, PFN_4G);
+-		if (next_seg - *start <= PFN_16M)
+-			*start = next_seg + 1;
+-
+-		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
+-	}
+-}
+-
+ static int
+ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ {
+@@ -489,15 +463,11 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_priv->pfdev = pfdev;
+ 	file->driver_priv = panfrost_priv;
+ 
+-	spin_lock_init(&panfrost_priv->mm_lock);
+-
+-	/* 4G enough for now. can be 48-bit */
+-	drm_mm_init(&panfrost_priv->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
+-	panfrost_priv->mm.color_adjust = panfrost_drm_mm_color_adjust;
+-
+-	ret = panfrost_mmu_pgtable_alloc(panfrost_priv);
+-	if (ret)
+-		goto err_pgtable;
++	panfrost_priv->mmu = panfrost_mmu_ctx_create(pfdev);
++	if (IS_ERR(panfrost_priv->mmu)) {
++		ret = PTR_ERR(panfrost_priv->mmu);
++		goto err_free;
++	}
+ 
+ 	ret = panfrost_job_open(panfrost_priv);
+ 	if (ret)
+@@ -506,9 +476,8 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	return 0;
+ 
+ err_job:
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-err_pgtable:
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
++err_free:
+ 	kfree(panfrost_priv);
+ 	return ret;
+ }
+@@ -521,8 +490,7 @@ panfrost_postclose(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_perfcnt_close(file);
+ 	panfrost_job_close(panfrost_priv);
+ 
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
+ 	kfree(panfrost_priv);
+ }
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 3e0723bc36bda..23377481f4e31 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -60,7 +60,7 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			kref_get(&iter->refcount);
+ 			mapping = iter;
+ 			break;
+@@ -74,16 +74,13 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ static void
+ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
+ {
+-	struct panfrost_file_priv *priv;
+-
+ 	if (mapping->active)
+ 		panfrost_mmu_unmap(mapping);
+ 
+-	priv = container_of(mapping->mmu, struct panfrost_file_priv, mmu);
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mapping->mmu->mm_lock);
+ 	if (drm_mm_node_allocated(&mapping->mmnode))
+ 		drm_mm_remove_node(&mapping->mmnode);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ }
+ 
+ static void panfrost_gem_mapping_release(struct kref *kref)
+@@ -94,6 +91,7 @@ static void panfrost_gem_mapping_release(struct kref *kref)
+ 
+ 	panfrost_gem_teardown_mapping(mapping);
+ 	drm_gem_object_put(&mapping->obj->base.base);
++	panfrost_mmu_ctx_put(mapping->mmu);
+ 	kfree(mapping);
+ }
+ 
+@@ -143,11 +141,11 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 	else
+ 		align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+ 
+-	mapping->mmu = &priv->mmu;
+-	spin_lock(&priv->mm_lock);
+-	ret = drm_mm_insert_node_generic(&priv->mm, &mapping->mmnode,
++	mapping->mmu = panfrost_mmu_ctx_get(priv->mmu);
++	spin_lock(&mapping->mmu->mm_lock);
++	ret = drm_mm_insert_node_generic(&mapping->mmu->mm, &mapping->mmnode,
+ 					 size >> PAGE_SHIFT, align, color, 0);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ 	if (ret)
+ 		goto err;
+ 
+@@ -176,7 +174,7 @@ void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			mapping = iter;
+ 			list_del(&iter->node);
+ 			break;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 2df3e999a38d0..3757c6eb30238 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -165,7 +165,7 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ 		return;
+ 	}
+ 
+-	cfg = panfrost_mmu_as_get(pfdev, &job->file_priv->mmu);
++	cfg = panfrost_mmu_as_get(pfdev, job->file_priv->mmu);
+ 
+ 	job_write(pfdev, JS_HEAD_NEXT_LO(js), jc_head & 0xFFFFFFFF);
+ 	job_write(pfdev, JS_HEAD_NEXT_HI(js), jc_head >> 32);
+@@ -527,7 +527,7 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 			if (job) {
+ 				pfdev->jobs[j] = NULL;
+ 
+-				panfrost_mmu_as_put(pfdev, &job->file_priv->mmu);
++				panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
+ 				panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+ 
+ 				dma_fence_signal_locked(job->done_fence);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 0581186ebfb3a..eea6ade902cb4 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -1,5 +1,8 @@
+ // SPDX-License-Identifier:	GPL-2.0
+ /* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
++
++#include <drm/panfrost_drm.h>
++
+ #include <linux/atomic.h>
+ #include <linux/bitfield.h>
+ #include <linux/delay.h>
+@@ -52,25 +55,16 @@ static int write_cmd(struct panfrost_device *pfdev, u32 as_nr, u32 cmd)
+ }
+ 
+ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+-			u64 iova, size_t size)
++			u64 iova, u64 size)
+ {
+ 	u8 region_width;
+ 	u64 region = iova & PAGE_MASK;
+-	/*
+-	 * fls returns:
+-	 * 1 .. 32
+-	 *
+-	 * 10 + fls(num_pages)
+-	 * results in the range (11 .. 42)
+-	 */
+-
+-	size = round_up(size, PAGE_SIZE);
+ 
+-	region_width = 10 + fls(size >> PAGE_SHIFT);
+-	if ((size >> PAGE_SHIFT) != (1ul << (region_width - 11))) {
+-		/* not pow2, so must go up to the next pow2 */
+-		region_width += 1;
+-	}
++	/* The size is encoded as ceil(log2) minus(1), which may be calculated
++	 * with fls. The size must be clamped to hardware bounds.
++	 */
++	size = max_t(u64, size, AS_LOCK_REGION_MIN_SIZE);
++	region_width = fls64(size - 1) - 1;
+ 	region |= region_width;
+ 
+ 	/* Lock the region that needs to be updated */
+@@ -81,7 +75,7 @@ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+ 
+ 
+ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+-				      u64 iova, size_t size, u32 op)
++				      u64 iova, u64 size, u32 op)
+ {
+ 	if (as_nr < 0)
+ 		return 0;
+@@ -98,7 +92,7 @@ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+ 
+ static int mmu_hw_do_operation(struct panfrost_device *pfdev,
+ 			       struct panfrost_mmu *mmu,
+-			       u64 iova, size_t size, u32 op)
++			       u64 iova, u64 size, u32 op)
+ {
+ 	int ret;
+ 
+@@ -115,7 +109,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 	u64 transtab = cfg->arm_mali_lpae_cfg.transtab;
+ 	u64 memattr = cfg->arm_mali_lpae_cfg.memattr;
+ 
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32);
+@@ -131,7 +125,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 
+ static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
+ {
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);
+@@ -231,7 +225,7 @@ static size_t get_pgsize(u64 addr, size_t size)
+ 
+ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
+ 				     struct panfrost_mmu *mmu,
+-				     u64 iova, size_t size)
++				     u64 iova, u64 size)
+ {
+ 	if (mmu->as < 0)
+ 		return;
+@@ -337,7 +331,7 @@ static void mmu_tlb_inv_context_s1(void *cookie)
+ 
+ static void mmu_tlb_sync_context(void *cookie)
+ {
+-	//struct panfrost_device *pfdev = cookie;
++	//struct panfrost_mmu *mmu = cookie;
+ 	// TODO: Wait 1000 GPU cycles for HW_ISSUE_6367/T60X
+ }
+ 
+@@ -352,57 +346,10 @@ static const struct iommu_flush_ops mmu_tlb_ops = {
+ 	.tlb_flush_walk = mmu_tlb_flush_walk,
+ };
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-	struct panfrost_device *pfdev = priv->pfdev;
+-
+-	INIT_LIST_HEAD(&mmu->list);
+-	mmu->as = -1;
+-
+-	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
+-		.pgsize_bitmap	= SZ_4K | SZ_2M,
+-		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
+-		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
+-		.coherent_walk	= pfdev->coherent,
+-		.tlb		= &mmu_tlb_ops,
+-		.iommu_dev	= pfdev->dev,
+-	};
+-
+-	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
+-					      priv);
+-	if (!mmu->pgtbl_ops)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_device *pfdev = priv->pfdev;
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-
+-	spin_lock(&pfdev->as_lock);
+-	if (mmu->as >= 0) {
+-		pm_runtime_get_noresume(pfdev->dev);
+-		if (pm_runtime_active(pfdev->dev))
+-			panfrost_mmu_disable(pfdev, mmu->as);
+-		pm_runtime_put_autosuspend(pfdev->dev);
+-
+-		clear_bit(mmu->as, &pfdev->as_alloc_mask);
+-		clear_bit(mmu->as, &pfdev->as_in_use_mask);
+-		list_del(&mmu->list);
+-	}
+-	spin_unlock(&pfdev->as_lock);
+-
+-	free_io_pgtable_ops(mmu->pgtbl_ops);
+-}
+-
+ static struct panfrost_gem_mapping *
+ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ {
+ 	struct panfrost_gem_mapping *mapping = NULL;
+-	struct panfrost_file_priv *priv;
+ 	struct drm_mm_node *node;
+ 	u64 offset = addr >> PAGE_SHIFT;
+ 	struct panfrost_mmu *mmu;
+@@ -415,11 +362,10 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ 	goto out;
+ 
+ found_mmu:
+-	priv = container_of(mmu, struct panfrost_file_priv, mmu);
+ 
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mmu->mm_lock);
+ 
+-	drm_mm_for_each_node(node, &priv->mm) {
++	drm_mm_for_each_node(node, &mmu->mm) {
+ 		if (offset >= node->start &&
+ 		    offset < (node->start + node->size)) {
+ 			mapping = drm_mm_node_to_panfrost_mapping(node);
+@@ -429,7 +375,7 @@ found_mmu:
+ 		}
+ 	}
+ 
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mmu->mm_lock);
+ out:
+ 	spin_unlock(&pfdev->as_lock);
+ 	return mapping;
+@@ -542,6 +488,107 @@ err_bo:
+ 	return ret;
+ }
+ 
++static void panfrost_mmu_release_ctx(struct kref *kref)
++{
++	struct panfrost_mmu *mmu = container_of(kref, struct panfrost_mmu,
++						refcount);
++	struct panfrost_device *pfdev = mmu->pfdev;
++
++	spin_lock(&pfdev->as_lock);
++	if (mmu->as >= 0) {
++		pm_runtime_get_noresume(pfdev->dev);
++		if (pm_runtime_active(pfdev->dev))
++			panfrost_mmu_disable(pfdev, mmu->as);
++		pm_runtime_put_autosuspend(pfdev->dev);
++
++		clear_bit(mmu->as, &pfdev->as_alloc_mask);
++		clear_bit(mmu->as, &pfdev->as_in_use_mask);
++		list_del(&mmu->list);
++	}
++	spin_unlock(&pfdev->as_lock);
++
++	free_io_pgtable_ops(mmu->pgtbl_ops);
++	drm_mm_takedown(&mmu->mm);
++	kfree(mmu);
++}
++
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu)
++{
++	kref_put(&mmu->refcount, panfrost_mmu_release_ctx);
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu)
++{
++	kref_get(&mmu->refcount);
++
++	return mmu;
++}
++
++#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
++#define PFN_4G_MASK	(PFN_4G - 1)
++#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
++
++static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
++					 unsigned long color,
++					 u64 *start, u64 *end)
++{
++	/* Executable buffers can't start or end on a 4GB boundary */
++	if (!(color & PANFROST_BO_NOEXEC)) {
++		u64 next_seg;
++
++		if ((*start & PFN_4G_MASK) == 0)
++			(*start)++;
++
++		if ((*end & PFN_4G_MASK) == 0)
++			(*end)--;
++
++		next_seg = ALIGN(*start, PFN_4G);
++		if (next_seg - *start <= PFN_16M)
++			*start = next_seg + 1;
++
++		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
++	}
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
++{
++	struct panfrost_mmu *mmu;
++
++	mmu = kzalloc(sizeof(*mmu), GFP_KERNEL);
++	if (!mmu)
++		return ERR_PTR(-ENOMEM);
++
++	mmu->pfdev = pfdev;
++	spin_lock_init(&mmu->mm_lock);
++
++	/* 4G enough for now. can be 48-bit */
++	drm_mm_init(&mmu->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
++	mmu->mm.color_adjust = panfrost_drm_mm_color_adjust;
++
++	INIT_LIST_HEAD(&mmu->list);
++	mmu->as = -1;
++
++	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
++		.pgsize_bitmap	= SZ_4K | SZ_2M,
++		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
++		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
++		.coherent_walk	= pfdev->coherent,
++		.tlb		= &mmu_tlb_ops,
++		.iommu_dev	= pfdev->dev,
++	};
++
++	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
++					      mmu);
++	if (!mmu->pgtbl_ops) {
++		kfree(mmu);
++		return ERR_PTR(-EINVAL);
++	}
++
++	kref_init(&mmu->refcount);
++
++	return mmu;
++}
++
+ static const char *access_type_name(struct panfrost_device *pfdev,
+ 		u32 fault_status)
+ {
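
Aside: the comment in the reworked lock_region() is terse; the hardware wants
the region size encoded as ceil(log2(size)) - 1, with a floor of
AS_LOCK_REGION_MIN_SIZE (32 KiB). A worked sketch of that encoding (portable
fls64 stand-in, not the kernel helper):

    #include <stdint.h>

    #define LOCK_REGION_MIN_SIZE (1ULL << 15)   /* AS_LOCK_REGION_MIN_SIZE */

    /* 1-based index of the highest set bit, like the kernel's fls64() */
    static int fls64_sketch(uint64_t v)
    {
        int n = 0;
        while (v) { n++; v >>= 1; }
        return n;
    }

    static uint8_t region_width(uint64_t size)
    {
        if (size < LOCK_REGION_MIN_SIZE)
            size = LOCK_REGION_MIN_SIZE;        /* clamp to hw minimum */
        return fls64_sketch(size - 1) - 1;      /* ceil(log2(size)) - 1 */
    }

    /* e.g. region_width(40 << 10) == 15: a 40 KiB range locks a 64 KiB
     * region (2^(15+1)); the removed round-up-to-pow2-of-pages code
     * computed the same width in a more roundabout way */
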
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.h b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+index 44fc2edf63ce6..cc2a0d307febc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.h
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+@@ -18,7 +18,8 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev);
+ u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ void panfrost_mmu_as_put(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv);
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv);
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu);
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu);
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h
+index dc9df5457f1c3..db3d9930b19c1 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_regs.h
++++ b/drivers/gpu/drm/panfrost/panfrost_regs.h
+@@ -319,6 +319,8 @@
+ #define AS_FAULTSTATUS_ACCESS_TYPE_READ		(0x2 << 8)
+ #define AS_FAULTSTATUS_ACCESS_TYPE_WRITE	(0x3 << 8)
+ 
++#define AS_LOCK_REGION_MIN_SIZE                 (1ULL << 15)
++
+ #define gpu_write(dev, reg, data) writel(data, dev->iomem + reg)
+ #define gpu_read(dev, reg) readl(dev->iomem + reg)
+ 
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index c22551c2facb1..2a06ec1cbefb0 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -559,6 +559,13 @@ static int rcar_du_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void rcar_du_shutdown(struct platform_device *pdev)
++{
++	struct rcar_du_device *rcdu = platform_get_drvdata(pdev);
++
++	drm_atomic_helper_shutdown(&rcdu->ddev);
++}
++
+ static int rcar_du_probe(struct platform_device *pdev)
+ {
+ 	struct rcar_du_device *rcdu;
+@@ -615,6 +622,7 @@ error:
+ static struct platform_driver rcar_du_platform_driver = {
+ 	.probe		= rcar_du_probe,
+ 	.remove		= rcar_du_remove,
++	.shutdown	= rcar_du_shutdown,
+ 	.driver		= {
+ 		.name	= "rcar-du",
+ 		.pm	= &rcar_du_pm_ops,
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 8d7fd65ccced3..32202385073a2 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -488,6 +488,31 @@ void ttm_bo_unlock_delayed_workqueue(struct ttm_device *bdev, int resched)
+ }
+ EXPORT_SYMBOL(ttm_bo_unlock_delayed_workqueue);
+ 
++static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo,
++				     struct ttm_resource **mem,
++				     struct ttm_operation_ctx *ctx,
++				     struct ttm_place *hop)
++{
++	struct ttm_placement hop_placement;
++	struct ttm_resource *hop_mem;
++	int ret;
++
++	hop_placement.num_placement = hop_placement.num_busy_placement = 1;
++	hop_placement.placement = hop_placement.busy_placement = hop;
++
++	/* find space in the bounce domain */
++	ret = ttm_bo_mem_space(bo, &hop_placement, &hop_mem, ctx);
++	if (ret)
++		return ret;
++	/* move to the bounce domain */
++	ret = ttm_bo_handle_move_mem(bo, hop_mem, false, ctx, NULL);
++	if (ret) {
++		ttm_resource_free(bo, &hop_mem);
++		return ret;
++	}
++	return 0;
++}
++
+ static int ttm_bo_evict(struct ttm_buffer_object *bo,
+ 			struct ttm_operation_ctx *ctx)
+ {
+@@ -527,12 +552,17 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo,
+ 		goto out;
+ 	}
+ 
++bounce:
+ 	ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
+-	if (unlikely(ret)) {
+-		WARN(ret == -EMULTIHOP, "Unexpected multihop in eviction - likely driver bug\n");
+-		if (ret != -ERESTARTSYS)
++	if (ret == -EMULTIHOP) {
++		ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop);
++		if (ret) {
+ 			pr_err("Buffer eviction failed\n");
+-		ttm_resource_free(bo, &evict_mem);
++			ttm_resource_free(bo, &evict_mem);
++			goto out;
++		}
++		/* try and move to final place now. */
++		goto bounce;
+ 	}
+ out:
+ 	return ret;
+@@ -847,31 +877,6 @@ error:
+ }
+ EXPORT_SYMBOL(ttm_bo_mem_space);
+ 
+-static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo,
+-				     struct ttm_resource **mem,
+-				     struct ttm_operation_ctx *ctx,
+-				     struct ttm_place *hop)
+-{
+-	struct ttm_placement hop_placement;
+-	struct ttm_resource *hop_mem;
+-	int ret;
+-
+-	hop_placement.num_placement = hop_placement.num_busy_placement = 1;
+-	hop_placement.placement = hop_placement.busy_placement = hop;
+-
+-	/* find space in the bounce domain */
+-	ret = ttm_bo_mem_space(bo, &hop_placement, &hop_mem, ctx);
+-	if (ret)
+-		return ret;
+-	/* move to the bounce domain */
+-	ret = ttm_bo_handle_move_mem(bo, hop_mem, false, ctx, NULL);
+-	if (ret) {
+-		ttm_resource_free(bo, &hop_mem);
+-		return ret;
+-	}
+-	return 0;
+-}
+-
+ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
+ 			      struct ttm_placement *placement,
+ 			      struct ttm_operation_ctx *ctx)
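
Aside: moving ttm_bo_bounce_temp_buffer() above ttm_bo_evict() lets eviction
handle -EMULTIHOP the way ttm_bo_move_buffer() already did: hop through a
driver-suggested temporary placement, then retry the real move. The control
flow, reduced to a toy with hypothetical stand-ins for
ttm_bo_handle_move_mem() and ttm_bo_bounce_temp_buffer():

    #include <errno.h>

    extern int try_move(void);     /* may return -EMULTIHOP */
    extern int bounce_hop(void);   /* move via the intermediate domain */

    static int evict_with_bounce(void)
    {
        int ret;

    again:
        ret = try_move();
        if (ret == -EMULTIHOP) {
            ret = bounce_hop();    /* hop to the temporary placement */
            if (ret)
                return ret;        /* bounce failed: give up */
            goto again;            /* now attempt the final placement */
        }
        return ret;
    }
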
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index 763fa6f4e07de..1c5ffe2935af5 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -143,7 +143,6 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
+ 	struct ttm_resource *src_mem = bo->resource;
+ 	struct ttm_resource_manager *src_man =
+ 		ttm_manager_type(bdev, src_mem->mem_type);
+-	struct ttm_resource src_copy = *src_mem;
+ 	union {
+ 		struct ttm_kmap_iter_tt tt;
+ 		struct ttm_kmap_iter_linear_io io;
+@@ -173,11 +172,11 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
+ 	}
+ 
+ 	ttm_move_memcpy(bo, dst_mem->num_pages, dst_iter, src_iter);
+-	src_copy = *src_mem;
+-	ttm_bo_move_sync_cleanup(bo, dst_mem);
+ 
+ 	if (!src_iter->ops->maps_tt)
+-		ttm_kmap_iter_linear_io_fini(&_src_iter.io, bdev, &src_copy);
++		ttm_kmap_iter_linear_io_fini(&_src_iter.io, bdev, src_mem);
++	ttm_bo_move_sync_cleanup(bo, dst_mem);
++
+ out_src_iter:
+ 	if (!dst_iter->ops->maps_tt)
+ 		ttm_kmap_iter_linear_io_fini(&_dst_iter.io, bdev, dst_mem);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index c2876731ee2dc..f91d37beb1133 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -613,12 +613,12 @@ static void vc4_hdmi_encoder_post_crtc_disable(struct drm_encoder *encoder,
+ 
+ 	HDMI_WRITE(HDMI_RAM_PACKET_CONFIG, 0);
+ 
+-	HDMI_WRITE(HDMI_VID_CTL, HDMI_READ(HDMI_VID_CTL) |
+-		   VC4_HD_VID_CTL_CLRRGB | VC4_HD_VID_CTL_CLRSYNC);
++	HDMI_WRITE(HDMI_VID_CTL, HDMI_READ(HDMI_VID_CTL) | VC4_HD_VID_CTL_CLRRGB);
+ 
+-	HDMI_WRITE(HDMI_VID_CTL,
+-		   HDMI_READ(HDMI_VID_CTL) | VC4_HD_VID_CTL_BLANKPIX);
++	mdelay(1);
+ 
++	HDMI_WRITE(HDMI_VID_CTL,
++		   HDMI_READ(HDMI_VID_CTL) & ~VC4_HD_VID_CTL_ENABLE);
+ 	vc4_hdmi_disable_scrambling(encoder);
+ }
+ 
+@@ -628,12 +628,12 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder,
+ 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
+ 	int ret;
+ 
++	HDMI_WRITE(HDMI_VID_CTL,
++		   HDMI_READ(HDMI_VID_CTL) | VC4_HD_VID_CTL_BLANKPIX);
++
+ 	if (vc4_hdmi->variant->phy_disable)
+ 		vc4_hdmi->variant->phy_disable(vc4_hdmi);
+ 
+-	HDMI_WRITE(HDMI_VID_CTL,
+-		   HDMI_READ(HDMI_VID_CTL) & ~VC4_HD_VID_CTL_ENABLE);
+-
+ 	clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
+ 	clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 
+@@ -1015,6 +1015,7 @@ static void vc4_hdmi_encoder_post_crtc_enable(struct drm_encoder *encoder,
+ 
+ 	HDMI_WRITE(HDMI_VID_CTL,
+ 		   VC4_HD_VID_CTL_ENABLE |
++		   VC4_HD_VID_CTL_CLRRGB |
+ 		   VC4_HD_VID_CTL_UNDERFLOW_ENABLE |
+ 		   VC4_HD_VID_CTL_FRAME_COUNTER_RESET |
+ 		   (vsync_pos ? 0 : VC4_HD_VID_CTL_VSYNC_LOW) |
+@@ -1372,7 +1373,9 @@ static int vc4_hdmi_audio_trigger(struct snd_pcm_substream *substream, int cmd,
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+ 			   VC4_SET_FIELD(vc4_hdmi->audio.channels,
+ 					 VC4_HD_MAI_CTL_CHNUM) |
+-			   VC4_HD_MAI_CTL_ENABLE);
++					 VC4_HD_MAI_CTL_WHOLSMP |
++					 VC4_HD_MAI_CTL_CHALIGN |
++					 VC4_HD_MAI_CTL_ENABLE);
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
+index 107521ace597a..092514a2155fe 100644
+--- a/drivers/gpu/drm/vkms/vkms_plane.c
++++ b/drivers/gpu/drm/vkms/vkms_plane.c
+@@ -8,7 +8,6 @@
+ #include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_plane_helper.h>
+-#include <drm/drm_gem_shmem_helper.h>
+ 
+ #include "vkms_drv.h"
+ 
+@@ -150,45 +149,10 @@ static int vkms_plane_atomic_check(struct drm_plane *plane,
+ 	return 0;
+ }
+ 
+-static int vkms_prepare_fb(struct drm_plane *plane,
+-			   struct drm_plane_state *state)
+-{
+-	struct drm_gem_object *gem_obj;
+-	struct dma_buf_map map;
+-	int ret;
+-
+-	if (!state->fb)
+-		return 0;
+-
+-	gem_obj = drm_gem_fb_get_obj(state->fb, 0);
+-	ret = drm_gem_shmem_vmap(gem_obj, &map);
+-	if (ret)
+-		DRM_ERROR("vmap failed: %d\n", ret);
+-
+-	return drm_gem_plane_helper_prepare_fb(plane, state);
+-}
+-
+-static void vkms_cleanup_fb(struct drm_plane *plane,
+-			    struct drm_plane_state *old_state)
+-{
+-	struct drm_gem_object *gem_obj;
+-	struct drm_gem_shmem_object *shmem_obj;
+-	struct dma_buf_map map;
+-
+-	if (!old_state->fb)
+-		return;
+-
+-	gem_obj = drm_gem_fb_get_obj(old_state->fb, 0);
+-	shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0));
+-	dma_buf_map_set_vaddr(&map, shmem_obj->vaddr);
+-	drm_gem_shmem_vunmap(gem_obj, &map);
+-}
+-
+ static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
+ 	.atomic_update		= vkms_plane_atomic_update,
+ 	.atomic_check		= vkms_plane_atomic_check,
+-	.prepare_fb		= vkms_prepare_fb,
+-	.cleanup_fb		= vkms_cleanup_fb,
++	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
+ };
+ 
+ struct vkms_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+diff --git a/drivers/gpu/drm/vmwgfx/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+index aeb0a22a2c347..edd17c30d5a51 100644
+--- a/drivers/gpu/drm/vmwgfx/ttm_memory.c
++++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+@@ -435,8 +435,10 @@ int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev)
+ 
+ 	si_meminfo(&si);
+ 
++	spin_lock(&glob->lock);
+ 	/* set it as 0 by default to keep original behavior of OOM */
+ 	glob->lower_mem_limit = 0;
++	spin_unlock(&glob->lock);
+ 
+ 	ret = ttm_mem_init_kernel_zone(glob, &si);
+ 	if (unlikely(ret != 0))
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+index 05b3248259007..ea6d8c86985f6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+@@ -715,7 +715,7 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biv: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of view id data.
+@@ -725,11 +725,9 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * contains the command data.
+  */
+ static void vmw_collect_view_ids(struct vmw_ctx_binding_state *cbs,
+-				 const struct vmw_ctx_bindinfo *bi,
++				 const struct vmw_ctx_bindinfo_view *biv,
+ 				 u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_view *biv =
+-		container_of(bi, struct vmw_ctx_bindinfo_view, bi);
+ 	unsigned long i;
+ 
+ 	cbs->bind_cmd_count = 0;
+@@ -838,7 +836,7 @@ static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->render_targets[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->render_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetRenderTargets body;
+@@ -874,7 +872,7 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biso: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of SVGA3dSoTarget data.
+@@ -884,11 +882,9 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * contains the command data.
+  */
+ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+-				   const struct vmw_ctx_bindinfo *bi,
++				   const struct vmw_ctx_bindinfo_so_target *biso,
+ 				   u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_so_target *biso =
+-		container_of(bi, struct vmw_ctx_bindinfo_so_target, bi);
+ 	unsigned long i;
+ 	SVGA3dSoTarget *so_buffer = (SVGA3dSoTarget *) cbs->bind_cmd_buffer;
+ 
+@@ -919,7 +915,7 @@ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_so_target(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->so_targets[0].bi;
++	const struct vmw_ctx_bindinfo_so_target *loc = &cbs->so_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetSOTargets body;
+@@ -1066,7 +1062,7 @@ static int vmw_emit_set_vb(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[0].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[0].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetUAViews body;
+@@ -1096,7 +1092,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[1].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[1].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetCSUAViews body;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+index 6bb4961e64a57..9656d4a2abff8 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+@@ -516,7 +516,7 @@ static void vmw_cmdbuf_work_func(struct work_struct *work)
+ 	struct vmw_cmdbuf_man *man =
+ 		container_of(work, struct vmw_cmdbuf_man, work);
+ 	struct vmw_cmdbuf_header *entry, *next;
+-	uint32_t dummy;
++	uint32_t dummy = 0;
+ 	bool send_fence = false;
+ 	struct list_head restart_head[SVGA_CB_CONTEXT_MAX];
+ 	int i;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+index b262d61d839d5..9487faff52293 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+@@ -159,6 +159,7 @@ void vmw_cmdbuf_res_commit(struct list_head *list)
+ void vmw_cmdbuf_res_revert(struct list_head *list)
+ {
+ 	struct vmw_cmdbuf_res *entry, *next;
++	int ret;
+ 
+ 	list_for_each_entry_safe(entry, next, list, head) {
+ 		switch (entry->state) {
+@@ -166,7 +167,8 @@ void vmw_cmdbuf_res_revert(struct list_head *list)
+ 			vmw_cmdbuf_res_free(entry->man, entry);
+ 			break;
+ 		case VMW_CMDBUF_RES_DEL:
+-			drm_ht_insert_item(&entry->man->resources, &entry->hash);
++			ret = drm_ht_insert_item(&entry->man->resources, &entry->hash);
++			BUG_ON(ret);
+ 			list_del(&entry->head);
+ 			list_add_tail(&entry->head, &entry->man->list);
+ 			entry->state = VMW_CMDBUF_RES_COMMITTED;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index a2b8464b3f566..06e8332682c5e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2546,6 +2546,8 @@ static int vmw_cmd_dx_so_define(struct vmw_private *dev_priv,
+ 
+ 	so_type = vmw_so_cmd_to_type(header->id);
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_so_cotables[so_type]);
++	if (IS_ERR(res))
++		return PTR_ERR(res);
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+index f2d6254154585..2d8caf09f1727 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+@@ -506,11 +506,13 @@ static void vmw_mob_pt_setup(struct vmw_mob *mob,
+ {
+ 	unsigned long num_pt_pages = 0;
+ 	struct ttm_buffer_object *bo = mob->pt_bo;
+-	struct vmw_piter save_pt_iter;
++	struct vmw_piter save_pt_iter = {0};
+ 	struct vmw_piter pt_iter;
+ 	const struct vmw_sg_table *vsgt;
+ 	int ret;
+ 
++	BUG_ON(num_data_pages == 0);
++
+ 	ret = ttm_bo_reserve(bo, false, true, NULL);
+ 	BUG_ON(ret != 0);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 3d08f5700bdb4..7e3f99722d026 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -155,6 +155,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 	/* HB port can't access encrypted memory. */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_high;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = (uintptr_t) msg;
+ 		di = channel->cookie_low;
+@@ -162,7 +163,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 		VMW_PORT_HB_OUT(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			msg_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16) |
++			VMWARE_HYPERVISOR_HB | channel_id |
+ 			VMWARE_HYPERVISOR_OUT,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+@@ -210,6 +211,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 	/* HB port can't access encrypted memory */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_low;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = channel->cookie_high;
+ 		di = (uintptr_t) reply;
+@@ -217,7 +219,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 		VMW_PORT_HB_IN(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			reply_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16),
++			VMWARE_HYPERVISOR_HB | channel_id,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 7b45393ad98e9..3b6f6044c3259 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -114,6 +114,7 @@ static void vmw_resource_release(struct kref *kref)
+ 	    container_of(kref, struct vmw_resource, kref);
+ 	struct vmw_private *dev_priv = res->dev_priv;
+ 	int id;
++	int ret;
+ 	struct idr *idr = &dev_priv->res_idr[res->func->res_type];
+ 
+ 	spin_lock(&dev_priv->resource_lock);
+@@ -122,7 +123,8 @@ static void vmw_resource_release(struct kref *kref)
+ 	if (res->backup) {
+ 		struct ttm_buffer_object *bo = &res->backup->base;
+ 
+-		ttm_bo_reserve(bo, false, false, NULL);
++		ret = ttm_bo_reserve(bo, false, false, NULL);
++		BUG_ON(ret);
+ 		if (vmw_resource_mob_attached(res) &&
+ 		    res->func->unbind != NULL) {
+ 			struct ttm_validate_buffer val_buf;
+@@ -1001,7 +1003,9 @@ int vmw_resource_pin(struct vmw_resource *res, bool interruptible)
+ 		if (res->backup) {
+ 			vbo = res->backup;
+ 
+-			ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
++			ret = ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
++			if (ret)
++				goto out_no_validate;
+ 			if (!vbo->base.pin_count) {
+ 				ret = ttm_bo_validate
+ 					(&vbo->base,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+index c3a8d6e8380e4..9efb4463ce997 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+@@ -539,7 +539,8 @@ const SVGACOTableType vmw_so_cotables[] = {
+ 	[vmw_so_ds] = SVGA_COTABLE_DEPTHSTENCIL,
+ 	[vmw_so_rs] = SVGA_COTABLE_RASTERIZERSTATE,
+ 	[vmw_so_ss] = SVGA_COTABLE_SAMPLER,
+-	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT
++	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT,
++	[vmw_so_max] = SVGA_COTABLE_MAX
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 0835468bb2eed..a04ad7812960c 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -865,7 +865,7 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 	user_srf->prime.base.shareable = false;
+ 	user_srf->prime.base.tfile = NULL;
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	/**
+ 	 * From this point, the generic resource management functions
+@@ -1534,7 +1534,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 
+ 	user_srf = container_of(srf, struct vmw_user_surface, srf);
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	res = &user_srf->srf.res;
+ 
+@@ -1872,7 +1872,6 @@ static void vmw_surface_dirty_range_add(struct vmw_resource *res, size_t start,
+ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ {
+ 	struct vmw_private *dev_priv = res->dev_priv;
+-	bool has_dx = 0;
+ 	u32 i, num_dirty;
+ 	struct vmw_surface_dirty *dirty =
+ 		(struct vmw_surface_dirty *) res->dirty;
+@@ -1899,7 +1898,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ 	if (!num_dirty)
+ 		goto out;
+ 
+-	alloc_size = num_dirty * ((has_dx) ? sizeof(*cmd1) : sizeof(*cmd2));
++	alloc_size = num_dirty * ((has_sm4_context(dev_priv)) ? sizeof(*cmd1) : sizeof(*cmd2));
+ 	cmd = VMW_CMD_RESERVE(dev_priv, alloc_size);
+ 	if (!cmd)
+ 		return -ENOMEM;
+@@ -1917,7 +1916,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ 		 * DX_UPDATE_SUBRESOURCE is aware of array surfaces.
+ 		 * UPDATE_GB_IMAGE is not.
+ 		 */
+-		if (has_dx) {
++		if (has_sm4_context(dev_priv)) {
+ 			cmd1->header.id = SVGA_3D_CMD_DX_UPDATE_SUBRESOURCE;
+ 			cmd1->header.size = sizeof(cmd1->body);
+ 			cmd1->body.sid = res->id;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+index 8338b1d20f2a3..b09094b50c5d0 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+@@ -586,13 +586,13 @@ int vmw_validation_bo_validate(struct vmw_validation_context *ctx, bool intr)
+ 			container_of(entry->base.bo, typeof(*vbo), base);
+ 
+ 		if (entry->cpu_blit) {
+-			struct ttm_operation_ctx ctx = {
++			struct ttm_operation_ctx ttm_ctx = {
+ 				.interruptible = intr,
+ 				.no_wait_gpu = false
+ 			};
+ 
+ 			ret = ttm_bo_validate(entry->base.bo,
+-					      &vmw_nonfixed_placement, &ctx);
++					      &vmw_nonfixed_placement, &ttm_ctx);
+ 		} else {
+ 			ret = vmw_validation_bo_validate_single
+ 			(entry->base.bo, intr, entry->as_mob);
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 109d627968ac0..01c6ce7784ddb 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1452,9 +1452,10 @@ zynqmp_disp_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	struct drm_display_mode *adjusted_mode = &crtc->state->adjusted_mode;
+ 	int ret, vrefresh;
+ 
++	pm_runtime_get_sync(disp->dev);
++
+ 	zynqmp_disp_crtc_setup_clock(crtc, adjusted_mode);
+ 
+-	pm_runtime_get_sync(disp->dev);
+ 	ret = clk_prepare_enable(disp->pclk);
+ 	if (ret) {
+ 		dev_err(disp->dev, "failed to enable a pixel clock\n");
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 82430ca9b9133..6f588dc09ba63 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -402,10 +402,6 @@ static int zynqmp_dp_phy_init(struct zynqmp_dp *dp)
+ 		}
+ 	}
+ 
+-	ret = zynqmp_dp_reset(dp, false);
+-	if (ret < 0)
+-		return ret;
+-
+ 	zynqmp_dp_clr(dp, ZYNQMP_DP_PHY_RESET, ZYNQMP_DP_PHY_RESET_ALL_RESET);
+ 
+ 	/*
+@@ -441,8 +437,6 @@ static void zynqmp_dp_phy_exit(struct zynqmp_dp *dp)
+ 				ret);
+ 	}
+ 
+-	zynqmp_dp_reset(dp, true);
+-
+ 	for (i = 0; i < dp->num_lanes; i++) {
+ 		ret = phy_exit(dp->phy[i]);
+ 		if (ret)
+@@ -1683,9 +1677,13 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 		return PTR_ERR(dp->reset);
+ 	}
+ 
++	ret = zynqmp_dp_reset(dp, false);
++	if (ret < 0)
++		return ret;
++
+ 	ret = zynqmp_dp_phy_probe(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	/* Initialize the hardware. */
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TX_PHY_POWER_DOWN,
+@@ -1697,7 +1695,7 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 
+ 	ret = zynqmp_dp_phy_init(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TRANSMITTER_ENABLE, 1);
+ 
+@@ -1709,15 +1707,18 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 					zynqmp_dp_irq_handler, IRQF_ONESHOT,
+ 					dev_name(dp->dev), dp);
+ 	if (ret < 0)
+-		goto error;
++		goto err_phy_exit;
+ 
+ 	dev_dbg(dp->dev, "ZynqMP DisplayPort Tx probed with %u lanes\n",
+ 		dp->num_lanes);
+ 
+ 	return 0;
+ 
+-error:
++err_phy_exit:
+ 	zynqmp_dp_phy_exit(dp);
++err_reset:
++	zynqmp_dp_reset(dp, true);
++
+ 	return ret;
+ }
+ 
+@@ -1735,4 +1736,5 @@ void zynqmp_dp_remove(struct zynqmp_dpsub *dpsub)
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_INT_DS, 0xffffffff);
+ 
+ 	zynqmp_dp_phy_exit(dp);
++	zynqmp_dp_reset(dp, true);
+ }
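
The zynqmp_dp rework above is the standard goto-unwind idiom: resources are
acquired in order, every failure branch jumps to the label that releases
exactly what has been set up so far, and zynqmp_dp_remove() mirrors the
teardown in reverse. A minimal standalone C sketch of that shape (the helper
names are hypothetical stand-ins, not the driver's API):

#include <stdio.h>

/* Hypothetical helpers standing in for zynqmp_dp_reset(),
 * zynqmp_dp_phy_init() and devm_request_threaded_irq(). */
static int do_reset(int assert)   { printf("reset(%d)\n", assert); return 0; }
static int phy_init(void)         { return 0; }
static void phy_exit(void)        { printf("phy_exit\n"); }
static int request_irq_stub(void) { return -1; /* force the error path */ }

static int probe(void)
{
	int ret;

	ret = do_reset(0);		/* deassert reset first... */
	if (ret)
		return ret;

	ret = phy_init();		/* ...then bring up the PHY... */
	if (ret)
		goto err_reset;

	ret = request_irq_stub();	/* ...and request the IRQ last */
	if (ret)
		goto err_phy_exit;

	return 0;

err_phy_exit:				/* unwind in reverse order */
	phy_exit();
err_reset:
	do_reset(1);
	return ret;
}

int main(void)
{
	return probe() ? 1 : 0;
}
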
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index 1ea1a7c0b20fe..e29efcb1c0402 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_HID_STEELSERIES)	+= hid-steelseries.o
+ obj-$(CONFIG_HID_SUNPLUS)	+= hid-sunplus.o
+ obj-$(CONFIG_HID_GREENASIA)	+= hid-gaff.o
+ obj-$(CONFIG_HID_THRUSTMASTER)	+= hid-tmff.o hid-thrustmaster.o
+-obj-$(CONFIG_HID_TMINIT)	+= hid-tminit.o
+ obj-$(CONFIG_HID_TIVO)		+= hid-tivo.o
+ obj-$(CONFIG_HID_TOPSEED)	+= hid-topseed.o
+ obj-$(CONFIG_HID_TWINHAN)	+= hid-twinhan.o
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index efb849411d254..4710b9aa24a57 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -184,7 +184,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ 			rc = -ENOMEM;
+ 			goto cleanup;
+ 		}
+-		info.period = msecs_to_jiffies(AMD_SFH_IDLE_LOOP);
++		info.period = AMD_SFH_IDLE_LOOP;
+ 		info.sensor_idx = cl_idx;
+ 		info.dma_address = cl_data->sensor_dma_addr[i];
+ 
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 4286a51f7f169..4b5ebeacd2836 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -419,8 +419,6 @@ static int hidinput_get_battery_property(struct power_supply *psy,
+ 
+ 		if (dev->battery_status == HID_BATTERY_UNKNOWN)
+ 			val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+-		else if (dev->battery_capacity == 100)
+-			val->intval = POWER_SUPPLY_STATUS_FULL;
+ 		else
+ 			val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ 		break;
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 51b39bda9a9d2..2e104682c22b9 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -662,8 +662,6 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb653) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) },
+-#endif
+-#if IS_ENABLED(CONFIG_HID_TMINIT)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65d) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_TIVO)
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 46474612e73c6..517141138b007 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -171,8 +171,6 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_3118,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
+ 		 I2C_HID_QUIRK_RESET_ON_RESUME },
+ 	{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
+@@ -183,7 +181,8 @@ static const struct i2c_hid_quirks {
+ 	 * Sending the wakeup after reset actually break ELAN touchscreen controller
+ 	 */
+ 	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET },
++		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET |
++		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ 0, 0 }
+ };
+ 
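
The i2c-hid change folds the two ELAN entries into one by OR-ing the quirk
bits together, since the table is scanned for the first vendor/product match
only. A small sketch of that lookup pattern, with made-up IDs and flag values:

#include <stdint.h>
#include <stdio.h>

#define QUIRK_NO_WAKEUP_AFTER_RESET	(1u << 0)	/* hypothetical values */
#define QUIRK_BOGUS_IRQ			(1u << 1)
#define ANY_ID				0xffff

static const struct quirk_entry {
	uint16_t vendor, product;
	uint32_t quirks;
} quirk_table[] = {
	/* one entry per device, flags OR-ed together */
	{ 0x04f3, ANY_ID, QUIRK_NO_WAKEUP_AFTER_RESET | QUIRK_BOGUS_IRQ },
	{ 0, 0, 0 }					/* sentinel */
};

static uint32_t lookup_quirks(uint16_t vendor, uint16_t product)
{
	const struct quirk_entry *q;

	for (q = quirk_table; q->vendor; q++)
		if (q->vendor == vendor &&
		    (q->product == ANY_ID || q->product == product))
			return q->quirks;
	return 0;
}

int main(void)
{
	printf("quirks=%#x\n", lookup_quirks(0x04f3, 0x1234));
	return 0;
}
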
+diff --git a/drivers/hwmon/pmbus/ibm-cffps.c b/drivers/hwmon/pmbus/ibm-cffps.c
+index 5668d8305b78e..df712ce4b164d 100644
+--- a/drivers/hwmon/pmbus/ibm-cffps.c
++++ b/drivers/hwmon/pmbus/ibm-cffps.c
+@@ -50,9 +50,9 @@
+ #define CFFPS_MFR_VAUX_FAULT			BIT(6)
+ #define CFFPS_MFR_CURRENT_SHARE_WARNING		BIT(7)
+ 
+-#define CFFPS_LED_BLINK				BIT(0)
+-#define CFFPS_LED_ON				BIT(1)
+-#define CFFPS_LED_OFF				BIT(2)
++#define CFFPS_LED_BLINK				(BIT(0) | BIT(6))
++#define CFFPS_LED_ON				(BIT(1) | BIT(6))
++#define CFFPS_LED_OFF				(BIT(2) | BIT(6))
+ #define CFFPS_BLINK_RATE_MS			250
+ 
+ enum {
+diff --git a/drivers/iio/dac/ad5624r_spi.c b/drivers/iio/dac/ad5624r_spi.c
+index 9bde869829121..530529feebb51 100644
+--- a/drivers/iio/dac/ad5624r_spi.c
++++ b/drivers/iio/dac/ad5624r_spi.c
+@@ -229,7 +229,7 @@ static int ad5624r_probe(struct spi_device *spi)
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 	st = iio_priv(indio_dev);
+-	st->reg = devm_regulator_get(&spi->dev, "vcc");
++	st->reg = devm_regulator_get_optional(&spi->dev, "vref");
+ 	if (!IS_ERR(st->reg)) {
+ 		ret = regulator_enable(st->reg);
+ 		if (ret)
+@@ -240,6 +240,22 @@ static int ad5624r_probe(struct spi_device *spi)
+ 			goto error_disable_reg;
+ 
+ 		voltage_uv = ret;
++	} else {
++		if (PTR_ERR(st->reg) != -ENODEV)
++			return PTR_ERR(st->reg);
++		/* Backwards compatibility. This naming is not correct */
++		st->reg = devm_regulator_get_optional(&spi->dev, "vcc");
++		if (!IS_ERR(st->reg)) {
++			ret = regulator_enable(st->reg);
++			if (ret)
++				return ret;
++
++			ret = regulator_get_voltage(st->reg);
++			if (ret < 0)
++				goto error_disable_reg;
++
++			voltage_uv = ret;
++		}
+ 	}
+ 
+ 	spi_set_drvdata(spi, indio_dev);
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 3b5ba26d7d867..3b4a0e60e6059 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -89,6 +89,8 @@
+ 
+ #define	LTC2983_STATUS_START_MASK	BIT(7)
+ #define	LTC2983_STATUS_START(x)		FIELD_PREP(LTC2983_STATUS_START_MASK, x)
++#define	LTC2983_STATUS_UP_MASK		GENMASK(7, 6)
++#define	LTC2983_STATUS_UP(reg)		FIELD_GET(LTC2983_STATUS_UP_MASK, reg)
+ 
+ #define	LTC2983_STATUS_CHAN_SEL_MASK	GENMASK(4, 0)
+ #define	LTC2983_STATUS_CHAN_SEL(x) \
+@@ -1362,17 +1364,16 @@ put_child:
+ 
+ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
+ {
+-	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0;
++	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0, status;
+ 	int ret;
+-	unsigned long time;
+-
+-	/* make sure the device is up */
+-	time = wait_for_completion_timeout(&st->completion,
+-					    msecs_to_jiffies(250));
+ 
+-	if (!time) {
++	/* make sure the device is up: start bit (7) is 0 and done bit (6) is 1 */
++	ret = regmap_read_poll_timeout(st->regmap, LTC2983_STATUS_REG, status,
++				       LTC2983_STATUS_UP(status) == 1, 25000,
++				       25000 * 10);
++	if (ret) {
+ 		dev_err(&st->spi->dev, "Device startup timed out\n");
+-		return -ETIMEDOUT;
++		return ret;
+ 	}
+ 
+ 	st->iio_chan = devm_kzalloc(&st->spi->dev,
+@@ -1492,10 +1493,11 @@ static int ltc2983_probe(struct spi_device *spi)
+ 	ret = ltc2983_parse_dt(st);
+ 	if (ret)
+ 		return ret;
+-	/*
+-	 * let's request the irq now so it is used to sync the device
+-	 * startup in ltc2983_setup()
+-	 */
++
++	ret = ltc2983_setup(st, true);
++	if (ret)
++		return ret;
++
+ 	ret = devm_request_irq(&spi->dev, spi->irq, ltc2983_irq_handler,
+ 			       IRQF_TRIGGER_RISING, name, st);
+ 	if (ret) {
+@@ -1503,10 +1505,6 @@ static int ltc2983_probe(struct spi_device *spi)
+ 		return ret;
+ 	}
+ 
+-	ret = ltc2983_setup(st, true);
+-	if (ret)
+-		return ret;
+-
+ 	indio_dev->name = name;
+ 	indio_dev->num_channels = st->iio_channels;
+ 	indio_dev->channels = st->iio_chan;
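
The ltc2983 fix stops relying on the interrupt to signal startup and instead
polls the status register until the start bit (7) clears and the done bit (6)
sets. A userspace sketch of the poll-until-condition-or-timeout loop that
regmap_read_poll_timeout() expands to, with a faked status register and POSIX
nanosleep standing in for the kernel's sleep primitive:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define STATUS_START	(1u << 7)
#define STATUS_DONE	(1u << 6)

/* Fake register read: reports "done" after a few polls. */
static uint32_t read_status(void)
{
	static int polls;
	return ++polls < 3 ? STATUS_START : STATUS_DONE;
}

/* Poll until bits 7:6 read as 01 (start clear, done set), or time out. */
static int wait_device_up(long sleep_us, long timeout_us)
{
	long waited = 0;

	for (;;) {
		if ((read_status() >> 6) == 1)
			return 0;
		if (waited >= timeout_us)
			return -1;	/* -ETIMEDOUT in the kernel */
		nanosleep(&(struct timespec){ .tv_nsec = sleep_us * 1000 }, NULL);
		waited += sleep_us;
	}
}

int main(void)
{
	printf("up: %d\n", wait_device_up(25000, 25000 * 10));
	return 0;
}
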
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index 42261152b4892..2b47073c61a65 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -1186,29 +1186,34 @@ static int __init iw_cm_init(void)
+ 
+ 	ret = iwpm_init(RDMA_NL_IWCM);
+ 	if (ret)
+-		pr_err("iw_cm: couldn't init iwpm\n");
+-	else
+-		rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
++		return ret;
++
+ 	iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0);
+ 	if (!iwcm_wq)
+-		return -ENOMEM;
++		goto err_alloc;
+ 
+ 	iwcm_ctl_table_hdr = register_net_sysctl(&init_net, "net/iw_cm",
+ 						 iwcm_ctl_table);
+ 	if (!iwcm_ctl_table_hdr) {
+ 		pr_err("iw_cm: couldn't register sysctl paths\n");
+-		destroy_workqueue(iwcm_wq);
+-		return -ENOMEM;
++		goto err_sysctl;
+ 	}
+ 
++	rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
+ 	return 0;
++
++err_sysctl:
++	destroy_workqueue(iwcm_wq);
++err_alloc:
++	iwpm_exit(RDMA_NL_IWCM);
++	return -ENOMEM;
+ }
+ 
+ static void __exit iw_cm_cleanup(void)
+ {
++	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	unregister_net_sysctl_table(iwcm_ctl_table_hdr);
+ 	destroy_workqueue(iwcm_wq);
+-	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	iwpm_exit(RDMA_NL_IWCM);
+ }
+ 
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index be6d3ff0f1be2..29c9df9f25aa3 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -717,7 +717,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
+ 
+ 	qp->qp_handle = create_qp_resp.qp_handle;
+ 	qp->ibqp.qp_num = create_qp_resp.qp_num;
+-	qp->ibqp.qp_type = init_attr->qp_type;
+ 	qp->max_send_wr = init_attr->cap.max_send_wr;
+ 	qp->max_recv_wr = init_attr->cap.max_recv_wr;
+ 	qp->max_send_sge = init_attr->cap.max_send_sge;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 0986aa065418a..34106e5be6794 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -650,12 +650,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ 
+ 	ppd->pkeys[default_pkey_idx] = DEFAULT_P_KEY;
+ 	ppd->part_enforce |= HFI1_PART_ENFORCE_IN;
+-
+-	if (loopback) {
+-		dd_dev_err(dd, "Faking data partition 0x8001 in idx %u\n",
+-			   !default_pkey_idx);
+-		ppd->pkeys[!default_pkey_idx] = 0x8001;
+-	}
++	ppd->pkeys[0] = 0x8001;
+ 
+ 	INIT_WORK(&ppd->link_vc_work, handle_verify_cap);
+ 	INIT_WORK(&ppd->link_up_work, handle_link_up);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 991f65269fa61..8518b1571f2c6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -496,6 +496,12 @@ struct hns_roce_bank {
+ 	u32 next; /* Next ID to allocate. */
+ };
+ 
++struct hns_roce_idx_table {
++	u32 *spare_idx;
++	u32 head;
++	u32 tail;
++};
++
+ struct hns_roce_qp_table {
+ 	struct hns_roce_hem_table	qp_table;
+ 	struct hns_roce_hem_table	irrl_table;
+@@ -504,6 +510,7 @@ struct hns_roce_qp_table {
+ 	struct mutex			scc_mutex;
+ 	struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM];
+ 	struct mutex bank_mutex;
++	struct hns_roce_idx_table	idx_table;
+ };
+ 
+ struct hns_roce_cq_table {
+@@ -1146,7 +1153,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ void hns_roce_init_pd_table(struct hns_roce_dev *hr_dev);
+ void hns_roce_init_mr_table(struct hns_roce_dev *hr_dev);
+ void hns_roce_init_cq_table(struct hns_roce_dev *hr_dev);
+-void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);
++int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);
+ int hns_roce_init_srq_table(struct hns_roce_dev *hr_dev);
+ void hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 594d4cef31b36..bf4d9f6658ff9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -4114,6 +4114,9 @@ static void modify_qp_reset_to_init(struct ib_qp *ibqp,
+ 	if (hr_qp->en_flags & HNS_ROCE_QP_CAP_RQ_RECORD_DB)
+ 		hr_reg_enable(context, QPC_RQ_RECORD_EN);
+ 
++	if (hr_qp->en_flags & HNS_ROCE_QP_CAP_OWNER_DB)
++		hr_reg_enable(context, QPC_OWNER_MODE);
++
+ 	hr_reg_write(context, QPC_RQ_DB_RECORD_ADDR_L,
+ 		     lower_32_bits(hr_qp->rdb.dma) >> 1);
+ 	hr_reg_write(context, QPC_RQ_DB_RECORD_ADDR_H,
+@@ -4486,9 +4489,6 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
+ 
+ 	hr_reg_clear(qpc_mask, QPC_CHECK_FLG);
+ 
+-	hr_reg_write(context, QPC_LSN, 0x100);
+-	hr_reg_clear(qpc_mask, QPC_LSN);
+-
+ 	hr_reg_clear(qpc_mask, QPC_V2_IRRL_HEAD);
+ 
+ 	return 0;
+@@ -4507,15 +4507,23 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ {
+ 	const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
++	u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx;
++	u32 *head = &hr_dev->qp_table.idx_table.head;
++	u32 *tail = &hr_dev->qp_table.idx_table.tail;
+ 	struct hns_roce_dip *hr_dip;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
+ 
++	spare_idx[*tail] = ibqp->qp_num;
++	*tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1);
++
+ 	list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
+-		if (!memcmp(grh->dgid.raw, hr_dip->dgid, 16))
++		if (!memcmp(grh->dgid.raw, hr_dip->dgid, 16)) {
++			*dip_idx = hr_dip->dip_idx;
+ 			goto out;
++		}
+ 	}
+ 
+ 	/* If no dgid is found, a new dip and a mapping between dgid and
+@@ -4528,7 +4536,8 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 	}
+ 
+ 	memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
+-	hr_dip->dip_idx = *dip_idx = ibqp->qp_num;
++	hr_dip->dip_idx = *dip_idx = spare_idx[*head];
++	*head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1);
+ 	list_add_tail(&hr_dip->node, &hr_dev->dip_list);
+ 
+ out:
+@@ -5127,7 +5136,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ 
+ 	qp_attr->rq_psn = hr_reg_read(&context, QPC_RX_REQ_EPSN);
+ 	qp_attr->sq_psn = (u32)hr_reg_read(&context, QPC_SQ_CUR_PSN);
+-	qp_attr->dest_qp_num = (u8)hr_reg_read(&context, QPC_DQPN);
++	qp_attr->dest_qp_num = hr_reg_read(&context, QPC_DQPN);
+ 	qp_attr->qp_access_flags =
+ 		((hr_reg_read(&context, QPC_RRE)) << V2_QP_RRE_S) |
+ 		((hr_reg_read(&context, QPC_RWE)) << V2_QP_RWE_S) |
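
The hns_roce change above stops reusing the QP number directly as the DIP
index and instead recycles indices through a spare array driven by head and
tail cursors (serialised by dip_list_lock in the driver). A single-threaded
toy sketch of the recycling scheme, with a made-up capacity:

#include <stdint.h>
#include <stdio.h>

#define NUM_QPS 8			/* stand-in for caps.num_qps */

static uint32_t spare_idx[NUM_QPS];
static uint32_t head, tail;

/* QPs entering the pool push their number at the tail... */
static void spare_put(uint32_t qpn)
{
	spare_idx[tail] = qpn;
	tail = (tail == NUM_QPS - 1) ? 0 : tail + 1;
}

/* ...and new DIP contexts draw a recycled number from the head. */
static uint32_t spare_get(void)
{
	uint32_t idx = spare_idx[head];

	head = (head == NUM_QPS - 1) ? 0 : head + 1;
	return idx;
}

int main(void)
{
	spare_put(3);
	spare_put(5);
	printf("dip_idx=%u dip_idx=%u\n", spare_get(), spare_get());
	return 0;
}
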
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index b8a09d411e2e5..68c8c4b225ca3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1447,7 +1447,7 @@ struct hns_roce_v2_priv {
+ 
+ struct hns_roce_dip {
+ 	u8 dgid[GID_LEN_V2];
+-	u8 dip_idx;
++	u32 dip_idx;
+ 	struct list_head node;	/* all dips are on a list */
+ };
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index cc6eab14a2220..217aad8d9bd93 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -748,6 +748,12 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 		goto err_uar_table_free;
+ 	}
+ 
++	ret = hns_roce_init_qp_table(hr_dev);
++	if (ret) {
++		dev_err(dev, "Failed to init qp_table.\n");
++		goto err_uar_table_free;
++	}
++
+ 	hns_roce_init_pd_table(hr_dev);
+ 
+ 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC)
+@@ -757,8 +763,6 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 
+ 	hns_roce_init_cq_table(hr_dev);
+ 
+-	hns_roce_init_qp_table(hr_dev);
+-
+ 	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) {
+ 		ret = hns_roce_init_srq_table(hr_dev);
+ 		if (ret) {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 006c84bb3f9fd..7089ac7802913 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -352,7 +352,9 @@ struct ib_mr *hns_roce_rereg_user_mr(struct ib_mr *ibmr, int flags, u64 start,
+ free_cmd_mbox:
+ 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ 
+-	return ERR_PTR(ret);
++	if (ret)
++		return ERR_PTR(ret);
++	return NULL;
+ }
+ 
+ int hns_roce_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index b101b7e578f25..a6d1e44b75cf7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -848,7 +848,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_out;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+ 		}
+ 
+ 		if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) {
+@@ -861,7 +860,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_sdb;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+ 		}
+ 	} else {
+ 		if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
+@@ -1073,6 +1071,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	if (udata) {
++		resp.cap_flags = hr_qp->en_flags;
+ 		ret = ib_copy_to_udata(udata, &resp,
+ 				       min(udata->outlen, sizeof(resp)));
+ 		if (ret) {
+@@ -1171,14 +1170,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
+ 	if (!hr_qp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	if (init_attr->qp_type == IB_QPT_XRC_INI)
+-		init_attr->recv_cq = NULL;
+-
+-	if (init_attr->qp_type == IB_QPT_XRC_TGT) {
++	if (init_attr->qp_type == IB_QPT_XRC_TGT)
+ 		hr_qp->xrcdn = to_hr_xrcd(init_attr->xrcd)->xrcdn;
+-		init_attr->recv_cq = NULL;
+-		init_attr->send_cq = NULL;
+-	}
+ 
+ 	if (init_attr->qp_type == IB_QPT_GSI) {
+ 		hr_qp->port = init_attr->port_num - 1;
+@@ -1429,12 +1422,17 @@ bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, u32 nreq,
+ 	return cur + nreq >= hr_wq->wqe_cnt;
+ }
+ 
+-void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
++int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 	unsigned int reserved_from_bot;
+ 	unsigned int i;
+ 
++	qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps,
++					sizeof(u32), GFP_KERNEL);
++	if (!qp_table->idx_table.spare_idx)
++		return -ENOMEM;
++
+ 	mutex_init(&qp_table->scc_mutex);
+ 	mutex_init(&qp_table->bank_mutex);
+ 	xa_init(&hr_dev->qp_table_xa);
+@@ -1452,6 +1450,8 @@ void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ 					       HNS_ROCE_QP_BANK_NUM - 1;
+ 		hr_dev->qp_table.bank[i].next = hr_dev->qp_table.bank[i].min;
+ 	}
++
++	return 0;
+ }
+ 
+ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+@@ -1460,4 +1460,5 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+ 
+ 	for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
+ 		ida_destroy(&hr_dev->qp_table.bank[i].ida);
++	kfree(hr_dev->qp_table.idx_table.spare_idx);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index a77db29f83914..fd88b9ae96fe8 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1906,7 +1906,6 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
+ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 			     struct mlx5_create_qp_params *params)
+ {
+-	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+ 	struct ib_qp_init_attr *attr = params->attr;
+ 	u32 uidx = params->uidx;
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+@@ -1926,8 +1925,6 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 	if (!in)
+ 		return -ENOMEM;
+ 
+-	if (MLX5_CAP_GEN(mdev, ece_support) && ucmd)
+-		MLX5_SET(create_qp_in, in, ece, ucmd->ece_options);
+ 	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+ 
+ 	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_XRC);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index f2c40e50f25ea..ece3205531b8e 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -478,7 +478,7 @@ static int rtrs_post_send_rdma(struct rtrs_clt_con *con,
+ 	 * From time to time we have to post signalled sends,
+ 	 * or send queue will fill up and only QP reset can help.
+ 	 */
+-	flags = atomic_inc_return(&con->io_cnt) % sess->queue_depth ?
++	flags = atomic_inc_return(&con->c.wr_cnt) % sess->s.signal_interval ?
+ 			0 : IB_SEND_SIGNALED;
+ 
+ 	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, req->iu->dma_addr,
+@@ -680,6 +680,7 @@ static void rtrs_clt_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	case IB_WC_RDMA_WRITE:
+ 		/*
+ 		 * post_send() RDMA write completions of IO reqs (read/write)
++		 * and hb.
+ 		 */
+ 		break;
+ 
+@@ -1043,7 +1044,7 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
+ 	 * From time to time we have to post signalled sends,
+ 	 * or send queue will fill up and only QP reset can help.
+ 	 */
+-	flags = atomic_inc_return(&con->io_cnt) % sess->queue_depth ?
++	flags = atomic_inc_return(&con->c.wr_cnt) % sess->s.signal_interval ?
+ 			0 : IB_SEND_SIGNALED;
+ 
+ 	ib_dma_sync_single_for_device(sess->s.dev->ib_dev, req->iu->dma_addr,
+@@ -1601,7 +1602,8 @@ static int create_con(struct rtrs_clt_sess *sess, unsigned int cid)
+ 	con->cpu  = (cid ? cid - 1 : 0) % nr_cpu_ids;
+ 	con->c.cid = cid;
+ 	con->c.sess = &sess->s;
+-	atomic_set(&con->io_cnt, 0);
++	/* Align with srv, init as 1 */
++	atomic_set(&con->c.wr_cnt, 1);
+ 	mutex_init(&con->con_mutex);
+ 
+ 	sess->s.con[cid] = &con->c;
+@@ -1678,6 +1680,7 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
+ 			      sess->queue_depth * 3 + 1);
+ 		max_send_sge = 2;
+ 	}
++	atomic_set(&con->c.sq_wr_avail, max_send_wr);
+ 	cq_num = max_send_wr + max_recv_wr;
+ 	/* alloc iu to recv new rkey reply when server reports flags set */
+ 	if (sess->flags & RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
+@@ -1848,6 +1851,8 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 				return -ENOMEM;
+ 		}
+ 		sess->queue_depth = queue_depth;
++		sess->s.signal_interval = min_not_zero(queue_depth,
++						(unsigned short) SERVICE_CON_QUEUE_DEPTH);
+ 		sess->max_hdr_size = le32_to_cpu(msg->max_hdr_size);
+ 		sess->max_io_size = le32_to_cpu(msg->max_io_size);
+ 		sess->flags = le32_to_cpu(msg->flags);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+index e276a2dfcf7c7..3c3ff094588cb 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+@@ -74,7 +74,6 @@ struct rtrs_clt_con {
+ 	u32			queue_num;
+ 	unsigned int		cpu;
+ 	struct mutex		con_mutex;
+-	atomic_t		io_cnt;
+ 	int			cm_err;
+ };
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 36f184a3b6761..119aa3f7eafe2 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -96,6 +96,8 @@ struct rtrs_con {
+ 	struct rdma_cm_id	*cm_id;
+ 	unsigned int		cid;
+ 	int                     nr_cqe;
++	atomic_t		wr_cnt;
++	atomic_t		sq_wr_avail;
+ };
+ 
+ struct rtrs_sess {
+@@ -108,6 +110,7 @@ struct rtrs_sess {
+ 	unsigned int		con_num;
+ 	unsigned int		irq_con_num;
+ 	unsigned int		recon_cnt;
++	unsigned int		signal_interval;
+ 	struct rtrs_ib_dev	*dev;
+ 	int			dev_ref;
+ 	struct ib_cqe		*hb_cqe;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 3df2900861697..cd9a4ccf4c289 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -201,7 +201,6 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 	struct rtrs_srv_sess *sess = to_srv_sess(s);
+ 	dma_addr_t dma_addr = sess->dma_addr[id->msg_id];
+ 	struct rtrs_srv_mr *srv_mr;
+-	struct rtrs_srv *srv = sess->srv;
+ 	struct ib_send_wr inv_wr;
+ 	struct ib_rdma_wr imm_wr;
+ 	struct ib_rdma_wr *wr = NULL;
+@@ -269,7 +268,7 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
+ 	 * From time to time we have to post signaled sends,
+ 	 * or send queue will fill up and only QP reset can help.
+ 	 */
+-	flags = (atomic_inc_return(&id->con->wr_cnt) % srv->queue_depth) ?
++	flags = (atomic_inc_return(&id->con->c.wr_cnt) % s->signal_interval) ?
+ 		0 : IB_SEND_SIGNALED;
+ 
+ 	if (need_inval) {
+@@ -347,7 +346,6 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 	struct ib_send_wr inv_wr, *wr = NULL;
+ 	struct ib_rdma_wr imm_wr;
+ 	struct ib_reg_wr rwr;
+-	struct rtrs_srv *srv = sess->srv;
+ 	struct rtrs_srv_mr *srv_mr;
+ 	bool need_inval = false;
+ 	enum ib_send_flags flags;
+@@ -396,7 +394,7 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ 	 * From time to time we have to post signalled sends,
+ 	 * or send queue will fill up and only QP reset can help.
+ 	 */
+-	flags = (atomic_inc_return(&con->wr_cnt) % srv->queue_depth) ?
++	flags = (atomic_inc_return(&con->c.wr_cnt) % s->signal_interval) ?
+ 		0 : IB_SEND_SIGNALED;
+ 	imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
+ 	imm_wr.wr.next = NULL;
+@@ -509,11 +507,11 @@ bool rtrs_srv_resp_rdma(struct rtrs_srv_op *id, int status)
+ 		ib_update_fast_reg_key(mr->mr, ib_inc_rkey(mr->mr->rkey));
+ 	}
+ 	if (unlikely(atomic_sub_return(1,
+-				       &con->sq_wr_avail) < 0)) {
++				       &con->c.sq_wr_avail) < 0)) {
+ 		rtrs_err(s, "IB send queue full: sess=%s cid=%d\n",
+ 			 kobject_name(&sess->kobj),
+ 			 con->c.cid);
+-		atomic_add(1, &con->sq_wr_avail);
++		atomic_add(1, &con->c.sq_wr_avail);
+ 		spin_lock(&con->rsp_wr_wait_lock);
+ 		list_add_tail(&id->wait_list, &con->rsp_wr_wait_list);
+ 		spin_unlock(&con->rsp_wr_wait_lock);
+@@ -1268,8 +1266,9 @@ static void rtrs_srv_rdma_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	case IB_WC_SEND:
+ 		/*
+ 		 * post_send() RDMA write completions of IO reqs (read/write)
++		 * and hb.
+ 		 */
+-		atomic_add(srv->queue_depth, &con->sq_wr_avail);
++		atomic_add(s->signal_interval, &con->c.sq_wr_avail);
+ 
+ 		if (unlikely(!list_empty_careful(&con->rsp_wr_wait_list)))
+ 			rtrs_rdma_process_wr_wait_list(con);
+@@ -1648,7 +1647,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 	con->c.cm_id = cm_id;
+ 	con->c.sess = &sess->s;
+ 	con->c.cid = cid;
+-	atomic_set(&con->wr_cnt, 1);
++	atomic_set(&con->c.wr_cnt, 1);
+ 	wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr;
+ 
+ 	if (con->c.cid == 0) {
+@@ -1659,6 +1658,8 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 		max_send_wr = min_t(int, wr_limit,
+ 				    SERVICE_CON_QUEUE_DEPTH * 2 + 2);
+ 		max_recv_wr = max_send_wr;
++		s->signal_interval = min_not_zero(srv->queue_depth,
++						  (size_t)SERVICE_CON_QUEUE_DEPTH);
+ 	} else {
+ 		/* when always_invalidate is enabled, we need linv+rinv+mr+imm */
+ 		if (always_invalidate)
+@@ -1679,7 +1680,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 		 */
+ 	}
+ 	cq_num = max_send_wr + max_recv_wr;
+-	atomic_set(&con->sq_wr_avail, max_send_wr);
++	atomic_set(&con->c.sq_wr_avail, max_send_wr);
+ 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
+ 
+ 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.h b/drivers/infiniband/ulp/rtrs/rtrs-srv.h
+index f8da2e3f0bdac..e81774f5acd33 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.h
+@@ -42,8 +42,6 @@ struct rtrs_srv_stats {
+ 
+ struct rtrs_srv_con {
+ 	struct rtrs_con		c;
+-	atomic_t		wr_cnt;
+-	atomic_t		sq_wr_avail;
+ 	struct list_head	rsp_wr_wait_list;
+ 	spinlock_t		rsp_wr_wait_lock;
+ };
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 61919ebd92b2d..0a4b4e1b5e5ff 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -187,10 +187,16 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
+ 				    struct ib_send_wr *head)
+ {
+ 	struct ib_rdma_wr wr;
++	struct rtrs_sess *sess = con->sess;
++	enum ib_send_flags sflags;
++
++	atomic_dec_if_positive(&con->sq_wr_avail);
++	sflags = (atomic_inc_return(&con->wr_cnt) % sess->signal_interval) ?
++		0 : IB_SEND_SIGNALED;
+ 
+ 	wr = (struct ib_rdma_wr) {
+ 		.wr.wr_cqe	= cqe,
+-		.wr.send_flags	= flags,
++		.wr.send_flags	= sflags,
+ 		.wr.opcode	= IB_WR_RDMA_WRITE_WITH_IMM,
+ 		.wr.ex.imm_data	= cpu_to_be32(imm_data),
+ 	};
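
Across rtrs, a send is now flagged IB_SEND_SIGNALED once every
signal_interval posts instead of once per queue_depth, so completions drain
the send queue at a rate independent of the queue size. A sketch of the
counting idiom using C11 atomics (the interval and flag values are invented):

#include <stdatomic.h>
#include <stdio.h>

#define SIGNAL_INTERVAL	16	/* stand-in for sess->signal_interval */
#define SEND_SIGNALED	1

static atomic_int wr_cnt = 1;	/* init as 1, matching clt and srv */

/* Every SIGNAL_INTERVAL-th post gets the signalled flag so completions
 * periodically free send-queue slots. */
static int next_send_flags(void)
{
	/* atomic_fetch_add() returns the old value; +1 mirrors the
	 * kernel's atomic_inc_return(). */
	return (atomic_fetch_add(&wr_cnt, 1) + 1) % SIGNAL_INTERVAL ?
		0 : SEND_SIGNALED;
}

int main(void)
{
	for (int i = 0; i < 40; i++)
		if (next_send_flags())
			printf("signalled at post %d\n", i);
	return 0;
}
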
+diff --git a/drivers/input/mouse/elan_i2c.h b/drivers/input/mouse/elan_i2c.h
+index dc4a240f44895..3c84deefa327d 100644
+--- a/drivers/input/mouse/elan_i2c.h
++++ b/drivers/input/mouse/elan_i2c.h
+@@ -55,8 +55,9 @@
+ #define ETP_FW_PAGE_SIZE_512	512
+ #define ETP_FW_SIGNATURE_SIZE	6
+ 
+-#define ETP_PRODUCT_ID_DELBIN	0x00C2
++#define ETP_PRODUCT_ID_WHITEBOX	0x00B8
+ #define ETP_PRODUCT_ID_VOXEL	0x00BF
++#define ETP_PRODUCT_ID_DELBIN	0x00C2
+ #define ETP_PRODUCT_ID_MAGPIE	0x0120
+ #define ETP_PRODUCT_ID_BOBBA	0x0121
+ 
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index dad22c1ea6a0f..47af62c122672 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -105,6 +105,7 @@ static u32 elan_i2c_lookup_quirks(u16 ic_type, u16 product_id)
+ 		u32 quirks;
+ 	} elan_i2c_quirks[] = {
+ 		{ 0x0D, ETP_PRODUCT_ID_DELBIN, ETP_QUIRK_QUICK_WAKEUP },
++		{ 0x0D, ETP_PRODUCT_ID_WHITEBOX, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x10, ETP_PRODUCT_ID_VOXEL, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x14, ETP_PRODUCT_ID_MAGPIE, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x14, ETP_PRODUCT_ID_BOBBA, ETP_QUIRK_QUICK_WAKEUP },
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index c11bc8b833b8e..d5552e2c160d2 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -28,12 +28,12 @@
+ #define VCMD_CMD_ALLOC			0x1
+ #define VCMD_CMD_FREE			0x2
+ #define VCMD_VRSP_IP			0x1
+-#define VCMD_VRSP_SC(e)			(((e) >> 1) & 0x3)
++#define VCMD_VRSP_SC(e)			(((e) & 0xff) >> 1)
+ #define VCMD_VRSP_SC_SUCCESS		0
+-#define VCMD_VRSP_SC_NO_PASID_AVAIL	2
+-#define VCMD_VRSP_SC_INVALID_PASID	2
+-#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 8) & 0xfffff)
+-#define VCMD_CMD_OPERAND(e)		((e) << 8)
++#define VCMD_VRSP_SC_NO_PASID_AVAIL	16
++#define VCMD_VRSP_SC_INVALID_PASID	16
++#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 16) & 0xfffff)
++#define VCMD_CMD_OPERAND(e)		((e) << 16)
+ /*
+  * Domain ID reserved for pasid entries programmed for first-level
+  * only and pass-through transfer modes.
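
The pasid.h fix moves the VCMD response fields to their architected
positions: IP in bit 0, status code in bits 7:1, result PASID starting at
bit 16. A quick self-checking sketch of the field macros, assuming that
layout:

#include <stdint.h>
#include <stdio.h>

#define VRSP_IP(e)		((e) & 0x1)
#define VRSP_SC(e)		(((e) & 0xff) >> 1)
#define VRSP_RESULT_PASID(e)	(((e) >> 16) & 0xfffff)
#define CMD_OPERAND(pasid)	((uint64_t)(pasid) << 16)

#define SC_SUCCESS		0
#define SC_NO_PASID_AVAIL	16	/* raw code, no longer pre-shifted */

int main(void)
{
	/* Build a fake successful response carrying PASID 0x1234. */
	uint64_t rsp = CMD_OPERAND(0x1234) | (SC_SUCCESS << 1);

	printf("ip=%llu sc=%llu pasid=%#llx\n",
	       (unsigned long long)VRSP_IP(rsp),
	       (unsigned long long)VRSP_SC(rsp),
	       (unsigned long long)VRSP_RESULT_PASID(rsp));
	return 0;
}
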
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 67a42b514429e..4f907e8f3894b 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -168,7 +168,8 @@ static void cmdq_task_insert_into_thread(struct cmdq_task *task)
+ 	dma_sync_single_for_cpu(dev, prev_task->pa_base,
+ 				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
+-		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
++		(u64)CMDQ_JUMP_BY_PA << 32 |
++		(task->pa_base >> task->cmdq->shift_pa);
+ 	dma_sync_single_for_device(dev, prev_task->pa_base,
+ 				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 
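
The mtk-cmdq fix shifts the task's physical address by the per-SoC shift_pa
before packing it into the 64-bit jump command, matching how the hardware
decodes the low word. A sketch of the packing, with invented opcode and
shift values:

#include <stdint.h>
#include <stdio.h>

#define CMDQ_JUMP_BY_PA	0x10	/* hypothetical opcode */
#define SHIFT_PA	3	/* per-SoC address shift */

/* 64-bit command word: opcode in the high half, shifted PA in the low. */
static uint64_t make_jump_cmd(uint64_t pa_base)
{
	return (uint64_t)CMDQ_JUMP_BY_PA << 32 | (pa_base >> SHIFT_PA);
}

int main(void)
{
	printf("cmd=%#llx\n", (unsigned long long)make_jump_cmd(0x40001000));
	return 0;
}
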
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 50f4cbd600d58..8c0c3d1f54bba 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2661,7 +2661,12 @@ static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
+ 	struct crypt_config *cc = pool_data;
+ 	struct page *page;
+ 
+-	if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
++	/*
++	 * Note, percpu_counter_read_positive() may over (and under) estimate
++	 * the current usage by at most (batch - 1) * num_online_cpus() pages,
++	 * but avoids potential spinlock contention of an exact result.
++	 */
++	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) &&
+ 	    likely(gfp_mask & __GFP_NORETRY))
+ 		return NULL;
+ 
+diff --git a/drivers/media/cec/platform/stm32/stm32-cec.c b/drivers/media/cec/platform/stm32/stm32-cec.c
+index ea4b1ebfca991..0ffd89712536b 100644
+--- a/drivers/media/cec/platform/stm32/stm32-cec.c
++++ b/drivers/media/cec/platform/stm32/stm32-cec.c
+@@ -305,14 +305,16 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 
+ 	cec->clk_hdmi_cec = devm_clk_get(&pdev->dev, "hdmi-cec");
+ 	if (IS_ERR(cec->clk_hdmi_cec) &&
+-	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER) {
++		ret = -EPROBE_DEFER;
++		goto err_unprepare_cec_clk;
++	}
+ 
+ 	if (!IS_ERR(cec->clk_hdmi_cec)) {
+ 		ret = clk_prepare(cec->clk_hdmi_cec);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Can't prepare hdmi-cec clock\n");
+-			return ret;
++			goto err_unprepare_cec_clk;
+ 		}
+ 	}
+ 
+@@ -324,19 +326,27 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 			CEC_NAME, caps,	CEC_MAX_LOG_ADDRS);
+ 	ret = PTR_ERR_OR_ZERO(cec->adap);
+ 	if (ret)
+-		return ret;
++		goto err_unprepare_hdmi_cec_clk;
+ 
+ 	ret = cec_register_adapter(cec->adap, &pdev->dev);
+-	if (ret) {
+-		cec_delete_adapter(cec->adap);
+-		return ret;
+-	}
++	if (ret)
++		goto err_delete_adapter;
+ 
+ 	cec_hw_init(cec);
+ 
+ 	platform_set_drvdata(pdev, cec);
+ 
+ 	return 0;
++
++err_delete_adapter:
++	cec_delete_adapter(cec->adap);
++
++err_unprepare_hdmi_cec_clk:
++	clk_unprepare(cec->clk_hdmi_cec);
++
++err_unprepare_cec_clk:
++	clk_unprepare(cec->clk_cec);
++	return ret;
+ }
+ 
+ static int stm32_cec_remove(struct platform_device *pdev)
+diff --git a/drivers/media/cec/platform/tegra/tegra_cec.c b/drivers/media/cec/platform/tegra/tegra_cec.c
+index 1ac0c70a59818..5e907395ca2e5 100644
+--- a/drivers/media/cec/platform/tegra/tegra_cec.c
++++ b/drivers/media/cec/platform/tegra/tegra_cec.c
+@@ -366,7 +366,11 @@ static int tegra_cec_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	clk_prepare_enable(cec->clk);
++	ret = clk_prepare_enable(cec->clk);
++	if (ret) {
++		dev_err(&pdev->dev, "Unable to prepare clock for CEC\n");
++		return ret;
++	}
+ 
+ 	/* set context info. */
+ 	cec->dev = &pdev->dev;
+@@ -446,9 +450,7 @@ static int tegra_cec_resume(struct platform_device *pdev)
+ 
+ 	dev_notice(&pdev->dev, "Resuming\n");
+ 
+-	clk_prepare_enable(cec->clk);
+-
+-	return 0;
++	return clk_prepare_enable(cec->clk);
+ }
+ #endif
+ 
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index 082796534b0ae..bb02354a48b81 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -2107,32 +2107,55 @@ static void dib8000_load_ana_fe_coefs(struct dib8000_state *state, const s16 *an
+ 			dib8000_write_word(state, 117 + mode, ana_fe[mode]);
+ }
+ 
+-static const u16 lut_prbs_2k[14] = {
+-	0, 0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF, 0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213
++static const u16 lut_prbs_2k[13] = {
++	0x423, 0x009, 0x5C7,
++	0x7A6, 0x3D8, 0x527,
++	0x7FF, 0x79B, 0x3D6,
++	0x3A2, 0x53B, 0x2F4,
++	0x213
+ };
+-static const u16 lut_prbs_4k[14] = {
+-	0, 0x208, 0x0C3, 0x7B9, 0x423, 0x5C7, 0x3D8, 0x7FF, 0x3D6, 0x53B, 0x213, 0x029, 0x0D0, 0x48E
++
++static const u16 lut_prbs_4k[13] = {
++	0x208, 0x0C3, 0x7B9,
++	0x423, 0x5C7, 0x3D8,
++	0x7FF, 0x3D6, 0x53B,
++	0x213, 0x029, 0x0D0,
++	0x48E
+ };
+-static const u16 lut_prbs_8k[14] = {
+-	0, 0x740, 0x069, 0x7DD, 0x208, 0x7B9, 0x5C7, 0x7FF, 0x53B, 0x029, 0x48E, 0x4C4, 0x367, 0x684
++
++static const u16 lut_prbs_8k[13] = {
++	0x740, 0x069, 0x7DD,
++	0x208, 0x7B9, 0x5C7,
++	0x7FF, 0x53B, 0x029,
++	0x48E, 0x4C4, 0x367,
++	0x684
+ };
+ 
+ static u16 dib8000_get_init_prbs(struct dib8000_state *state, u16 subchannel)
+ {
+ 	int sub_channel_prbs_group = 0;
++	int prbs_group;
+ 
+-	sub_channel_prbs_group = (subchannel / 3) + 1;
+-	dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n", sub_channel_prbs_group, subchannel, lut_prbs_8k[sub_channel_prbs_group]);
++	sub_channel_prbs_group = subchannel / 3;
++	if (sub_channel_prbs_group >= ARRAY_SIZE(lut_prbs_2k))
++		return 0;
+ 
+ 	switch (state->fe[0]->dtv_property_cache.transmission_mode) {
+ 	case TRANSMISSION_MODE_2K:
+-			return lut_prbs_2k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_2k[sub_channel_prbs_group];
++		break;
+ 	case TRANSMISSION_MODE_4K:
+-			return lut_prbs_4k[sub_channel_prbs_group];
++		prbs_group =  lut_prbs_4k[sub_channel_prbs_group];
++		break;
+ 	default:
+ 	case TRANSMISSION_MODE_8K:
+-			return lut_prbs_8k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_8k[sub_channel_prbs_group];
+ 	}
++
++	dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n",
++		sub_channel_prbs_group, subchannel, prbs_group);
++
++	return prbs_group;
+ }
+ 
+ static void dib8000_set_13seg_channel(struct dib8000_state *state)
+@@ -2409,10 +2432,8 @@ static void dib8000_set_isdbt_common_channel(struct dib8000_state *state, u8 seq
+ 	/* TSB or ISDBT ? apply it now */
+ 	if (c->isdbt_sb_mode) {
+ 		dib8000_set_sb_channel(state);
+-		if (c->isdbt_sb_subchannel < 14)
+-			init_prbs = dib8000_get_init_prbs(state, c->isdbt_sb_subchannel);
+-		else
+-			init_prbs = 0;
++		init_prbs = dib8000_get_init_prbs(state,
++						  c->isdbt_sb_subchannel);
+ 	} else {
+ 		dib8000_set_13seg_channel(state);
+ 		init_prbs = 0xfff;
+@@ -3004,6 +3025,7 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 
+ 	unsigned long *timeout = &state->timeout;
+ 	unsigned long now = jiffies;
++	u16 init_prbs;
+ #ifdef DIB8000_AGC_FREEZE
+ 	u16 agc1, agc2;
+ #endif
+@@ -3302,8 +3324,10 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 		break;
+ 
+ 	case CT_DEMOD_STEP_11:  /* 41 : init prbs autosearch */
+-		if (state->subchannel <= 41) {
+-			dib8000_set_subchannel_prbs(state, dib8000_get_init_prbs(state, state->subchannel));
++		init_prbs = dib8000_get_init_prbs(state, state->subchannel);
++
++		if (init_prbs) {
++			dib8000_set_subchannel_prbs(state, init_prbs);
+ 			*tune_state = CT_DEMOD_STEP_9;
+ 		} else {
+ 			*tune_state = CT_DEMOD_STOP;
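
dib8000_get_init_prbs() now bounds-checks the subchannel group against the
table size and lets 0 double as the "no PRBS, stop autosearch" sentinel,
which is what removes the magic subchannel tests at the call sites. A sketch
of the bounds-checked lookup:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const unsigned short lut_prbs_2k[13] = {
	0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF,
	0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213
};

/* 0 doubles as "no PRBS / stop", so out-of-range subchannels
 * (e.g. the old magic value 41) fall out naturally. */
static unsigned short get_init_prbs(unsigned int subchannel)
{
	unsigned int group = subchannel / 3;

	if (group >= ARRAY_SIZE(lut_prbs_2k))
		return 0;
	return lut_prbs_2k[group];
}

int main(void)
{
	printf("%#x %#x\n", get_init_prbs(7), get_init_prbs(41));
	return 0;
}
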
+diff --git a/drivers/media/i2c/imx258.c b/drivers/media/i2c/imx258.c
+index 7ab9e5f9f2676..81cdf37216ca7 100644
+--- a/drivers/media/i2c/imx258.c
++++ b/drivers/media/i2c/imx258.c
+@@ -23,7 +23,7 @@
+ #define IMX258_CHIP_ID			0x0258
+ 
+ /* V_TIMING internal */
+-#define IMX258_VTS_30FPS		0x0c98
++#define IMX258_VTS_30FPS		0x0c50
+ #define IMX258_VTS_30FPS_2K		0x0638
+ #define IMX258_VTS_30FPS_VGA		0x034c
+ #define IMX258_VTS_MAX			0xffff
+@@ -47,7 +47,7 @@
+ /* Analog gain control */
+ #define IMX258_REG_ANALOG_GAIN		0x0204
+ #define IMX258_ANA_GAIN_MIN		0
+-#define IMX258_ANA_GAIN_MAX		0x1fff
++#define IMX258_ANA_GAIN_MAX		480
+ #define IMX258_ANA_GAIN_STEP		1
+ #define IMX258_ANA_GAIN_DEFAULT		0x0
+ 
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 3a191e257fad0..ef726faee2a4c 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -1695,14 +1695,15 @@ static int tda1997x_query_dv_timings(struct v4l2_subdev *sd,
+ 				     struct v4l2_dv_timings *timings)
+ {
+ 	struct tda1997x_state *state = to_state(sd);
++	int ret;
+ 
+ 	v4l_dbg(1, debug, state->client, "%s\n", __func__);
+ 	memset(timings, 0, sizeof(struct v4l2_dv_timings));
+ 	mutex_lock(&state->lock);
+-	tda1997x_detect_std(state, timings);
++	ret = tda1997x_detect_std(state, timings);
+ 	mutex_unlock(&state->lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct v4l2_subdev_video_ops tda1997x_video_ops = {
+diff --git a/drivers/media/platform/ti-vpe/cal-camerarx.c b/drivers/media/platform/ti-vpe/cal-camerarx.c
+index 124a4e2bdefe0..e2e384a887ac2 100644
+--- a/drivers/media/platform/ti-vpe/cal-camerarx.c
++++ b/drivers/media/platform/ti-vpe/cal-camerarx.c
+@@ -845,7 +845,9 @@ struct cal_camerarx *cal_camerarx_create(struct cal_dev *cal,
+ 	if (ret)
+ 		goto error;
+ 
+-	cal_camerarx_sd_init_cfg(sd, NULL);
++	ret = cal_camerarx_sd_init_cfg(sd, NULL);
++	if (ret)
++		goto error;
+ 
+ 	ret = v4l2_device_register_subdev(&cal->v4l2_dev, sd);
+ 	if (ret)
+diff --git a/drivers/media/platform/ti-vpe/cal-video.c b/drivers/media/platform/ti-vpe/cal-video.c
+index 15fb5360cf13c..552619cb81a81 100644
+--- a/drivers/media/platform/ti-vpe/cal-video.c
++++ b/drivers/media/platform/ti-vpe/cal-video.c
+@@ -694,7 +694,7 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	spin_lock_irq(&ctx->dma.lock);
+ 	buf = list_first_entry(&ctx->dma.queue, struct cal_buffer, list);
+-	ctx->dma.pending = buf;
++	ctx->dma.active = buf;
+ 	list_del(&buf->list);
+ 	spin_unlock_irq(&ctx->dma.lock);
+ 
+diff --git a/drivers/media/rc/rc-loopback.c b/drivers/media/rc/rc-loopback.c
+index 1ba3f96ffa7dc..40ab66c850f23 100644
+--- a/drivers/media/rc/rc-loopback.c
++++ b/drivers/media/rc/rc-loopback.c
+@@ -42,7 +42,7 @@ static int loop_set_tx_mask(struct rc_dev *dev, u32 mask)
+ 
+ 	if ((mask & (RXMASK_REGULAR | RXMASK_LEARNING)) != mask) {
+ 		dprintk("invalid tx mask: %u\n", mask);
+-		return -EINVAL;
++		return 2;
+ 	}
+ 
+ 	dprintk("setting tx mask: %u\n", mask);
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 252136cc885ce..6acb8013de08b 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -899,8 +899,8 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u8 i;
+ 
+ 	if (chain->selector == NULL ||
+ 	    (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+@@ -908,22 +908,27 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ 		return 0;
+ 	}
+ 
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR, chain->selector->id,
+ 			     chain->dev->intfnum,  UVC_SU_INPUT_SELECT_CONTROL,
+-			     &i, 1);
+-	if (ret < 0)
+-		return ret;
++			     buf, 1);
++	if (!ret)
++		*input = *buf - 1;
+ 
+-	*input = i - 1;
+-	return 0;
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u32 i;
+ 
+ 	ret = uvc_acquire_privileges(handle);
+ 	if (ret < 0)
+@@ -939,10 +944,17 @@ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ 	if (input >= chain->selector->bNrInPins)
+ 		return -EINVAL;
+ 
+-	i = input + 1;
+-	return uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
+-			      chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
+-			      &i, 1);
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	*buf = input + 1;
++	ret = uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
++			     chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
++			     buf, 1);
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_queryctrl(struct file *file, void *fh,
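
The uvc ioctls now bounce the one-byte control value through a kmalloc'd
buffer: uvc_query_ctrl() ultimately issues a USB control transfer, and the
transfer buffer must not live on the stack, which may not be DMA-able. A
userspace sketch of the bounce-buffer shape, with the query stubbed out and
malloc standing in for kmalloc:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stub for uvc_query_ctrl(); the real call DMAs into buf. */
static int query_ctrl(unsigned char *buf, size_t len)
{
	memset(buf, 2, len);	/* pretend the device reports input #2 */
	return 0;
}

static int get_input(unsigned int *input)
{
	unsigned char *buf;
	int ret;

	buf = malloc(1);	/* kmalloc(1, GFP_KERNEL) in the driver */
	if (!buf)
		return -1;

	ret = query_ctrl(buf, 1);
	if (!ret)
		*input = *buf - 1;	/* selector pins are 1-based */

	free(buf);
	return ret;
}

int main(void)
{
	unsigned int input;

	if (!get_input(&input))
		printf("input=%u\n", input);
	return 0;
}
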
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 230d65a642178..af48705c704f8 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -196,7 +196,7 @@ bool v4l2_find_dv_timings_cap(struct v4l2_dv_timings *t,
+ 	if (!v4l2_valid_dv_timings(t, cap, fnc, fnc_handle))
+ 		return false;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		if (v4l2_valid_dv_timings(v4l2_dv_timings_presets + i, cap,
+ 					  fnc, fnc_handle) &&
+ 		    v4l2_match_dv_timings(t, v4l2_dv_timings_presets + i,
+@@ -218,7 +218,7 @@ bool v4l2_find_dv_timings_cea861_vic(struct v4l2_dv_timings *t, u8 vic)
+ {
+ 	unsigned int i;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		const struct v4l2_bt_timings *bt =
+ 			&v4l2_dv_timings_presets[i].bt;
+ 
+diff --git a/drivers/misc/pvpanic/pvpanic-pci.c b/drivers/misc/pvpanic/pvpanic-pci.c
+index a43c401017ae2..741116b3d9958 100644
+--- a/drivers/misc/pvpanic/pvpanic-pci.c
++++ b/drivers/misc/pvpanic/pvpanic-pci.c
+@@ -108,4 +108,6 @@ static struct pci_driver pvpanic_pci_driver = {
+ 	},
+ };
+ 
++MODULE_DEVICE_TABLE(pci, pvpanic_pci_id_tbl);
++
+ module_pci_driver(pvpanic_pci_driver);
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 93638ae2753af..4c26b19f5154a 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -97,7 +97,24 @@ static int sram_add_partition(struct sram_dev *sram, struct sram_reserve *block,
+ 	struct sram_partition *part = &sram->partition[sram->partitions];
+ 
+ 	mutex_init(&part->lock);
+-	part->base = sram->virt_base + block->start;
++
++	if (sram->config && sram->config->map_only_reserved) {
++		void __iomem *virt_base;
++
++		if (sram->no_memory_wc)
++			virt_base = devm_ioremap_resource(sram->dev, &block->res);
++		else
++			virt_base = devm_ioremap_resource_wc(sram->dev, &block->res);
++
++		if (IS_ERR(virt_base)) {
++			dev_err(sram->dev, "could not map SRAM at %pr\n", &block->res);
++			return PTR_ERR(virt_base);
++		}
++
++		part->base = virt_base;
++	} else {
++		part->base = sram->virt_base + block->start;
++	}
+ 
+ 	if (block->pool) {
+ 		ret = sram_add_pool(sram, block, start, part);
+@@ -198,6 +215,7 @@ static int sram_reserve_regions(struct sram_dev *sram, struct resource *res)
+ 
+ 		block->start = child_res.start - res->start;
+ 		block->size = resource_size(&child_res);
++		block->res = child_res;
+ 		list_add_tail(&block->list, &reserve_list);
+ 
+ 		if (of_find_property(child, "export", NULL))
+@@ -295,15 +313,17 @@ static int sram_reserve_regions(struct sram_dev *sram, struct resource *res)
+ 		 */
+ 		cur_size = block->start - cur_start;
+ 
+-		dev_dbg(sram->dev, "adding chunk 0x%lx-0x%lx\n",
+-			cur_start, cur_start + cur_size);
++		if (sram->pool) {
++			dev_dbg(sram->dev, "adding chunk 0x%lx-0x%lx\n",
++				cur_start, cur_start + cur_size);
+ 
+-		ret = gen_pool_add_virt(sram->pool,
+-				(unsigned long)sram->virt_base + cur_start,
+-				res->start + cur_start, cur_size, -1);
+-		if (ret < 0) {
+-			sram_free_partitions(sram);
+-			goto err_chunks;
++			ret = gen_pool_add_virt(sram->pool,
++					(unsigned long)sram->virt_base + cur_start,
++					res->start + cur_start, cur_size, -1);
++			if (ret < 0) {
++				sram_free_partitions(sram);
++				goto err_chunks;
++			}
+ 		}
+ 
+ 		/* next allocation after this reserved block */
+@@ -331,40 +351,63 @@ static int atmel_securam_wait(void)
+ 					10000, 500000);
+ }
+ 
++static const struct sram_config atmel_securam_config = {
++	.init = atmel_securam_wait,
++};
++
++/*
++ * SYSRAM contains areas that are not accessible by the
++ * kernel, such as the first 256K that is reserved for TZ.
++ * Accesses to those areas (including speculative accesses)
++ * trigger SErrors. As such we must map only the areas of
++ * SYSRAM specified in the device tree.
++ */
++static const struct sram_config tegra_sysram_config = {
++	.map_only_reserved = true,
++};
++
+ static const struct of_device_id sram_dt_ids[] = {
+ 	{ .compatible = "mmio-sram" },
+-	{ .compatible = "atmel,sama5d2-securam", .data = atmel_securam_wait },
++	{ .compatible = "atmel,sama5d2-securam", .data = &atmel_securam_config },
++	{ .compatible = "nvidia,tegra186-sysram", .data = &tegra_sysram_config },
++	{ .compatible = "nvidia,tegra194-sysram", .data = &tegra_sysram_config },
+ 	{}
+ };
+ 
+ static int sram_probe(struct platform_device *pdev)
+ {
++	const struct sram_config *config;
+ 	struct sram_dev *sram;
+ 	int ret;
+ 	struct resource *res;
+-	int (*init_func)(void);
++
++	config = of_device_get_match_data(&pdev->dev);
+ 
+ 	sram = devm_kzalloc(&pdev->dev, sizeof(*sram), GFP_KERNEL);
+ 	if (!sram)
+ 		return -ENOMEM;
+ 
+ 	sram->dev = &pdev->dev;
++	sram->no_memory_wc = of_property_read_bool(pdev->dev.of_node, "no-memory-wc");
++	sram->config = config;
++
++	if (!config || !config->map_only_reserved) {
++		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++		if (sram->no_memory_wc)
++			sram->virt_base = devm_ioremap_resource(&pdev->dev, res);
++		else
++			sram->virt_base = devm_ioremap_resource_wc(&pdev->dev, res);
++		if (IS_ERR(sram->virt_base)) {
++			dev_err(&pdev->dev, "could not map SRAM registers\n");
++			return PTR_ERR(sram->virt_base);
++		}
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	if (of_property_read_bool(pdev->dev.of_node, "no-memory-wc"))
+-		sram->virt_base = devm_ioremap_resource(&pdev->dev, res);
+-	else
+-		sram->virt_base = devm_ioremap_resource_wc(&pdev->dev, res);
+-	if (IS_ERR(sram->virt_base)) {
+-		dev_err(&pdev->dev, "could not map SRAM registers\n");
+-		return PTR_ERR(sram->virt_base);
++		sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY),
++						  NUMA_NO_NODE, NULL);
++		if (IS_ERR(sram->pool))
++			return PTR_ERR(sram->pool);
+ 	}
+ 
+-	sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY),
+-					  NUMA_NO_NODE, NULL);
+-	if (IS_ERR(sram->pool))
+-		return PTR_ERR(sram->pool);
+-
+ 	sram->clk = devm_clk_get(sram->dev, NULL);
+ 	if (IS_ERR(sram->clk))
+ 		sram->clk = NULL;
+@@ -378,15 +421,15 @@ static int sram_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, sram);
+ 
+-	init_func = of_device_get_match_data(&pdev->dev);
+-	if (init_func) {
+-		ret = init_func();
++	if (config && config->init) {
++		ret = config->init();
+ 		if (ret)
+ 			goto err_free_partitions;
+ 	}
+ 
+-	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+-		gen_pool_size(sram->pool) / 1024, sram->virt_base);
++	if (sram->pool)
++		dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
++			gen_pool_size(sram->pool) / 1024, sram->virt_base);
+ 
+ 	return 0;
+ 
+@@ -405,7 +448,7 @@ static int sram_remove(struct platform_device *pdev)
+ 
+ 	sram_free_partitions(sram);
+ 
+-	if (gen_pool_avail(sram->pool) < gen_pool_size(sram->pool))
++	if (sram->pool && gen_pool_avail(sram->pool) < gen_pool_size(sram->pool))
+ 		dev_err(sram->dev, "removed while SRAM allocated\n");
+ 
+ 	if (sram->clk)
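The sram rework above replaces the bare init-function pointer stored in the
OF match table with a per-variant struct sram_config, so new flags such as
map_only_reserved travel together with the init hook. A rough userspace
analog of that lookup-then-branch pattern (every name below is invented for
illustration, not the kernel API):

#include <stdio.h>
#include <string.h>

struct variant_config {
	int (*init)(void);
	int map_only_reserved;	/* skip mapping the whole bank */
};

static int securam_wait(void) { puts("waiting for securam"); return 0; }

static const struct variant_config securam_cfg = { .init = securam_wait };
static const struct variant_config sysram_cfg  = { .map_only_reserved = 1 };

struct match { const char *compatible; const struct variant_config *data; };

static const struct match match_table[] = {
	{ "mmio-sram", NULL },
	{ "atmel,sama5d2-securam", &securam_cfg },
	{ "nvidia,tegra186-sysram", &sysram_cfg },
	{ NULL, NULL },
};

static const struct variant_config *get_match_data(const char *compat)
{
	for (const struct match *m = match_table; m->compatible; m++)
		if (!strcmp(m->compatible, compat))
			return m->data;
	return NULL;
}

int main(void)
{
	const struct variant_config *cfg = get_match_data("nvidia,tegra186-sysram");

	if (!cfg || !cfg->map_only_reserved)
		puts("map the whole SRAM bank");
	else
		puts("map only the reserved regions");

	if (cfg && cfg->init)
		cfg->init();
	return 0;
}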
+diff --git a/drivers/misc/sram.h b/drivers/misc/sram.h
+index 9c1d21ff73476..d2058d8c8f1d2 100644
+--- a/drivers/misc/sram.h
++++ b/drivers/misc/sram.h
+@@ -5,6 +5,11 @@
+ #ifndef __SRAM_H
+ #define __SRAM_H
+ 
++struct sram_config {
++	int (*init)(void);
++	bool map_only_reserved;
++};
++
+ struct sram_partition {
+ 	void __iomem *base;
+ 
+@@ -15,8 +20,11 @@ struct sram_partition {
+ };
+ 
+ struct sram_dev {
++	const struct sram_config *config;
++
+ 	struct device *dev;
+ 	void __iomem *virt_base;
++	bool no_memory_wc;
+ 
+ 	struct gen_pool *pool;
+ 	struct clk *clk;
+@@ -29,6 +37,7 @@ struct sram_reserve {
+ 	struct list_head list;
+ 	u32 start;
+ 	u32 size;
++	struct resource res;
+ 	bool export;
+ 	bool pool;
+ 	bool protect_exec;
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index 880c33ab9f47b..94ebf7f3fd58a 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -2243,7 +2243,8 @@ int vmci_qp_broker_map(struct vmci_handle handle,
+ 
+ 	result = VMCI_SUCCESS;
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    !QPBROKERSTATE_HAS_MEM(entry)) {
+ 		struct vmci_qp_page_store page_store;
+ 
+ 		page_store.pages = guest_mem;
+@@ -2350,7 +2351,8 @@ int vmci_qp_broker_unmap(struct vmci_handle handle,
+ 		goto out;
+ 	}
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    QPBROKERSTATE_HAS_MEM(entry)) {
+ 		qp_acquire_queue_mutex(entry->produce_q);
+ 		result = qp_save_headers(entry);
+ 		if (result < VMCI_SUCCESS)
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index ce8aed5629295..c3ecec3f6ddc6 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -98,6 +98,11 @@ static int max_devices;
+ static DEFINE_IDA(mmc_blk_ida);
+ static DEFINE_IDA(mmc_rpmb_ida);
+ 
++struct mmc_blk_busy_data {
++	struct mmc_card *card;
++	u32 status;
++};
++
+ /*
+  * There is one mmc_blk_data per slot.
+  */
+@@ -417,42 +422,6 @@ static int mmc_blk_ioctl_copy_to_user(struct mmc_ioc_cmd __user *ic_ptr,
+ 	return 0;
+ }
+ 
+-static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms,
+-			    u32 *resp_errs)
+-{
+-	unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms);
+-	int err = 0;
+-	u32 status;
+-
+-	do {
+-		bool done = time_after(jiffies, timeout);
+-
+-		err = __mmc_send_status(card, &status, 5);
+-		if (err) {
+-			dev_err(mmc_dev(card->host),
+-				"error %d requesting status\n", err);
+-			return err;
+-		}
+-
+-		/* Accumulate any response error bits seen */
+-		if (resp_errs)
+-			*resp_errs |= status;
+-
+-		/*
+-		 * Timeout if the device never becomes ready for data and never
+-		 * leaves the program state.
+-		 */
+-		if (done) {
+-			dev_err(mmc_dev(card->host),
+-				"Card stuck in wrong state! %s status: %#x\n",
+-				 __func__, status);
+-			return -ETIMEDOUT;
+-		}
+-	} while (!mmc_ready_for_data(status));
+-
+-	return err;
+-}
+-
+ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 			       struct mmc_blk_ioc_data *idata)
+ {
+@@ -549,6 +518,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		return mmc_sanitize(card, idata->ic.cmd_timeout_ms);
+ 
+ 	mmc_wait_for_req(card->host, &mrq);
++	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
+ 
+ 	if (cmd.error) {
+ 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
+@@ -598,14 +568,13 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if (idata->ic.postsleep_min_us)
+ 		usleep_range(idata->ic.postsleep_min_us, idata->ic.postsleep_max_us);
+ 
+-	memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp));
+-
+ 	if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) {
+ 		/*
+ 		 * Ensure RPMB/R1B command has completed by polling CMD13
+ 		 * "Send Status".
+ 		 */
+-		err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, NULL);
++		err = mmc_poll_for_busy(card, MMC_BLK_TIMEOUT_MS, false,
++					MMC_BUSY_IO);
+ 	}
+ 
+ 	return err;
+@@ -1636,7 +1605,7 @@ static int mmc_blk_fix_state(struct mmc_card *card, struct request *req)
+ 
+ 	mmc_blk_send_stop(card, timeout);
+ 
+-	err = card_busy_detect(card, timeout, NULL);
++	err = mmc_poll_for_busy(card, timeout, false, MMC_BUSY_IO);
+ 
+ 	mmc_retune_release(card->host);
+ 
+@@ -1851,28 +1820,48 @@ static inline bool mmc_blk_rq_error(struct mmc_blk_request *brq)
+ 	       brq->data.error || brq->cmd.resp[0] & CMD_ERRORS;
+ }
+ 
++static int mmc_blk_busy_cb(void *cb_data, bool *busy)
++{
++	struct mmc_blk_busy_data *data = cb_data;
++	u32 status = 0;
++	int err;
++
++	err = mmc_send_status(data->card, &status);
++	if (err)
++		return err;
++
++	/* Accumulate response error bits. */
++	data->status |= status;
++
++	*busy = !mmc_ready_for_data(status);
++	return 0;
++}
++
+ static int mmc_blk_card_busy(struct mmc_card *card, struct request *req)
+ {
+ 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
+-	u32 status = 0;
++	struct mmc_blk_busy_data cb_data;
+ 	int err;
+ 
+ 	if (mmc_host_is_spi(card->host) || rq_data_dir(req) == READ)
+ 		return 0;
+ 
+-	err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, &status);
++	cb_data.card = card;
++	cb_data.status = 0;
++	err = __mmc_poll_for_busy(card, MMC_BLK_TIMEOUT_MS, &mmc_blk_busy_cb,
++				  &cb_data);
+ 
+ 	/*
+ 	 * Do not assume data transferred correctly if there are any error bits
+ 	 * set.
+ 	 */
+-	if (status & mmc_blk_stop_err_bits(&mqrq->brq)) {
++	if (cb_data.status & mmc_blk_stop_err_bits(&mqrq->brq)) {
+ 		mqrq->brq.data.bytes_xfered = 0;
+ 		err = err ? err : -EIO;
+ 	}
+ 
+ 	/* Copy the exception bit so it will be seen later on */
+-	if (mmc_card_mmc(card) && status & R1_EXCEPTION_EVENT)
++	if (mmc_card_mmc(card) && cb_data.status & R1_EXCEPTION_EVENT)
+ 		mqrq->brq.cmd.resp[0] |= R1_EXCEPTION_EVENT;
+ 
+ 	return err;
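The block driver's private card_busy_detect() is replaced by the core's
__mmc_poll_for_busy() plus a callback that both accumulates status bits and
reports whether the card is still busy. A minimal plain-C sketch of that
callback-polling shape, assuming a stand-in status source and millisecond
sleeps rather than the real MMC plumbing:

#include <stdio.h>
#include <time.h>

struct busy_data { unsigned status; int polls; };

/* Stand-in for reading card status over the bus. */
static unsigned read_status(struct busy_data *d)
{
	return ++d->polls < 3 ? 0x100 /* busy bit */ : 0;
}

static int busy_cb(void *cb_data, int *busy)
{
	struct busy_data *d = cb_data;
	unsigned status = read_status(d);

	d->status |= status;		/* accumulate error/status bits */
	*busy = (status & 0x100) != 0;	/* still in programming state? */
	return 0;
}

static int poll_for_busy(int timeout_ms, int (*cb)(void *, int *), void *data)
{
	struct timespec ts = { 0, 1000000 };	/* 1 ms */
	int busy = 1;

	while (timeout_ms-- > 0) {
		int err = cb(data, &busy);
		if (err)
			return err;
		if (!busy)
			return 0;
		nanosleep(&ts, NULL);
	}
	return -1;	/* timed out */
}

int main(void)
{
	struct busy_data d = { 0, 0 };

	if (poll_for_busy(100, busy_cb, &d) == 0)
		printf("ready after %d polls, status bits 0x%x\n",
		       d.polls, d.status);
	return 0;
}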
+diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
+index 973756ed4016f..90d213a2203f4 100644
+--- a/drivers/mmc/core/mmc_ops.c
++++ b/drivers/mmc/core/mmc_ops.c
+@@ -435,7 +435,7 @@ static int mmc_busy_cb(void *cb_data, bool *busy)
+ 	u32 status = 0;
+ 	int err;
+ 
+-	if (host->ops->card_busy) {
++	if (data->busy_cmd != MMC_BUSY_IO && host->ops->card_busy) {
+ 		*busy = host->ops->card_busy(host);
+ 		return 0;
+ 	}
+@@ -457,6 +457,7 @@ static int mmc_busy_cb(void *cb_data, bool *busy)
+ 		break;
+ 	case MMC_BUSY_HPI:
+ 	case MMC_BUSY_EXTR_SINGLE:
++	case MMC_BUSY_IO:
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+@@ -509,6 +510,7 @@ int __mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(__mmc_poll_for_busy);
+ 
+ int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+ 		      bool retry_crc_err, enum mmc_busy_cmd busy_cmd)
+@@ -521,6 +523,7 @@ int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+ 
+ 	return __mmc_poll_for_busy(card, timeout_ms, &mmc_busy_cb, &cb_data);
+ }
++EXPORT_SYMBOL_GPL(mmc_poll_for_busy);
+ 
+ bool mmc_prepare_busy_cmd(struct mmc_host *host, struct mmc_command *cmd,
+ 			  unsigned int timeout_ms)
+diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h
+index 41ab4f573a310..ae25ffc2e8704 100644
+--- a/drivers/mmc/core/mmc_ops.h
++++ b/drivers/mmc/core/mmc_ops.h
+@@ -15,6 +15,7 @@ enum mmc_busy_cmd {
+ 	MMC_BUSY_ERASE,
+ 	MMC_BUSY_HPI,
+ 	MMC_BUSY_EXTR_SINGLE,
++	MMC_BUSY_IO,
+ };
+ 
+ struct mmc_host;
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index 4ca9374157348..58cfaffa3c2d8 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -542,9 +542,22 @@ static int sd_write_long_data(struct realtek_pci_sdmmc *host,
+ 	return 0;
+ }
+ 
++static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
++}
++
++static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++}
++
+ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ {
+ 	struct mmc_data *data = mrq->data;
++	int err;
+ 
+ 	if (host->sg_count < 0) {
+ 		data->error = host->sg_count;
+@@ -553,22 +566,19 @@ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ 		return data->error;
+ 	}
+ 
+-	if (data->flags & MMC_DATA_READ)
+-		return sd_read_long_data(host, mrq);
++	if (data->flags & MMC_DATA_READ) {
++		if (host->initial_mode)
++			sd_disable_initial_mode(host);
+ 
+-	return sd_write_long_data(host, mrq);
+-}
++		err = sd_read_long_data(host, mrq);
+ 
+-static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
+-}
++		if (host->initial_mode)
++			sd_enable_initial_mode(host);
+ 
+-static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++		return err;
++	}
++
++	return sd_write_long_data(host, mrq);
+ }
+ 
+ static void sd_normal_rw(struct realtek_pci_sdmmc *host,
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 0e7c07ed96904..b6902447d7797 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -159,6 +159,12 @@ struct sdhci_arasan_data {
+ /* Controller immediately reports SDHCI_CLOCK_INT_STABLE after enabling the
+  * internal clock even when the clock isn't stable */
+ #define SDHCI_ARASAN_QUIRK_CLOCK_UNSTABLE BIT(1)
++/*
++ * Some of the Arasan variations might not meet the timing
++ * requirements at 25MHz for Default Speed mode; those controllers
++ * work at 19MHz instead.
++ */
++#define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2)
+ };
+ 
+ struct sdhci_arasan_of_data {
+@@ -267,7 +273,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 			 * through low speeds without power cycling.
+ 			 */
+ 			sdhci_set_clock(host, host->max_clk);
+-			phy_power_on(sdhci_arasan->phy);
++			if (phy_power_on(sdhci_arasan->phy)) {
++				pr_err("%s: Cannot power on phy.\n",
++				       mmc_hostname(host->mmc));
++				return;
++			}
++
+ 			sdhci_arasan->is_phy_on = true;
+ 
+ 			/*
+@@ -290,6 +301,16 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		sdhci_arasan->is_phy_on = false;
+ 	}
+ 
++	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN) {
++		/*
++		 * Some of the Arasan variations might not meet the timing
++		 * requirements at 25MHz for Default Speed mode;
++		 * those controllers work at 19MHz instead.
++		 */
++		if (clock == DEFAULT_SPEED_MAX_DTR)
++			clock = (DEFAULT_SPEED_MAX_DTR * 19) / 25;
++	}
++
+ 	/* Set the Input and Output Clock Phase Delays */
+ 	if (clk_data->set_clk_delays)
+ 		clk_data->set_clk_delays(host);
+@@ -307,7 +328,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		msleep(20);
+ 
+ 	if (ctrl_phy) {
+-		phy_power_on(sdhci_arasan->phy);
++		if (phy_power_on(sdhci_arasan->phy)) {
++			pr_err("%s: Cannot power on phy.\n",
++			       mmc_hostname(host->mmc));
++			return;
++		}
++
+ 		sdhci_arasan->is_phy_on = true;
+ 	}
+ }
+@@ -463,7 +489,9 @@ static int sdhci_arasan_suspend(struct device *dev)
+ 		ret = phy_power_off(sdhci_arasan->phy);
+ 		if (ret) {
+ 			dev_err(dev, "Cannot power off phy.\n");
+-			sdhci_resume_host(host);
++			if (sdhci_resume_host(host))
++				dev_err(dev, "Cannot resume host.\n");
++
+ 			return ret;
+ 		}
+ 		sdhci_arasan->is_phy_on = false;
+@@ -1608,6 +1636,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 	if (of_device_is_compatible(np, "xlnx,zynqmp-8.9a")) {
+ 		host->mmc_host_ops.execute_tuning =
+ 			arasan_zynqmp_execute_tuning;
++
++		sdhci_arasan->quirks |= SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN;
+ 	}
+ 
+ 	arasan_dt_parse_clk_phases(dev, &sdhci_arasan->clk_data);
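The new quirk derates the 25MHz Default Speed clock by the exact ratio 19/25
in integer math, multiplying before dividing so nothing is lost to
truncation: (25000000 * 19) / 25 = 19000000. A tiny check, assuming
DEFAULT_SPEED_MAX_DTR is 25000000 as in the driver:

#include <stdio.h>

#define DEFAULT_SPEED_MAX_DTR 25000000	/* 25 MHz */

int main(void)
{
	unsigned int clock = DEFAULT_SPEED_MAX_DTR;

	/* Multiply before dividing so the integer division is exact. */
	if (clock == DEFAULT_SPEED_MAX_DTR)
		clock = (DEFAULT_SPEED_MAX_DTR * 19) / 25;

	printf("%u Hz\n", clock);	/* 19000000 */
	return 0;
}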
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index 8b49fd56cf964..29e8a546dcd60 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -631,19 +631,26 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 	ebu_host->clk_rate = clk_get_rate(ebu_host->clk);
+ 
+ 	ebu_host->dma_tx = dma_request_chan(dev, "tx");
+-	if (IS_ERR(ebu_host->dma_tx))
+-		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
+-				     "failed to request DMA tx chan!.\n");
++	if (IS_ERR(ebu_host->dma_tx)) {
++		ret = dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
++				    "failed to request DMA tx chan!.\n");
++		goto err_disable_unprepare_clk;
++	}
+ 
+ 	ebu_host->dma_rx = dma_request_chan(dev, "rx");
+-	if (IS_ERR(ebu_host->dma_rx))
+-		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
+-				     "failed to request DMA rx chan!.\n");
++	if (IS_ERR(ebu_host->dma_rx)) {
++		ret = dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
++				    "failed to request DMA rx chan!.\n");
++		ebu_host->dma_rx = NULL;
++		goto err_cleanup_dma;
++	}
+ 
+ 	resname = devm_kasprintf(dev, GFP_KERNEL, "addr_sel%d", cs);
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+-	if (!res)
+-		return -EINVAL;
++	if (!res) {
++		ret = -EINVAL;
++		goto err_cleanup_dma;
++	}
+ 	ebu_host->cs[cs].addr_sel = res->start;
+ 	writel(ebu_host->cs[cs].addr_sel | EBU_ADDR_MASK(5) | EBU_ADDR_SEL_REGEN,
+ 	       ebu_host->ebu + EBU_ADDR_SEL(cs));
+@@ -653,7 +660,8 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 	mtd = nand_to_mtd(&ebu_host->chip);
+ 	if (!mtd->name) {
+ 		dev_err(ebu_host->dev, "NAND label property is mandatory\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_cleanup_dma;
+ 	}
+ 
+ 	mtd->dev.parent = dev;
+@@ -681,6 +689,7 @@ err_clean_nand:
+ 	nand_cleanup(&ebu_host->chip);
+ err_cleanup_dma:
+ 	ebu_dma_cleanup(ebu_host);
++err_disable_unprepare_clk:
+ 	clk_disable_unprepare(ebu_host->clk);
+ 
+ 	return ret;
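The probe fix above converts early returns into gotos so that resources
acquired before the failure are released in reverse order, ending at the new
err_disable_unprepare_clk label. A minimal sketch of that unwind idiom, with
malloc/free standing in for clock and DMA-channel acquisition:

#include <stdio.h>
#include <stdlib.h>

static int probe(int fail_dma)
{
	char *clk, *dma;
	int ret;

	clk = malloc(16);		/* stand-in for clk_prepare_enable() */
	if (!clk)
		return -1;

	dma = fail_dma ? NULL : malloc(16);	/* stand-in for dma_request_chan() */
	if (!dma) {
		ret = -1;
		goto err_disable_clk;	/* unwind only what actually succeeded */
	}

	free(dma);			/* normal teardown, kept simple here */
	free(clk);
	return 0;

err_disable_clk:
	free(clk);
	return ret;
}

int main(void)
{
	printf("ok path: %d, failing path: %d\n", probe(0), probe(1));
	return 0;
}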
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 31730efa75382..8aef6005bfee1 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2252,7 +2252,6 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	/* recompute stats just before removing the slave */
+ 	bond_get_stats(bond->dev, &bond->bond_stats);
+ 
+-	bond_upper_dev_unlink(bond, slave);
+ 	/* unregister rx_handler early so bond_handle_frame wouldn't be called
+ 	 * for this slave anymore.
+ 	 */
+@@ -2261,6 +2260,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		bond_3ad_unbind_slave(slave);
+ 
++	bond_upper_dev_unlink(bond, slave);
++
+ 	if (bond_mode_can_use_xmit_hash(bond))
+ 		bond_update_slave_arr(bond, slave);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index e78026ef6d8cc..64d6dfa831220 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -843,7 +843,8 @@ static int gswip_setup(struct dsa_switch *ds)
+ 
+ 	gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN,
+ 			  GSWIP_MAC_CTRL_2p(cpu_port));
+-	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN);
++	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN,
++		       GSWIP_MAC_FLEN);
+ 	gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD,
+ 			  GSWIP_BM_QUEUE_GCTRL);
+ 
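The GSWIP change grows the maximum accepted frame length by ETH_FCS_LEN so
that maximum-size VLAN frames are no longer cut short. With the usual
mainline header values (VLAN_ETH_FRAME_LEN = 1518, ETH_FCS_LEN = 4) and the
existing 8 bytes of special-tag overhead, the new limit works out to 1530
bytes; GSWIP_TAG_LEN below is an invented name for that literal 8:

#include <stdio.h>

#define VLAN_ETH_FRAME_LEN 1518	/* ETH_FRAME_LEN + VLAN_HLEN, per if_vlan.h */
#define ETH_FCS_LEN 4		/* frame check sequence, per if_ether.h */
#define GSWIP_TAG_LEN 8		/* CPU-port special tag (assumed name) */

int main(void)
{
	printf("MAC_FLEN = %d\n",
	       VLAN_ETH_FRAME_LEN + GSWIP_TAG_LEN + ETH_FCS_LEN); /* 1530 */
	return 0;
}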
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 98cc0133c3437..5ad5419e8be36 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -3231,12 +3231,6 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
+ 			       &ethsw->fq[i].napi, dpaa2_switch_poll,
+ 			       NAPI_POLL_WEIGHT);
+ 
+-	err = dpsw_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
+-	if (err) {
+-		dev_err(ethsw->dev, "dpsw_enable err %d\n", err);
+-		goto err_free_netdev;
+-	}
+-
+ 	/* Setup IRQs */
+ 	err = dpaa2_switch_setup_irqs(sw_dev);
+ 	if (err)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index c0a478ae95834..0dbed35645eda 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -10,7 +10,14 @@
+ 
+ static u16 hclge_errno_to_resp(int errno)
+ {
+-	return abs(errno);
++	int resp = abs(errno);
++
++	/* The status for a PF-to-VF msg cmd is a u16, constrained by HW,
++	 * so we need to keep the same type here.
++	 * The input errno is a standard error code, so it is safe to
++	 * store abs(errno) in a u16.
++	 */
++	return (u16)resp;
+ }
+ 
+ /* hclge_gen_resp_to_vf: used to generate a synchronous response to VF when PF
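hclge_errno_to_resp() now spells out the narrowing: the mailbox status field
is a u16 by hardware contract, and standard errno values are small positive
integers after abs(), so the cast cannot lose information. A short
stand-alone illustration:

#include <stdio.h>
#include <stdlib.h>

static unsigned short errno_to_resp(int err)
{
	/* Standard errno values are well below 65535, so abs() fits a u16. */
	return (unsigned short)abs(err);
}

int main(void)
{
	printf("%u %u\n", errno_to_resp(-22 /* -EINVAL */), errno_to_resp(0));
	return 0;
}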
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 90793b36126e6..68c80f04113c8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -186,12 +186,6 @@ enum iavf_state_t {
+ 	__IAVF_RUNNING,		/* opened, working */
+ };
+ 
+-enum iavf_critical_section_t {
+-	__IAVF_IN_CRITICAL_TASK,	/* cannot be interrupted */
+-	__IAVF_IN_CLIENT_TASK,
+-	__IAVF_IN_REMOVE_TASK,	/* device being removed */
+-};
+-
+ #define IAVF_CLOUD_FIELD_OMAC		0x01
+ #define IAVF_CLOUD_FIELD_IMAC		0x02
+ #define IAVF_CLOUD_FIELD_IVLAN	0x04
+@@ -236,6 +230,9 @@ struct iavf_adapter {
+ 	struct iavf_q_vector *q_vectors;
+ 	struct list_head vlan_filter_list;
+ 	struct list_head mac_filter_list;
++	struct mutex crit_lock;
++	struct mutex client_lock;
++	struct mutex remove_lock;
+ 	/* Lock to protect accesses to MAC and VLAN lists */
+ 	spinlock_t mac_vlan_list_lock;
+ 	char misc_vector_name[IFNAMSIZ + 9];
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index af43fbd8cb75e..edbeb27213f83 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -1352,8 +1352,7 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (!fltr)
+ 		return -ENOMEM;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0) {
+ 			kfree(fltr);
+ 			return -EINVAL;
+@@ -1378,7 +1377,7 @@ ret:
+ 	if (err && fltr)
+ 		kfree(fltr);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -1563,8 +1562,7 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 		return -EINVAL;
+ 	}
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0) {
+ 			kfree(rss_new);
+ 			return -EINVAL;
+@@ -1600,7 +1598,7 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 	if (!err)
+ 		mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	if (!rss_new_add)
+ 		kfree(rss_new);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 606a01ce40739..23762a7ef740b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -131,6 +131,27 @@ enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw,
+ 	return 0;
+ }
+ 
++/**
++ * iavf_lock_timeout - try to lock mutex but give up after timeout
++ * @lock: mutex that should be locked
++ * @msecs: timeout in msecs
++ *
++ * Returns 0 on success, negative on failure
++ **/
++static int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
++{
++	unsigned int wait, delay = 10;
++
++	for (wait = 0; wait < msecs; wait += delay) {
++		if (mutex_trylock(lock))
++			return 0;
++
++		msleep(delay);
++	}
++
++	return -1;
++}
++
+ /**
+  * iavf_schedule_reset - Set the flags and schedule a reset event
+  * @adapter: board private structure
+@@ -1916,7 +1937,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	u32 reg_val;
+ 
+-	if (test_and_set_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section))
++	if (!mutex_trylock(&adapter->crit_lock))
+ 		goto restart_watchdog;
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+@@ -1934,8 +1955,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			adapter->state = __IAVF_STARTUP;
+ 			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ 			queue_delayed_work(iavf_wq, &adapter->init_task, 10);
+-			clear_bit(__IAVF_IN_CRITICAL_TASK,
+-				  &adapter->crit_section);
++			mutex_unlock(&adapter->crit_lock);
+ 			/* Don't reschedule the watchdog, since we've restarted
+ 			 * the init task. When init_task contacts the PF and
+ 			 * gets everything set up again, it'll restart the
+@@ -1945,14 +1965,13 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		clear_bit(__IAVF_IN_CRITICAL_TASK,
+-			  &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(iavf_wq,
+ 				   &adapter->watchdog_task,
+ 				   msecs_to_jiffies(10));
+ 		goto watchdog_done;
+ 	case __IAVF_RESETTING:
+-		clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2);
+ 		return;
+ 	case __IAVF_DOWN:
+@@ -1975,7 +1994,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		break;
+ 	case __IAVF_REMOVE:
+-		clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		return;
+ 	default:
+ 		goto restart_watchdog;
+@@ -1984,7 +2003,6 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		/* check for hw reset */
+ 	reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ 	if (!reg_val) {
+-		adapter->state = __IAVF_RESETTING;
+ 		adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+@@ -1998,7 +2016,7 @@ watchdog_done:
+ 	if (adapter->state == __IAVF_RUNNING ||
+ 	    adapter->state == __IAVF_COMM_FAILED)
+ 		iavf_detect_recover_hung(&adapter->vsi);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ restart_watchdog:
+ 	if (adapter->aq_required)
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task,
+@@ -2062,7 +2080,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ 	memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ 	iavf_shutdown_adminq(&adapter->hw);
+ 	adapter->netdev->flags &= ~IFF_UP;
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+ 	adapter->state = __IAVF_DOWN;
+ 	wake_up(&adapter->down_waitqueue);
+@@ -2095,11 +2113,14 @@ static void iavf_reset_task(struct work_struct *work)
+ 	/* When device is being removed it doesn't make sense to run the reset
+ 	 * task; just return in such a case.
+ 	 */
+-	if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
++	if (mutex_is_locked(&adapter->remove_lock))
+ 		return;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CLIENT_TASK,
+-				&adapter->crit_section))
++	if (iavf_lock_timeout(&adapter->crit_lock, 200)) {
++		schedule_work(&adapter->reset_task);
++		return;
++	}
++	while (!mutex_trylock(&adapter->client_lock))
+ 		usleep_range(500, 1000);
+ 	if (CLIENT_ENABLED(adapter)) {
+ 		adapter->flags &= ~(IAVF_FLAG_CLIENT_NEEDS_OPEN |
+@@ -2151,7 +2172,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+ 			reg_val);
+ 		iavf_disable_vf(adapter);
+-		clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->client_lock);
+ 		return; /* Do not attempt to reinit. It's dead, Jim. */
+ 	}
+ 
+@@ -2278,13 +2299,13 @@ continue_reset:
+ 		adapter->state = __IAVF_DOWN;
+ 		wake_up(&adapter->down_waitqueue);
+ 	}
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return;
+ reset_err:
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
+ 	dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+ 	iavf_close(netdev);
+ }
+@@ -2312,6 +2333,8 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	if (!event.msg_buf)
+ 		goto out;
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 200))
++		goto freedom;
+ 	do {
+ 		ret = iavf_clean_arq_element(hw, &event, &pending);
+ 		v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+@@ -2325,6 +2348,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 		if (pending != 0)
+ 			memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);
+ 	} while (pending);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	if ((adapter->flags &
+ 	     (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
+@@ -2391,7 +2415,7 @@ static void iavf_client_task(struct work_struct *work)
+ 	 * later.
+ 	 */
+ 
+-	if (test_and_set_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section))
++	if (!mutex_trylock(&adapter->client_lock))
+ 		return;
+ 
+ 	if (adapter->flags & IAVF_FLAG_SERVICE_CLIENT_REQUESTED) {
+@@ -2414,7 +2438,7 @@ static void iavf_client_task(struct work_struct *work)
+ 		adapter->flags &= ~IAVF_FLAG_CLIENT_NEEDS_OPEN;
+ 	}
+ out:
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
+ }
+ 
+ /**
+@@ -3017,8 +3041,7 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	if (!filter)
+ 		return -ENOMEM;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0)
+ 			goto err;
+ 		udelay(1);
+@@ -3049,7 +3072,7 @@ err:
+ 	if (err)
+ 		kfree(filter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -3196,8 +3219,7 @@ static int iavf_open(struct net_device *netdev)
+ 		return -EIO;
+ 	}
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	if (adapter->state != __IAVF_DOWN) {
+@@ -3232,7 +3254,7 @@ static int iavf_open(struct net_device *netdev)
+ 
+ 	iavf_irq_enable(adapter, true);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return 0;
+ 
+@@ -3244,7 +3266,7 @@ err_setup_rx:
+ err_setup_tx:
+ 	iavf_free_all_tx_resources(adapter);
+ err_unlock:
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return err;
+ }
+@@ -3268,8 +3290,7 @@ static int iavf_close(struct net_device *netdev)
+ 	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return 0;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+@@ -3280,7 +3301,7 @@ static int iavf_close(struct net_device *netdev)
+ 	adapter->state = __IAVF_DOWN_PENDING;
+ 	iavf_free_traffic_irqs(adapter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	/* We explicitly don't free resources here because the hardware is
+ 	 * still active and can DMA into memory. Resources are cleared in
+@@ -3629,6 +3650,10 @@ static void iavf_init_task(struct work_struct *work)
+ 						    init_task.work);
+ 	struct iavf_hw *hw = &adapter->hw;
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000)) {
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
++		return;
++	}
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		if (iavf_startup(adapter) < 0)
+@@ -3641,14 +3666,14 @@ static void iavf_init_task(struct work_struct *work)
+ 	case __IAVF_INIT_GET_RESOURCES:
+ 		if (iavf_init_get_resources(adapter) < 0)
+ 			goto init_failed;
+-		return;
++		goto out;
+ 	default:
+ 		goto init_failed;
+ 	}
+ 
+ 	queue_delayed_work(iavf_wq, &adapter->init_task,
+ 			   msecs_to_jiffies(30));
+-	return;
++	goto out;
+ init_failed:
+ 	if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -3657,9 +3682,11 @@ init_failed:
+ 		iavf_shutdown_adminq(hw);
+ 		adapter->state = __IAVF_STARTUP;
+ 		queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5);
+-		return;
++		goto out;
+ 	}
+ 	queue_delayed_work(iavf_wq, &adapter->init_task, HZ);
++out:
++	mutex_unlock(&adapter->crit_lock);
+ }
+ 
+ /**
+@@ -3676,9 +3703,12 @@ static void iavf_shutdown(struct pci_dev *pdev)
+ 	if (netif_running(netdev))
+ 		iavf_close(netdev);
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
+ 	/* Prevent the watchdog from running. */
+ 	adapter->state = __IAVF_REMOVE;
+ 	adapter->aq_required = 0;
++	mutex_unlock(&adapter->crit_lock);
+ 
+ #ifdef CONFIG_PM
+ 	pci_save_state(pdev);
+@@ -3772,6 +3802,9 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* set up the locks for the AQ, do this only once in probe
+ 	 * and destroy them only once in remove
+ 	 */
++	mutex_init(&adapter->crit_lock);
++	mutex_init(&adapter->client_lock);
++	mutex_init(&adapter->remove_lock);
+ 	mutex_init(&hw->aq.asq_mutex);
+ 	mutex_init(&hw->aq.arq_mutex);
+ 
+@@ -3823,8 +3856,7 @@ static int __maybe_unused iavf_suspend(struct device *dev_d)
+ 
+ 	netif_device_detach(netdev);
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	if (netif_running(netdev)) {
+@@ -3835,7 +3867,7 @@ static int __maybe_unused iavf_suspend(struct device *dev_d)
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return 0;
+ }
+@@ -3897,7 +3929,7 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int err;
+ 	/* Indicate we are in remove and not to run reset_task */
+-	set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section);
++	mutex_lock(&adapter->remove_lock);
+ 	cancel_delayed_work_sync(&adapter->init_task);
+ 	cancel_work_sync(&adapter->reset_task);
+ 	cancel_delayed_work_sync(&adapter->client_task);
+@@ -3912,10 +3944,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 				 err);
+ 	}
+ 
+-	/* Shut down all the garbage mashers on the detention level */
+-	adapter->state = __IAVF_REMOVE;
+-	adapter->aq_required = 0;
+-	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_request_reset(adapter);
+ 	msleep(50);
+ 	/* If the FW isn't responding, kick it once, but only once. */
+@@ -3923,6 +3951,13 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		iavf_request_reset(adapter);
+ 		msleep(50);
+ 	}
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
++
++	/* Shut down all the garbage mashers on the detention level */
++	adapter->state = __IAVF_REMOVE;
++	adapter->aq_required = 0;
++	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_free_all_tx_resources(adapter);
+ 	iavf_free_all_rx_resources(adapter);
+ 	iavf_misc_irq_disable(adapter);
+@@ -3942,6 +3977,11 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	/* destroy the locks only once, here */
+ 	mutex_destroy(&hw->aq.arq_mutex);
+ 	mutex_destroy(&hw->aq.asq_mutex);
++	mutex_destroy(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
++	mutex_destroy(&adapter->crit_lock);
++	mutex_unlock(&adapter->remove_lock);
++	mutex_destroy(&adapter->remove_lock);
+ 
+ 	iounmap(hw->hw_addr);
+ 	pci_release_regions(pdev);
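The iavf changes above swap the driver's ad-hoc critical-section bitflags
for real mutexes, with iavf_lock_timeout() as a trylock loop that gives up
after a deadline instead of spinning forever. A userspace pthread sketch of
the same helper (the timings mirror the patch, but nothing here is the
kernel API):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Try to take the lock, sleeping 10 ms between attempts; returns 0 on
 * success, -1 after roughly msecs of trying. */
static int lock_timeout(pthread_mutex_t *lock, unsigned int msecs)
{
	unsigned int wait, delay = 10;

	for (wait = 0; wait < msecs; wait += delay) {
		if (pthread_mutex_trylock(lock) == 0)
			return 0;
		usleep(delay * 1000);
	}
	return -1;
}

int main(void)
{
	pthread_mutex_t crit = PTHREAD_MUTEX_INITIALIZER;

	if (lock_timeout(&crit, 200) == 0) {
		puts("critical section entered");
		pthread_mutex_unlock(&crit);
	}
	return 0;
}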
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index ed2d66bc2d6c3..f62982c4d933d 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4817,6 +4817,7 @@ static irqreturn_t igc_msix_ring(int irq, void *data)
+  */
+ static int igc_request_msix(struct igc_adapter *adapter)
+ {
++	unsigned int num_q_vectors = adapter->num_q_vectors;
+ 	int i = 0, err = 0, vector = 0, free_vector = 0;
+ 	struct net_device *netdev = adapter->netdev;
+ 
+@@ -4825,7 +4826,13 @@ static int igc_request_msix(struct igc_adapter *adapter)
+ 	if (err)
+ 		goto err_out;
+ 
+-	for (i = 0; i < adapter->num_q_vectors; i++) {
++	if (num_q_vectors > MAX_Q_VECTORS) {
++		num_q_vectors = MAX_Q_VECTORS;
++		dev_warn(&adapter->pdev->dev,
++			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
++			 adapter->num_q_vectors, MAX_Q_VECTORS);
++	}
++	for (i = 0; i < num_q_vectors; i++) {
+ 		struct igc_q_vector *q_vector = adapter->q_vector[i];
+ 
+ 		vector++;
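The igc fix clamps the number of queue vectors before the request loop so
the q_vector[] array can never be indexed past MAX_Q_VECTORS. The
clamp-and-warn shape, with an illustrative cap:

#include <stdio.h>

#define MAX_Q_VECTORS 8	/* illustrative cap, not the driver's value */

int main(void)
{
	unsigned int requested = 12, num = requested;

	if (num > MAX_Q_VECTORS) {	/* never walk past the array */
		num = MAX_Q_VECTORS;
		fprintf(stderr, "clamping %u queue vectors to %u\n",
			requested, num);
	}
	printf("using %u vectors\n", num);
	return 0;
}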
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index f5ec39de026a5..05f4334700e90 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -717,6 +717,7 @@ struct nix_lf_alloc_rsp {
+ 	u8	cgx_links;  /* No. of CGX links present in HW */
+ 	u8	lbk_links;  /* No. of LBK links present in HW */
+ 	u8	sdp_links;  /* No. of SDP links present in HW */
++	u8	tx_link;    /* Transmit channel link number */
+ };
+ 
+ struct nix_lf_free_req {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index c32195073e8a5..87af164951eae 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -249,9 +249,11 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
+ 	return true;
+ }
+ 
+-static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
++static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf,
++			      struct nix_lf_alloc_rsp *rsp)
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
++	struct rvu_hwinfo *hw = rvu->hw;
+ 	struct mac_ops *mac_ops;
+ 	int pkind, pf, vf, lbkid;
+ 	u8 cgx_id, lmac_id;
+@@ -276,6 +278,8 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
+ 		pfvf->tx_chan_base = pfvf->rx_chan_base;
+ 		pfvf->rx_chan_cnt = 1;
+ 		pfvf->tx_chan_cnt = 1;
++		rsp->tx_link = cgx_id * hw->lmac_per_cgx + lmac_id;
++
+ 		cgx_set_pkind(rvu_cgx_pdata(cgx_id, rvu), lmac_id, pkind);
+ 		rvu_npc_set_pkind(rvu, pkind, pfvf);
+ 
+@@ -309,6 +313,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf)
+ 					rvu_nix_chan_lbk(rvu, lbkid, vf + 1);
+ 		pfvf->rx_chan_cnt = 1;
+ 		pfvf->tx_chan_cnt = 1;
++		rsp->tx_link = hw->cgx_links + lbkid;
+ 		rvu_npc_set_pkind(rvu, NPC_RX_LBK_PKIND, pfvf);
+ 		rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
+ 					      pfvf->rx_chan_base,
+@@ -1258,7 +1263,7 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu,
+ 	rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_PARSE_CFG(nixlf), cfg);
+ 
+ 	intf = is_afvf(pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+-	err = nix_interface_init(rvu, pcifunc, intf, nixlf);
++	err = nix_interface_init(rvu, pcifunc, intf, nixlf, rsp);
+ 	if (err)
+ 		goto free_mem;
+ 
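Rather than having the NIC driver guess its transmit link from channel
bases, the AF now reports it directly: cgx_id * lmac_per_cgx + lmac_id for
CGX interfaces, and cgx_links + lbkid for loopback. Worked numbers under
assumed hardware parameters:

#include <stdio.h>

int main(void)
{
	unsigned cgx_id = 1, lmac_per_cgx = 4, lmac_id = 2;	/* example values */
	unsigned cgx_links = 3, lbkid = 0;

	printf("cgx tx_link = %u\n", cgx_id * lmac_per_cgx + lmac_id); /* 6 */
	printf("lbk tx_link = %u\n", cgx_links + lbkid);               /* 3 */
	return 0;
}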
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 94dfd64f526fa..124465b3987c4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -586,25 +586,6 @@ void otx2_get_mac_from_af(struct net_device *netdev)
+ }
+ EXPORT_SYMBOL(otx2_get_mac_from_af);
+ 
+-static int otx2_get_link(struct otx2_nic *pfvf)
+-{
+-	int link = 0;
+-	u16 map;
+-
+-	/* cgx lmac link */
+-	if (pfvf->hw.tx_chan_base >= CGX_CHAN_BASE) {
+-		map = pfvf->hw.tx_chan_base & 0x7FF;
+-		link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
+-	}
+-	/* LBK channel */
+-	if (pfvf->hw.tx_chan_base < SDP_CHAN_BASE) {
+-		map = pfvf->hw.tx_chan_base & 0x7FF;
+-		link = pfvf->hw.cgx_links | ((map >> 8) & 0xF);
+-	}
+-
+-	return link;
+-}
+-
+ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ {
+ 	struct otx2_hw *hw = &pfvf->hw;
+@@ -660,8 +641,7 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ 		req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | DFLT_RR_QTM;
+ 
+ 		req->num_regs++;
+-		req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+-							otx2_get_link(pfvf));
++		req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
+ 		/* Enable this queue and backpressure */
+ 		req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
+ 
+@@ -1204,7 +1184,22 @@ static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+ 	/* Enable backpressure for RQ aura */
+ 	if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
+ 		aq->aura.bp_ena = 0;
++		/* If NIX1 LF is attached then specify NIX1_RX.
++		 *
++		 * NPA_AURA_S[BP_ENA] below is set according to the
++		 * NPA_BPINTF_E enumeration, defined as
++		 * 0x0 + a*0x1, where 'a' is 0 for NIX0_RX and 1 for NIX1_RX, so
++		 * NIX0_RX is 0x0 + 0*0x1 = 0
++		 * NIX1_RX is 0x0 + 1*0x1 = 1
++		 * The HRM, however, states:
++		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
++		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
++		 * enumerated by NPA_BPINTF_E."
++		 */
++		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
++			aq->aura.bp_ena = 1;
+ 		aq->aura.nix0_bpid = pfvf->bpid[0];
++
+ 		/* Set backpressure level for RQ's Aura */
+ 		aq->aura.bp = RQ_BP_LVL_AURA;
+ 	}
+@@ -1591,6 +1586,7 @@ void mbox_handler_nix_lf_alloc(struct otx2_nic *pfvf,
+ 	pfvf->hw.lso_tsov6_idx = rsp->lso_tsov6_idx;
+ 	pfvf->hw.cgx_links = rsp->cgx_links;
+ 	pfvf->hw.lbk_links = rsp->lbk_links;
++	pfvf->hw.tx_link = rsp->tx_link;
+ }
+ EXPORT_SYMBOL(mbox_handler_nix_lf_alloc);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 8c602d27108a7..11686c5cf45bd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -215,6 +215,7 @@ struct otx2_hw {
+ 	u64			cgx_fec_uncorr_blks;
+ 	u8			cgx_links;  /* No. of CGX links present in HW */
+ 	u8			lbk_links;  /* No. of LBK links present in HW */
++	u8			tx_link;    /* Transmit channel link number */
+ #define HW_TSO			0
+ #define CN10K_MBOX		1
+ #define CN10K_LMTST		2
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 9d79c5ec31e9f..db5dfff585c99 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -877,7 +877,7 @@ static void cb_timeout_handler(struct work_struct *work)
+ 	ent->ret = -ETIMEDOUT;
+ 	mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n",
+ 		       ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 
+ out:
+ 	cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */
+@@ -994,7 +994,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		MLX5_SET(mbox_out, ent->out, status, status);
+ 		MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
+ 
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 		return;
+ 	}
+ 
+@@ -1008,7 +1008,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		poll_timeout(ent);
+ 		/* make sure we read the descriptor after ownership is SW */
+ 		rmb();
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, (ent->ret == -ETIMEDOUT));
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, (ent->ret == -ETIMEDOUT));
+ 	}
+ }
+ 
+@@ -1068,7 +1068,7 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev,
+ 		       mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ 
+ 	ent->ret = -ETIMEDOUT;
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ }
+ 
+ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
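Every 1UL << ent->idx above becomes 1ULL << ent->idx because unsigned long
is 32 bits on 32-bit kernels, so an entry index of 32 or more would shift
out of range (undefined behavior) and corrupt the completion mask; 1ULL
guarantees a 64-bit shift. A small fixed-width demonstration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int idx = 40;

	/* On a 32-bit build, 1UL is a 32-bit value, so "1UL << 40" is
	 * undefined behavior and yields a wrong mask in practice.
	 * 1ULL is at least 64 bits everywhere. */
	uint64_t mask = 1ULL << idx;

	printf("bit %d mask = 0x%llx\n", idx, (unsigned long long)mask);
	return 0;
}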
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+index 43356fad53deb..ffdfb5a94b14b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+@@ -846,9 +846,9 @@ again:
+ 			new_htbl = dr_rule_rehash(rule, nic_rule, cur_htbl,
+ 						  ste_location, send_ste_list);
+ 			if (!new_htbl) {
+-				mlx5dr_htbl_put(cur_htbl);
+ 				mlx5dr_err(dmn, "Failed creating rehash table, htbl-log_size: %d\n",
+ 					   cur_htbl->chunk_size);
++				mlx5dr_htbl_put(cur_htbl);
+ 			} else {
+ 				cur_htbl = new_htbl;
+ 			}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 9df0e73d1c358..69b49deb66b22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -620,6 +620,7 @@ static int dr_cmd_modify_qp_rtr2rts(struct mlx5_core_dev *mdev,
+ 
+ 	MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt);
+ 	MLX5_SET(qpc, qpc, rnr_retry, attr->rnr_retry);
++	MLX5_SET(qpc, qpc, primary_address_path.ack_timeout, 0x8); /* ~1ms */
+ 
+ 	MLX5_SET(rtr2rts_qp_in, in, opcode, MLX5_CMD_OP_RTR2RTS_QP);
+ 	MLX5_SET(rtr2rts_qp_in, in, qpn, dr_qp->qpn);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index 5dfa4799c34f2..ed2ade2a4f043 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1697,7 +1697,7 @@ nfp_net_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
+ 		case NFP_NET_META_RESYNC_INFO:
+ 			if (nfp_net_tls_rx_resync_req(netdev, data, pkt,
+ 						      pkt_len))
+-				return NULL;
++				return false;
+ 			data += sizeof(struct nfp_net_tls_resync_req);
+ 			break;
+ 		default:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 28dd0ed85a824..f7dc8458cde86 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -289,10 +289,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 		val &= ~NSS_COMMON_GMAC_CTL_PHY_IFACE_SEL;
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val);
+ 
+@@ -309,10 +306,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 			NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id);
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val);
+ 
+@@ -329,8 +323,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 				NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
+ 		break;
+ 	default:
+-		/* We don't get here; the switch above will have errored out */
+-		unreachable();
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);
+ 
+@@ -361,6 +354,11 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_unsupported_phy:
++	dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
++		phy_modes(gmac->phy_mode));
++	err = -EINVAL;
++
+ err_remove_config_dt:
+ 	stmmac_remove_config_dt(pdev, plat_dat);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index fa90bcdf4e455..8a150cc462dcf 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5342,7 +5342,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 	struct stmmac_channel *ch =
+ 		container_of(napi, struct stmmac_channel, rxtx_napi);
+ 	struct stmmac_priv *priv = ch->priv_data;
+-	int rx_done, tx_done;
++	int rx_done, tx_done, rxtx_done;
+ 	u32 chan = ch->index;
+ 
+ 	priv->xstats.napi_poll++;
+@@ -5352,14 +5352,16 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 
+ 	rx_done = stmmac_rx_zc(priv, budget, chan);
+ 
++	rxtx_done = max(tx_done, rx_done);
++
+ 	/* If either TX or RX work is not complete, return budget
+ 	 * and keep polling
+ 	 */
+-	if (tx_done >= budget || rx_done >= budget)
++	if (rxtx_done >= budget)
+ 		return budget;
+ 
+ 	/* all work done, exit the polling mode */
+-	if (napi_complete_done(napi, rx_done)) {
++	if (napi_complete_done(napi, rxtx_done)) {
+ 		unsigned long flags;
+ 
+ 		spin_lock_irqsave(&ch->lock, flags);
+@@ -5370,7 +5372,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 		spin_unlock_irqrestore(&ch->lock, flags);
+ 	}
+ 
+-	return min(rx_done, budget - 1);
++	return min(rxtx_done, budget - 1);
+ }
+ 
+ /**
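The rxtx NAPI poll must report a single completion figure for the combined
rx/tx work: taking the max keeps the instance scheduled while either
direction exhausted its budget, and the final min(rxtx_done, budget - 1)
respects the NAPI rule that a completed poll must return strictly less than
the budget. A plain-C sketch of that accounting:

#include <stdio.h>

static int min_i(int a, int b) { return a < b ? a : b; }
static int max_i(int a, int b) { return a > b ? a : b; }

/* Combined rx/tx poll: stay scheduled if either side used the full budget. */
static int poll_rxtx(int tx_done, int rx_done, int budget)
{
	int rxtx_done = max_i(tx_done, rx_done);

	if (rxtx_done >= budget)
		return budget;		/* more work: NAPI keeps polling */

	/* all done: IRQs would be re-enabled here; report < budget */
	return min_i(rxtx_done, budget - 1);
}

int main(void)
{
	printf("%d %d %d\n",
	       poll_rxtx(64, 10, 64),	/* 64: tx consumed the budget */
	       poll_rxtx(5, 10, 64),	/* 10 */
	       poll_rxtx(0, 0, 64));	/* 0 */
	return 0;
}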
+diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
+index 811815f8cd3bb..f974e70a82e8b 100644
+--- a/drivers/net/ethernet/wiznet/w5100.c
++++ b/drivers/net/ethernet/wiznet/w5100.c
+@@ -1047,6 +1047,8 @@ static int w5100_mmio_probe(struct platform_device *pdev)
+ 		mac_addr = data->mac_addr;
+ 
+ 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!mem)
++		return -EINVAL;
+ 	if (resource_size(mem) < W5100_BUS_DIRECT_SIZE)
+ 		ops = &w5100_mmio_indirect_ops;
+ 	else
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index af44ca41189e3..bda8677eae88d 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -159,35 +159,45 @@ static void ipa_cmd_validate_build(void)
+ 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+ #undef TABLE_COUNT_MAX
+ #undef TABLE_SIZE
+-}
+ 
+-#ifdef IPA_VALIDATE
++	/* Hashed and non-hashed fields are assumed to be the same size */
++	BUILD_BUG_ON(field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK) !=
++		     field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
++	BUILD_BUG_ON(field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK) !=
++		     field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK));
++}
+ 
+ /* Validate a memory region holding a table */
+-bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+-			 bool route, bool ipv6, bool hashed)
++bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, bool route)
+ {
++	u32 offset_max = field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
++	u32 size_max = field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
++	const char *table = route ? "route" : "filter";
+ 	struct device *dev = &ipa->pdev->dev;
+-	u32 offset_max;
+ 
+-	offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK)
+-			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
++	/* Size must fit in the immediate command field that holds it */
++	if (mem->size > size_max) {
++		dev_err(dev, "%s table region size too large\n", table);
++		dev_err(dev, "    (0x%04x > 0x%04x)\n",
++			mem->size, size_max);
++
++		return false;
++	}
++
++	/* Offset must fit in the immediate command field that holds it */
+ 	if (mem->offset > offset_max ||
+ 	    ipa->mem_offset > offset_max - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region offset too large\n",
+-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			route ? "route" : "filter");
++		dev_err(dev, "%s table region offset too large\n", table);
+ 		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+ 			ipa->mem_offset, mem->offset, offset_max);
+ 
+ 		return false;
+ 	}
+ 
++	/* Entire memory range must fit within IPA-local memory */
+ 	if (mem->offset > ipa->mem_size ||
+ 	    mem->size > ipa->mem_size - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region out of range\n",
+-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			route ? "route" : "filter");
++		dev_err(dev, "%s table region out of range\n", table);
+ 		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+ 			mem->offset, mem->size, ipa->mem_size);
+ 
+@@ -197,6 +207,8 @@ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+ 	return true;
+ }
+ 
++#ifdef IPA_VALIDATE
++
+ /* Validate the memory region that holds headers */
+ static bool ipa_cmd_header_valid(struct ipa *ipa)
+ {
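The two new BUILD_BUG_ON()s encode the assumption that lets
ipa_cmd_table_valid() drop its hashed/non-hashed distinction: both field
layouts must have identical maxima. The same compile-time guard in standard
C11, with made-up field widths:

#include <stdio.h>

#define HASH_SIZE_MAX  0x7ff	/* illustrative field maxima */
#define NHASH_SIZE_MAX 0x7ff

/* Compile-time equivalent of the kernel's BUILD_BUG_ON(): the shared
 * validation path is only sound if both fields have the same width. */
_Static_assert(HASH_SIZE_MAX == NHASH_SIZE_MAX,
	       "hashed and non-hashed size fields must match");

int main(void)
{
	printf("size limit: 0x%x\n", NHASH_SIZE_MAX);
	return 0;
}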
+diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
+index b99262281f41c..ea723419c826b 100644
+--- a/drivers/net/ipa/ipa_cmd.h
++++ b/drivers/net/ipa/ipa_cmd.h
+@@ -57,20 +57,18 @@ struct ipa_cmd_info {
+ 	enum dma_data_direction direction;
+ };
+ 
+-#ifdef IPA_VALIDATE
+-
+ /**
+  * ipa_cmd_table_valid() - Validate a memory region holding a table
+  * @ipa:	- IPA pointer
+  * @mem:	- IPA memory region descriptor
+  * @route:	- Whether the region holds a route or filter table
+- * @ipv6:	- Whether the table is for IPv6 or IPv4
+- * @hashed:	- Whether the table is hashed or non-hashed
+  *
+  * Return:	true if region is valid, false otherwise
+  */
+ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+-			    bool route, bool ipv6, bool hashed);
++			    bool route);
++
++#ifdef IPA_VALIDATE
+ 
+ /**
+  * ipa_cmd_data_valid() - Validate command-related configuration
+@@ -82,13 +80,6 @@ bool ipa_cmd_data_valid(struct ipa *ipa);
+ 
+ #else /* !IPA_VALIDATE */
+ 
+-static inline bool ipa_cmd_table_valid(struct ipa *ipa,
+-				       const struct ipa_mem *mem, bool route,
+-				       bool ipv6, bool hashed)
+-{
+-	return true;
+-}
+-
+ static inline bool ipa_cmd_data_valid(struct ipa *ipa)
+ {
+ 	return true;
+diff --git a/drivers/net/ipa/ipa_data-v4.11.c b/drivers/net/ipa/ipa_data-v4.11.c
+index 9353efbd504fb..598b410cd7ab4 100644
+--- a/drivers/net/ipa/ipa_data-v4.11.c
++++ b/drivers/net/ipa/ipa_data-v4.11.c
+@@ -368,18 +368,13 @@ static const struct ipa_mem_data ipa_mem_data = {
+ static const struct ipa_interconnect_data ipa_interconnect_data[] = {
+ 	{
+ 		.name			= "memory",
+-		.peak_bandwidth		= 465000,	/* 465 MBps */
+-		.average_bandwidth	= 80000,	/* 80 MBps */
+-	},
+-	/* Average rate is unused for the next two interconnects */
+-	{
+-		.name			= "imem",
+-		.peak_bandwidth		= 68570,	/* 68.57 MBps */
+-		.average_bandwidth	= 80000,	/* 80 MBps (unused?) */
++		.peak_bandwidth		= 600000,	/* 600 MBps */
++		.average_bandwidth	= 150000,	/* 150 MBps */
+ 	},
++	/* Average rate is unused for the next interconnect */
+ 	{
+ 		.name			= "config",
+-		.peak_bandwidth		= 30000,	/* 30 MBps */
++		.peak_bandwidth		= 74000,	/* 74 MBps */
+ 		.average_bandwidth	= 0,		/* unused */
+ 	},
+ };
+diff --git a/drivers/net/ipa/ipa_data-v4.9.c b/drivers/net/ipa/ipa_data-v4.9.c
+index 798d43e1eb133..4cce5dce92158 100644
+--- a/drivers/net/ipa/ipa_data-v4.9.c
++++ b/drivers/net/ipa/ipa_data-v4.9.c
+@@ -416,18 +416,13 @@ static const struct ipa_mem_data ipa_mem_data = {
+ /* Interconnect rates are in 1000 byte/second units */
+ static const struct ipa_interconnect_data ipa_interconnect_data[] = {
+ 	{
+-		.name			= "ipa_to_llcc",
++		.name			= "memory",
+ 		.peak_bandwidth		= 600000,	/* 600 MBps */
+ 		.average_bandwidth	= 150000,	/* 150 MBps */
+ 	},
+-	{
+-		.name			= "llcc_to_ebi1",
+-		.peak_bandwidth		= 1804000,	/* 1.804 GBps */
+-		.average_bandwidth	= 150000,	/* 150 MBps */
+-	},
+ 	/* Average rate is unused for the next interconnect */
+ 	{
+-		.name			= "appss_to_ipa",
++		.name			= "config",
+ 		.peak_bandwidth		= 74000,	/* 74 MBps */
+ 		.average_bandwidth	= 0,		/* unused */
+ 	},
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index c617a9156f26d..c607ebec74567 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -120,8 +120,6 @@
+  */
+ #define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
+ 
+-#ifdef IPA_VALIDATE
+-
+ /* Check things that can be validated at build time. */
+ static void ipa_table_validate_build(void)
+ {
+@@ -161,7 +159,7 @@ ipa_table_valid_one(struct ipa *ipa, enum ipa_mem_id mem_id, bool route)
+ 	else
+ 		size = (1 + IPA_FILTER_COUNT_MAX) * sizeof(__le64);
+ 
+-	if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
++	if (!ipa_cmd_table_valid(ipa, mem, route))
+ 		return false;
+ 
+ 	/* mem->size >= size is sufficient, but we'll demand more */
+@@ -169,7 +167,7 @@ ipa_table_valid_one(struct ipa *ipa, enum ipa_mem_id mem_id, bool route)
+ 		return true;
+ 
+ 	/* Hashed table regions can be zero size if hashing is not supported */
+-	if (hashed && !mem->size)
++	if (ipa_table_hash_support(ipa) && !mem->size)
+ 		return true;
+ 
+ 	dev_err(dev, "%s table region %u size 0x%02x, expected 0x%02x\n",
+@@ -183,14 +181,22 @@ bool ipa_table_valid(struct ipa *ipa)
+ {
+ 	bool valid;
+ 
+-	valid = ipa_table_valid_one(IPA_MEM_V4_FILTER, false);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V4_FILTER_HASHED, false);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V6_FILTER, false);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V6_FILTER_HASHED, false);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V4_ROUTE, true);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V4_ROUTE_HASHED, true);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V6_ROUTE, true);
+-	valid = valid && ipa_table_valid_one(IPA_MEM_V6_ROUTE_HASHED, true);
++	valid = ipa_table_valid_one(ipa, IPA_MEM_V4_FILTER, false);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_FILTER, false);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_ROUTE, true);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_ROUTE, true);
++
++	if (!ipa_table_hash_support(ipa))
++		return valid;
++
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_FILTER_HASHED,
++					     false);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_FILTER_HASHED,
++					     false);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V4_ROUTE_HASHED,
++					     true);
++	valid = valid && ipa_table_valid_one(ipa, IPA_MEM_V6_ROUTE_HASHED,
++					     true);
+ 
+ 	return valid;
+ }
+@@ -217,14 +223,6 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
+ 	return true;
+ }
+ 
+-#else /* !IPA_VALIDATE */
+-static void ipa_table_validate_build(void)
+-
+-{
+-}
+-
+-#endif /* !IPA_VALIDATE */
+-
+ /* Zero entry count means no table, so just return a 0 address */
+ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
+ {
+diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
+index 1e2be9fce2f81..b6a9a0d79d68e 100644
+--- a/drivers/net/ipa/ipa_table.h
++++ b/drivers/net/ipa/ipa_table.h
+@@ -16,8 +16,6 @@ struct ipa;
+ /* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
+ #define IPA_ROUTE_COUNT_MAX	15
+ 
+-#ifdef IPA_VALIDATE
+-
+ /**
+  * ipa_table_valid() - Validate route and filter table memory regions
+  * @ipa:	IPA pointer
+@@ -35,20 +33,6 @@ bool ipa_table_valid(struct ipa *ipa);
+  */
+ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
+ 
+-#else /* !IPA_VALIDATE */
+-
+-static inline bool ipa_table_valid(struct ipa *ipa)
+-{
+-	return true;
+-}
+-
+-static inline bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask)
+-{
+-	return true;
+-}
+-
+-#endif /* !IPA_VALIDATE */
+-
+ /**
+  * ipa_table_hash_support() - Return true if hashed tables are supported
+  * @ipa:	IPA pointer
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index f7a2ec150e542..211b5476a6f51 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -326,11 +326,9 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
+ 
+ static int dp8382x_disable_wol(struct phy_device *phydev)
+ {
+-	int value = DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
+-		    DP83822_WOL_SECURE_ON;
+-
+-	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
+-				  MII_DP83822_WOL_CFG, value);
++	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG,
++				  DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
++				  DP83822_WOL_SECURE_ON);
+ }
+ 
+ static int dp83822_read_status(struct phy_device *phydev)
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b4885a700296e..b0a4ca3559fd8 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -3351,7 +3351,8 @@ found:
+ 			"Found block at %x: code=%d ref=%d length=%d major=%d minor=%d\n",
+ 			cptr, code, reference, length, major, minor);
+ 		if ((!AR_SREV_9485(ah) && length >= 1024) ||
+-		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485)) {
++		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485) ||
++		    (length > cptr)) {
+ 			ath_dbg(common, EEPROM, "Skipping bad header\n");
+ 			cptr -= COMP_HDR_LEN;
+ 			continue;
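
The added (length > cptr) test bounds the backwards walk through the
compressed EEPROM: assuming cptr counts the bytes still available below
the current block header, a block claiming a larger length would read
out of range. A trivial sketch of that invariant (all values
hypothetical, not the driver's API):

#include <stdbool.h>
#include <stdio.h>

static bool header_ok(int cptr, int length)
{
	/* the block body sits below the header: it must fit in cptr bytes */
	return length <= cptr;
}

int main(void)
{
	int cptr = 0x100;	/* hypothetical read position */

	printf("length 0x080: %d\n", header_ok(cptr, 0x080));	/* 1: keep */
	printf("length 0x200: %d\n", header_ok(cptr, 0x200));	/* 0: skip */
	return 0;
}
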
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 2ca3b86714a9d..172081ffe4774 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -1621,7 +1621,6 @@ static void ath9k_hw_apply_gpio_override(struct ath_hw *ah)
+ 		ath9k_hw_gpio_request_out(ah, i, NULL,
+ 					  AR_GPIO_OUTPUT_MUX_AS_OUTPUT);
+ 		ath9k_hw_set_gpio(ah, i, !!(ah->gpio_val & BIT(i)));
+-		ath9k_hw_gpio_free(ah, i);
+ 	}
+ }
+ 
+@@ -2728,14 +2727,17 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type)
+ static void ath9k_hw_gpio_cfg_soc(struct ath_hw *ah, u32 gpio, bool out,
+ 				  const char *label)
+ {
++	int err;
++
+ 	if (ah->caps.gpio_requested & BIT(gpio))
+ 		return;
+ 
+-	/* may be requested by BSP, free anyway */
+-	gpio_free(gpio);
+-
+-	if (gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label))
++	err = gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label);
++	if (err) {
++		ath_err(ath9k_hw_common(ah), "request GPIO%d failed:%d\n",
++			gpio, err);
+ 		return;
++	}
+ 
+ 	ah->caps.gpio_requested |= BIT(gpio);
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index d202f2128df23..67f4db662402b 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -408,13 +408,14 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 		wcn36xx_dbg(WCN36XX_DBG_MAC, "wcn36xx_config channel switch=%d\n",
+ 			    ch);
+ 
+-		if (wcn->sw_scan_opchannel == ch) {
++		if (wcn->sw_scan_opchannel == ch && wcn->sw_scan_channel) {
+ 			/* If channel is the initial operating channel, we may
+ 			 * want to receive/transmit regular data packets, then
+ 			 * simply stop the scan session and exit PS mode.
+ 			 */
+ 			wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
+ 						wcn->sw_scan_vif);
++			wcn->sw_scan_channel = 0;
+ 		} else if (wcn->sw_scan) {
+ 			/* A scan is ongoing, do not change the operating
+ 			 * channel, but start a scan session on the channel.
+@@ -422,6 +423,7 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 			wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
+ 					      wcn->sw_scan_vif);
+ 			wcn36xx_smd_start_scan(wcn, ch);
++			wcn->sw_scan_channel = ch;
+ 		} else {
+ 			wcn36xx_change_opchannel(wcn, ch);
+ 		}
+@@ -702,6 +704,7 @@ static void wcn36xx_sw_scan_start(struct ieee80211_hw *hw,
+ 
+ 	wcn->sw_scan = true;
+ 	wcn->sw_scan_vif = vif;
++	wcn->sw_scan_channel = 0;
+ 	if (vif_priv->sta_assoc)
+ 		wcn->sw_scan_opchannel = WCN36XX_HW_CHANNEL(wcn);
+ 	else
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index 1b831157ede17..cab196bb38cd4 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -287,6 +287,10 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 		status.rate_idx = 0;
+ 	}
+ 
++	if (ieee80211_is_beacon(hdr->frame_control) ||
++	    ieee80211_is_probe_resp(hdr->frame_control))
++		status.boottime_ns = ktime_get_boottime_ns();
++
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 6121d8a5641ab..0feb235b5a426 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -246,6 +246,7 @@ struct wcn36xx {
+ 	struct cfg80211_scan_request *scan_req;
+ 	bool			sw_scan;
+ 	u8			sw_scan_opchannel;
++	u8			sw_scan_channel;
+ 	struct ieee80211_vif	*sw_scan_vif;
+ 	struct mutex		scan_lock;
+ 	bool			scan_aborted;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+index b2605aefc2909..8b200379f7c20 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -874,7 +874,7 @@ struct iwl_scan_probe_params_v3 {
+ 	u8 reserved;
+ 	struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
+ 	__le32 short_ssid[SCAN_SHORT_SSID_MAX_SIZE];
+-	u8 bssid_array[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
++	u8 bssid_array[SCAN_BSSID_MAX_SIZE][ETH_ALEN];
+ } __packed; /* SCAN_PROBE_PARAMS_API_S_VER_3 */
+ 
+ /**
+@@ -894,7 +894,7 @@ struct iwl_scan_probe_params_v4 {
+ 	__le16 reserved;
+ 	struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
+ 	__le32 short_ssid[SCAN_SHORT_SSID_MAX_SIZE];
+-	u8 bssid_array[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
++	u8 bssid_array[SCAN_BSSID_MAX_SIZE][ETH_ALEN];
+ } __packed; /* SCAN_PROBE_PARAMS_API_S_VER_4 */
+ 
+ #define SCAN_MAX_NUM_CHANS_V3 67
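
The dimension swap above is the whole fix: the firmware parses the
buffer as consecutive 6-byte MACs, which only
u8 bssid_array[SCAN_BSSID_MAX_SIZE][ETH_ALEN] lays out. A standalone
demonstration (ETH_ALEN is the real value; the table depth 16 is an
assumed stand-in for SCAN_BSSID_MAX_SIZE):

#include <stdio.h>
#include <string.h>

#define ETH_ALEN		6
#define SCAN_BSSID_MAX_SIZE	16	/* assumed depth for illustration */

int main(void)
{
	unsigned char right[SCAN_BSSID_MAX_SIZE][ETH_ALEN];
	unsigned char wrong[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
	unsigned char mac[ETH_ALEN] = { 0x02, 0, 0, 0, 0, 0x01 };

	/* same total size, so the bug is invisible to sizeof checks */
	printf("total: %zu vs %zu bytes\n", sizeof(right), sizeof(wrong));

	/* right[i] is one contiguous MAC, so per-entry memcpy works */
	memcpy(right[3], mac, ETH_ALEN);

	/* wrong[i] is a 16-byte row: copying a MAC into it scatters the
	 * address across what firmware reads as several 6-byte records */
	printf("row: %zu vs %zu bytes\n", sizeof(right[0]), sizeof(wrong[0]));
	return 0;
}
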
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index df7c55e06f54e..a13fe01e487b9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -2321,7 +2321,7 @@ static void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt,
+ 		return;
+ 
+ 	if (dump_data->monitor_only)
+-		dump_mask &= IWL_FW_ERROR_DUMP_FW_MONITOR;
++		dump_mask &= BIT(IWL_FW_ERROR_DUMP_FW_MONITOR);
+ 
+ 	fw_error_dump.trans_ptr = iwl_trans_dump_data(fwrt->trans, dump_mask);
+ 	file_len = le32_to_cpu(dump_file->file_len);
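
The one-line dbg.c change fixes the classic "enum index used as a
bitmask" bug: IWL_FW_ERROR_DUMP_FW_MONITOR is a bit position, so the
mask needs BIT() around it. A short demonstration with made-up enum
values:

#include <stdio.h>

#define BIT(n) (1U << (n))

enum { DUMP_CSR = 0, DUMP_FW_MONITOR = 2, DUMP_MEM = 3 };	/* illustrative */

int main(void)
{
	unsigned mask = BIT(DUMP_CSR) | BIT(DUMP_FW_MONITOR) | BIT(DUMP_MEM);

	/* bug: '& 2' tests bit 1, an unrelated dump type -> 0 here */
	printf("raw enum:  %#x\n", mask & DUMP_FW_MONITOR);
	/* fix: '& BIT(2)' keeps exactly the FW monitor bit -> 0x4 */
	printf("BIT(enum): %#x\n", mask & BIT(DUMP_FW_MONITOR));
	return 0;
}
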
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index fd5e089616515..7f0c821898082 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1005,8 +1005,10 @@ int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm,
+ 		return -ENOMEM;
+ 
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+-	if (mvm->beacon_inject_active)
++	if (mvm->beacon_inject_active) {
++		dev_kfree_skb(beacon);
+ 		return -EBUSY;
++	}
+ #endif
+ 
+ 	ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 70ebecb73c244..79f44435972e4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2987,16 +2987,20 @@ static void iwl_mvm_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ 						    void *_data)
+ {
+ 	struct iwl_mvm_he_obss_narrow_bw_ru_data *data = _data;
++	const struct cfg80211_bss_ies *ies;
+ 	const struct element *elem;
+ 
+-	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, bss->ies->data,
+-				  bss->ies->len);
++	rcu_read_lock();
++	ies = rcu_dereference(bss->ies);
++	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, ies->data,
++				  ies->len);
+ 
+ 	if (!elem || elem->datalen < 10 ||
+ 	    !(elem->data[10] &
+ 	      WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT)) {
+ 		data->tolerated = false;
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static void iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw,
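
bss->ies is RCU-managed, so the hunk loads it once under
rcu_read_lock() via rcu_dereference() and then touches only that
snapshot. Below is a loose user-space analogue of the snapshot-once
rule using C11 atomics; note that real RCU additionally keeps the old
object alive until rcu_read_unlock(), which plain atomics do not:

#include <stdatomic.h>
#include <stdio.h>

struct ies { int len; const char *data; };

static _Atomic(struct ies *) bss_ies;	/* replaced by a writer elsewhere */

static void reader(void)
{
	/* analogue of rcu_dereference(): one consistent snapshot */
	struct ies *ies = atomic_load_explicit(&bss_ies,
					       memory_order_acquire);

	/* all field accesses go through the snapshot, never through
	 * bss_ies again, so a concurrent replace cannot tear our view */
	printf("len=%d data=%s\n", ies->len, ies->data);
}

int main(void)
{
	static struct ies initial = { 5, "hello" };

	atomic_store(&bss_ies, &initial);
	reader();
	return 0;
}
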
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 20e8d343a9501..b637cf9d85fd7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -792,10 +792,26 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 
+ 	mvm->fw_restart = iwlwifi_mod_params.fw_restart ? -1 : 0;
+ 
+-	mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
+-	mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
+-	mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
+-	mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	if (iwl_mvm_has_new_tx_api(mvm)) {
++		/*
++		 * If we have the new TX/queue allocation API initialize them
++		 * all to invalid numbers. We'll rewrite the ones that we need
++		 * later, but that doesn't happen for all of them all of the
++		 * time (e.g. P2P Device is optional), and if a dynamic queue
++		 * ends up getting number 2 (IWL_MVM_DQA_P2P_DEVICE_QUEUE) then
++		 * iwl_mvm_is_static_queue() erroneously returns true, and we
++		 * might have things getting stuck.
++		 */
++		mvm->aux_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->snif_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->probe_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_INVALID_QUEUE;
++	} else {
++		mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
++		mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
++		mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	}
+ 
+ 	mvm->sf_state = SF_UNINIT;
+ 	if (iwl_mvm_has_unified_ucode(mvm))
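
The comment above can be made concrete: if the unused P2P queue keeps a
default of 2, any dynamic queue that is later assigned number 2 looks
static. A simplified stand-in for iwl_mvm_is_static_queue() (the real
check compares against all four queues):

#include <stdbool.h>
#include <stdio.h>

#define DQA_P2P_DEVICE_QUEUE	2	/* static queue id from the hunk */
#define INVALID_QUEUE		0xffff	/* sentinel */

struct mvm { int p2p_dev_queue; };

static bool is_static_queue(struct mvm *m, int queue)
{
	return queue == m->p2p_dev_queue;	/* simplified stand-in */
}

int main(void)
{
	struct mvm m = { .p2p_dev_queue = DQA_P2P_DEVICE_QUEUE };

	/* new TX API: the P2P queue is never allocated, yet a dynamic
	 * queue that gets number 2 is wrongly treated as static */
	printf("stale default: %d\n", is_static_queue(&m, 2));	/* 1: bug */

	m.p2p_dev_queue = INVALID_QUEUE;
	printf("sentinel:      %d\n", is_static_queue(&m, 2));	/* 0: ok */
	return 0;
}
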
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 0368b7101222c..2d600a8b20ed7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1648,7 +1648,7 @@ iwl_mvm_umac_scan_cfg_channels_v6(struct iwl_mvm *mvm,
+ 		struct iwl_scan_channel_cfg_umac *cfg = &cp->channel_config[i];
+ 		u32 n_aps_flag =
+ 			iwl_mvm_scan_ch_n_aps_flag(vif_type,
+-						   cfg->v2.channel_num);
++						   channels[i]->hw_value);
+ 
+ 		cfg->flags = cpu_to_le32(flags | n_aps_flag);
+ 		cfg->v2.channel_num = channels[i]->hw_value;
+@@ -2368,14 +2368,17 @@ static int iwl_mvm_scan_umac_v14(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 	if (ret)
+ 		return ret;
+ 
+-	iwl_mvm_scan_umac_fill_probe_p_v4(params, &scan_p->probe_params,
+-					  &bitmap_ssid);
+ 	if (!params->scan_6ghz) {
++		iwl_mvm_scan_umac_fill_probe_p_v4(params, &scan_p->probe_params,
++					  &bitmap_ssid);
+ 		iwl_mvm_scan_umac_fill_ch_p_v6(mvm, params, vif,
+-					       &scan_p->channel_params, bitmap_ssid);
++				       &scan_p->channel_params, bitmap_ssid);
+ 
+ 		return 0;
++	} else {
++		pb->preq = params->preq;
+ 	}
++
+ 	cp->flags = iwl_mvm_scan_umac_chan_flags_v2(mvm, params, vif);
+ 	cp->n_aps_override[0] = IWL_SCAN_ADWELL_N_APS_GO_FRIENDLY;
+ 	cp->n_aps_override[1] = IWL_SCAN_ADWELL_N_APS_SOCIAL_CHS;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 9c45a64c50094..252b81b1dc8cf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -316,8 +316,9 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue,
+ }
+ 
+ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			       int queue, u8 tid, u8 flags)
++			       u16 *queueptr, u8 tid, u8 flags)
+ {
++	int queue = *queueptr;
+ 	struct iwl_scd_txq_cfg_cmd cmd = {
+ 		.scd_queue = queue,
+ 		.action = SCD_CFG_DISABLE_QUEUE,
+@@ -326,6 +327,7 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 
+ 	if (iwl_mvm_has_new_tx_api(mvm)) {
+ 		iwl_trans_txq_free(mvm->trans, queue);
++		*queueptr = IWL_MVM_INVALID_QUEUE;
+ 
+ 		return 0;
+ 	}
+@@ -487,6 +489,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 	u8 sta_id, tid;
+ 	unsigned long disable_agg_tids = 0;
+ 	bool same_sta;
++	u16 queue_tmp = queue;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+@@ -509,7 +512,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 		iwl_mvm_invalidate_sta_queue(mvm, queue,
+ 					     disable_agg_tids, false);
+ 
+-	ret = iwl_mvm_disable_txq(mvm, old_sta, queue, tid, 0);
++	ret = iwl_mvm_disable_txq(mvm, old_sta, &queue_tmp, tid, 0);
+ 	if (ret) {
+ 		IWL_ERR(mvm,
+ 			"Failed to free inactive queue %d (ret=%d)\n",
+@@ -1184,6 +1187,7 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	unsigned int wdg_timeout =
+ 		iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false);
+ 	int queue = -1;
++	u16 queue_tmp;
+ 	unsigned long disable_agg_tids = 0;
+ 	enum iwl_mvm_agg_state queue_state;
+ 	bool shared_queue = false, inc_ssn;
+@@ -1332,7 +1336,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	return 0;
+ 
+ out_err:
+-	iwl_mvm_disable_txq(mvm, sta, queue, tid, 0);
++	queue_tmp = queue;
++	iwl_mvm_disable_txq(mvm, sta, &queue_tmp, tid, 0);
+ 
+ 	return ret;
+ }
+@@ -1779,7 +1784,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm,
+ 		if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE)
+ 			continue;
+ 
+-		iwl_mvm_disable_txq(mvm, sta, mvm_sta->tid_data[i].txq_id, i,
++		iwl_mvm_disable_txq(mvm, sta, &mvm_sta->tid_data[i].txq_id, i,
+ 				    0);
+ 		mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE;
+ 	}
+@@ -1987,7 +1992,7 @@ static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx,
+ 	ret = iwl_mvm_add_int_sta_common(mvm, sta, addr, macidx, maccolor);
+ 	if (ret) {
+ 		if (!iwl_mvm_has_new_tx_api(mvm))
+-			iwl_mvm_disable_txq(mvm, NULL, *queue,
++			iwl_mvm_disable_txq(mvm, NULL, queue,
+ 					    IWL_MAX_TID_COUNT, 0);
+ 		return ret;
+ 	}
+@@ -2060,7 +2065,7 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2077,7 +2082,7 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ 	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2173,7 +2178,7 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 					  struct ieee80211_vif *vif)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	int queue;
++	u16 *queueptr, queue;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -2182,10 +2187,10 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_ADHOC:
+-		queue = mvm->probe_queue;
++		queueptr = &mvm->probe_queue;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_DEVICE:
+-		queue = mvm->p2p_dev_queue;
++		queueptr = &mvm->p2p_dev_queue;
+ 		break;
+ 	default:
+ 		WARN(1, "Can't free bcast queue on vif type %d\n",
+@@ -2193,7 +2198,8 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 		return;
+ 	}
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, queue, IWL_MAX_TID_COUNT, 0);
++	queue = *queueptr;
++	iwl_mvm_disable_txq(mvm, NULL, queueptr, IWL_MAX_TID_COUNT, 0);
+ 	if (iwl_mvm_has_new_tx_api(mvm))
+ 		return;
+ 
+@@ -2428,7 +2434,7 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true);
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvmvif->cab_queue, 0, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvmvif->cab_queue, 0, 0);
+ 
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id);
+ 	if (ret)
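
The int-to-u16-pointer change in iwl_mvm_disable_txq() is a common
ownership pattern: the free routine itself invalidates the caller's
handle, so no stale queue number can be reused. A standalone sketch
(names illustrative):

#include <stdint.h>
#include <stdio.h>

#define INVALID_QUEUE 0xffff

static void disable_txq(uint16_t *queueptr)
{
	uint16_t queue = *queueptr;

	printf("freeing hw queue %u\n", queue);
	/* the callee, not each caller, clears the handle */
	*queueptr = INVALID_QUEUE;
}

int main(void)
{
	uint16_t aux_queue = 5;	/* hypothetical queue id */

	disable_txq(&aux_queue);
	if (aux_queue == INVALID_QUEUE)
		puts("caller's handle invalidated in one place");
	return 0;
}
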
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index d3307a11fcac4..24b658a3098aa 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -246,6 +246,18 @@ static void iwl_mvm_te_check_trigger(struct iwl_mvm *mvm,
+ 	}
+ }
+ 
++static void iwl_mvm_p2p_roc_finished(struct iwl_mvm *mvm)
++{
++	/*
++	 * If IWL_MVM_STATUS_NEED_FLUSH_P2P is already set, then the
++	 * roc_done_wk is already scheduled or running, so don't schedule it
++	 * again to avoid a race where the roc_done_wk clears this bit after
++	 * it is set here, affecting the next run of the roc_done_wk.
++	 */
++	if (!test_and_set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status))
++		iwl_mvm_roc_finished(mvm);
++}
++
+ /*
+  * Handles a FW notification for an event that is known to the driver.
+  *
+@@ -297,8 +309,7 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
+ 		switch (te_data->vif->type) {
+ 		case NL80211_IFTYPE_P2P_DEVICE:
+ 			ieee80211_remain_on_channel_expired(mvm->hw);
+-			set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
+-			iwl_mvm_roc_finished(mvm);
++			iwl_mvm_p2p_roc_finished(mvm);
+ 			break;
+ 		case NL80211_IFTYPE_STATION:
+ 			/*
+@@ -674,8 +685,7 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
+ 			/* Session protection is still ongoing. Cancel it */
+ 			iwl_mvm_cancel_session_protection(mvm, mvmvif, id);
+ 			if (iftype == NL80211_IFTYPE_P2P_DEVICE) {
+-				set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
+-				iwl_mvm_roc_finished(mvm);
++				iwl_mvm_p2p_roc_finished(mvm);
+ 			}
+ 		}
+ 		return false;
+@@ -842,8 +852,7 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
+ 		/* End TE, notify mac80211 */
+ 		mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
+ 		ieee80211_remain_on_channel_expired(mvm->hw);
+-		set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
+-		iwl_mvm_roc_finished(mvm);
++		iwl_mvm_p2p_roc_finished(mvm);
+ 	} else if (le32_to_cpu(notif->start)) {
+ 		if (WARN_ON(mvmvif->time_event_data.id !=
+ 				le32_to_cpu(notif->conf_id)))
+@@ -1004,14 +1013,13 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 		if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ 			iwl_mvm_cancel_session_protection(mvm, mvmvif,
+ 							  mvmvif->time_event_data.id);
+-			set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
++			iwl_mvm_p2p_roc_finished(mvm);
+ 		} else {
+ 			iwl_mvm_remove_aux_roc_te(mvm, mvmvif,
+ 						  &mvmvif->time_event_data);
++			iwl_mvm_roc_finished(mvm);
+ 		}
+ 
+-		iwl_mvm_roc_finished(mvm);
+-
+ 		return;
+ 	}
+ 
+@@ -1025,12 +1033,11 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ 		iwl_mvm_remove_time_event(mvm, mvmvif, te_data);
+-		set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);
++		iwl_mvm_p2p_roc_finished(mvm);
+ 	} else {
+ 		iwl_mvm_remove_aux_roc_te(mvm, mvmvif, te_data);
++		iwl_mvm_roc_finished(mvm);
+ 	}
+-
+-	iwl_mvm_roc_finished(mvm);
+ }
+ 
+ void iwl_mvm_remove_csa_period(struct iwl_mvm *mvm,
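
The test_and_set_bit() guard in iwl_mvm_p2p_roc_finished() is a general
"arm once until acknowledged" idiom. A loose user-space analogue with
C11 atomics (atomic_flag_test_and_set(), like the kernel's
test_and_set_bit(), returns the previous value):

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag need_flush = ATOMIC_FLAG_INIT;

static void roc_finished(void)	/* stands in for iwl_mvm_roc_finished() */
{
	puts("scheduling roc_done work");
}

static void p2p_roc_finished(void)
{
	/* only the first caller between two clears schedules the work,
	 * so a clear racing with a set cannot be lost */
	if (!atomic_flag_test_and_set(&need_flush))
		roc_finished();
}

int main(void)
{
	p2p_roc_finished();		/* schedules */
	p2p_roc_finished();		/* no-op: not yet acknowledged */
	atomic_flag_clear(&need_flush);	/* roc_done_wk acknowledges */
	p2p_roc_finished();		/* schedules again */
	return 0;
}
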
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 4f6f4b2720f01..ff7ca3c57f34d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -487,6 +487,9 @@ void iwl_pcie_free_rbs_pool(struct iwl_trans *trans)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int i;
+ 
++	if (!trans_pcie->rx_pool)
++		return;
++
+ 	for (i = 0; i < RX_POOL_SIZE(trans_pcie->num_rx_bufs); i++) {
+ 		if (!trans_pcie->rx_pool[i].page)
+ 			continue;
+@@ -1062,7 +1065,7 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
+ 	INIT_LIST_HEAD(&rba->rbd_empty);
+ 	spin_unlock_bh(&rba->lock);
+ 
+-	/* free all first - we might be reconfigured for a different size */
++	/* free all first - we overwrite everything here */
+ 	iwl_pcie_free_rbs_pool(trans);
+ 
+ 	for (i = 0; i < RX_QUEUE_SIZE; i++)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index bee6b45742268..65cc25cbb9ec0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1866,6 +1866,9 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans,
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 
++	/* free all first - we might be reconfigured for a different size */
++	iwl_pcie_free_rbs_pool(trans);
++
+ 	trans->txqs.cmd.q_id = trans_cfg->cmd_queue;
+ 	trans->txqs.cmd.fifo = trans_cfg->cmd_fifo;
+ 	trans->txqs.cmd.wdg_timeout = trans_cfg->cmd_q_wdg_timeout;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index 01735776345a9..7ddce3c3f0c48 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1378,6 +1378,8 @@ struct rtl8xxxu_priv {
+ 	u8 no_pape:1;
+ 	u8 int_buf[USB_INTR_CONTENT_LENGTH];
+ 	u8 rssi_level;
++	DECLARE_BITMAP(tx_aggr_started, IEEE80211_NUM_TIDS);
++	DECLARE_BITMAP(tid_tx_operational, IEEE80211_NUM_TIDS);
+ 	/*
+ 	 * Only one virtual interface permitted because only STA mode
+ 	 * is supported and no iface_combinations are provided.
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index ac1061caacd65..3285a91efb91e 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4805,6 +4805,8 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct ieee80211_rate *tx_rate = ieee80211_get_tx_rate(hw, tx_info);
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4828,7 +4830,7 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc->txdw3 = cpu_to_le32((u32)seq_number << TXDESC32_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_ENABLE);
+ 	else
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_BREAK);
+@@ -4876,6 +4878,8 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
+ 	struct rtl8xxxu_txdesc40 *tx_desc40;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4902,7 +4906,7 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc40->txdw9 = cpu_to_le32((u32)seq_number << TXDESC40_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_ENABLE);
+ 	else
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_BREAK);
+@@ -5015,12 +5019,19 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ 	if (ieee80211_is_data_qos(hdr->frame_control) && sta) {
+ 		if (sta->ht_cap.ht_supported) {
+ 			u32 ampdu, val32;
++			u8 *qc = ieee80211_get_qos_ctl(hdr);
++			u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 
+ 			ampdu = (u32)sta->ht_cap.ampdu_density;
+ 			val32 = ampdu << TXDESC_AMPDU_DENSITY_SHIFT;
+ 			tx_desc->txdw2 |= cpu_to_le32(val32);
+ 
+ 			ampdu_enable = true;
++
++			if (!test_bit(tid, priv->tx_aggr_started) &&
++			    !(skb->protocol == cpu_to_be16(ETH_P_PAE)))
++				if (!ieee80211_start_tx_ba_session(sta, tid, 0))
++					set_bit(tid, priv->tx_aggr_started);
+ 		}
+ 	}
+ 
+@@ -6096,6 +6107,7 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct device *dev = &priv->udev->dev;
+ 	u8 ampdu_factor, ampdu_density;
+ 	struct ieee80211_sta *sta = params->sta;
++	u16 tid = params->tid;
+ 	enum ieee80211_ampdu_mlme_action action = params->action;
+ 
+ 	switch (action) {
+@@ -6108,17 +6120,20 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		dev_dbg(dev,
+ 			"Changed HT: ampdu_factor %02x, ampdu_density %02x\n",
+ 			ampdu_factor, ampdu_density);
+-		break;
++		return IEEE80211_AMPDU_TX_START_IMMEDIATE;
++	case IEEE80211_AMPDU_TX_STOP_CONT:
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH\n", __func__);
+-		rtl8xxxu_set_ampdu_factor(priv, 0);
+-		rtl8xxxu_set_ampdu_min_space(priv, 0);
+-		break;
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH_CONT\n",
+-			 __func__);
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP\n", __func__);
+ 		rtl8xxxu_set_ampdu_factor(priv, 0);
+ 		rtl8xxxu_set_ampdu_min_space(priv, 0);
++		clear_bit(tid, priv->tx_aggr_started);
++		clear_bit(tid, priv->tid_tx_operational);
++		ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
++		break;
++	case IEEE80211_AMPDU_TX_OPERATIONAL:
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_OPERATIONAL\n", __func__);
++		set_bit(tid, priv->tid_tx_operational);
+ 		break;
+ 	case IEEE80211_AMPDU_RX_START:
+ 		dev_dbg(dev, "%s: IEEE80211_AMPDU_RX_START\n", __func__);
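
The two DECLARE_BITMAP() fields give one bit of aggregation state per
TID. A standalone equivalent of the test_bit()/set_bit() usage above,
with the helpers re-implemented in plain C (the '_' suffix marks them
as local stand-ins, not the kernel API):

#include <stdbool.h>
#include <stdio.h>

#define NUM_TIDS	16
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define BITMAP_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long tx_aggr_started[BITMAP_LONGS(NUM_TIDS)];
static unsigned long tid_tx_operational[BITMAP_LONGS(NUM_TIDS)];

static void set_bit_(int nr, unsigned long *map)
{ map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG); }
static bool test_bit_(int nr, const unsigned long *map)
{ return map[nr / BITS_PER_LONG] & (1UL << (nr % BITS_PER_LONG)); }

int main(void)
{
	int tid = 6;	/* hypothetical QoS TID */

	/* tx path: request at most one BA session per TID */
	if (!test_bit_(tid, tx_aggr_started))
		set_bit_(tid, tx_aggr_started);

	/* ampdu_action(TX_OPERATIONAL) later flips the second bitmap */
	set_bit_(tid, tid_tx_operational);

	/* tx descriptor: aggregate only once the session is operational */
	printf("AGG_%s\n",
	       test_bit_(tid, tid_tx_operational) ? "ENABLE" : "BREAK");
	return 0;
}
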
+diff --git a/drivers/net/wireless/realtek/rtw88/Makefile b/drivers/net/wireless/realtek/rtw88/Makefile
+index c0e4b111c8b4e..73d6807a8cdfb 100644
+--- a/drivers/net/wireless/realtek/rtw88/Makefile
++++ b/drivers/net/wireless/realtek/rtw88/Makefile
+@@ -15,9 +15,9 @@ rtw88_core-y += main.o \
+ 	   ps.o \
+ 	   sec.o \
+ 	   bf.o \
+-	   wow.o \
+ 	   regd.o
+ 
++rtw88_core-$(CONFIG_PM) += wow.o
+ 
+ obj-$(CONFIG_RTW88_8822B)	+= rtw88_8822b.o
+ rtw88_8822b-objs		:= rtw8822b.o rtw8822b_table.o
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 3bfa5ecc00537..e6399519584bd 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -819,7 +819,7 @@ static u16 rtw_get_rsvd_page_probe_req_size(struct rtw_dev *rtwdev,
+ 			continue;
+ 		if ((!ssid && !rsvd_pkt->ssid) ||
+ 		    rtw_ssid_equal(rsvd_pkt->ssid, ssid))
+-			size = rsvd_pkt->skb->len;
++			size = rsvd_pkt->probe_req_size;
+ 	}
+ 
+ 	return size;
+@@ -1047,6 +1047,8 @@ static struct sk_buff *rtw_get_rsvd_page_skb(struct ieee80211_hw *hw,
+ 							 ssid->ssid_len, 0);
+ 		else
+ 			skb_new = ieee80211_probereq_get(hw, vif->addr, NULL, 0, 0);
++		if (skb_new)
++			rsvd_pkt->probe_req_size = (u16)skb_new->len;
+ 		break;
+ 	case RSVD_NLO_INFO:
+ 		skb_new = rtw_nlo_info_get(hw);
+@@ -1643,6 +1645,7 @@ int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size,
+ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 				u8 location)
+ {
++	struct rtw_chip_info *chip = rtwdev->chip;
+ 	u8 h2c_pkt[H2C_PKT_SIZE] = {0};
+ 	u16 total_size = H2C_PKT_HDR_SIZE + H2C_PKT_UPDATE_PKT_LEN;
+ 
+@@ -1653,6 +1656,7 @@ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 	UPDATE_PKT_SET_LOCATION(h2c_pkt, location);
+ 
+ 	/* include txdesc size */
++	size += chip->tx_pkt_desc_sz;
+ 	UPDATE_PKT_SET_SIZE(h2c_pkt, size);
+ 
+ 	rtw_fw_send_h2c_packet(rtwdev, h2c_pkt);
+@@ -1662,7 +1666,7 @@ void rtw_fw_update_pkt_probe_req(struct rtw_dev *rtwdev,
+ 				 struct cfg80211_ssid *ssid)
+ {
+ 	u8 loc;
+-	u32 size;
++	u16 size;
+ 
+ 	loc = rtw_get_rsvd_page_probe_req_location(rtwdev, ssid);
+ 	if (!loc) {
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.h b/drivers/net/wireless/realtek/rtw88/fw.h
+index a8a7162fbe64c..a3a28ac6f1ded 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.h
++++ b/drivers/net/wireless/realtek/rtw88/fw.h
+@@ -147,6 +147,7 @@ struct rtw_rsvd_page {
+ 	u8 page;
+ 	bool add_txdesc;
+ 	struct cfg80211_ssid *ssid;
++	u16 probe_req_size;
+ };
+ 
+ enum rtw_keep_alive_pkt_type {
+diff --git a/drivers/net/wireless/realtek/rtw88/wow.c b/drivers/net/wireless/realtek/rtw88/wow.c
+index fc9544f4e5e45..bdccfa70dddc7 100644
+--- a/drivers/net/wireless/realtek/rtw88/wow.c
++++ b/drivers/net/wireless/realtek/rtw88/wow.c
+@@ -283,15 +283,26 @@ static void rtw_wow_rx_dma_start(struct rtw_dev *rtwdev)
+ 
+ static int rtw_wow_check_fw_status(struct rtw_dev *rtwdev, bool wow_enable)
+ {
+-	/* wait 100ms for wow firmware to finish work */
+-	msleep(100);
++	int ret;
++	u8 check;
++	u32 check_dis;
+ 
+ 	if (wow_enable) {
+-		if (rtw_read8(rtwdev, REG_WOWLAN_WAKE_REASON))
++		ret = read_poll_timeout(rtw_read8, check, !check, 1000,
++					100000, true, rtwdev,
++					REG_WOWLAN_WAKE_REASON);
++		if (ret)
+ 			goto wow_fail;
+ 	} else {
+-		if (rtw_read32_mask(rtwdev, REG_FE1IMR, BIT_FS_RXDONE) ||
+-		    rtw_read32_mask(rtwdev, REG_RXPKT_NUM, BIT_RW_RELEASE))
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, true, rtwdev,
++					REG_FE1IMR, BIT_FS_RXDONE);
++		if (ret)
++			goto wow_fail;
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, false, rtwdev,
++					REG_RXPKT_NUM, BIT_RW_RELEASE);
++		if (ret)
+ 			goto wow_fail;
+ 	}
+ 
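
read_poll_timeout() (linux/iopoll.h) replaces the fixed msleep(100)
with bounded polling: re-read every sleep_us until the condition holds
or timeout_us elapses. A user-space model of the enable-side call above
(the register read is a stub; the parameters mirror the kernel macro's
meaning, but this is a sketch, not its implementation):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static int reads;
static unsigned char rtw_read8_stub(void)
{
	/* pretend REG_WOWLAN_WAKE_REASON clears after a few polls */
	return ++reads < 4 ? 0xaa : 0x00;
}

/* poll cond every sleep_us up to timeout_us; 0 on success, -1 on timeout */
static int poll_timeout(unsigned sleep_us, unsigned timeout_us,
			bool sleep_before_read, unsigned char *val)
{
	for (unsigned waited = 0; waited <= timeout_us; waited += sleep_us) {
		if (sleep_before_read)
			usleep(sleep_us);
		*val = rtw_read8_stub();
		if (!*val)		/* cond: register reads as zero */
			return 0;
		if (!sleep_before_read)
			usleep(sleep_us);
	}
	return -1;
}

int main(void)
{
	unsigned char check;
	/* mirrors: read_poll_timeout(rtw_read8, check, !check,
	 *                            1000, 100000, true, rtwdev, reg) */
	int ret = poll_timeout(1000, 100000, true, &check);

	printf("wow firmware %s\n", ret ? "timed out" : "ready");
	return 0;
}
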
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 1e0615b8565e7..72de88ff0d30d 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -450,11 +450,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		bb_range = pmem->pgmap.range;
+ 	} else {
++		addr = devm_memremap(dev, pmem->phys_addr,
++				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		if (devm_add_action_or_reset(dev, pmem_release_queue,
+ 					&pmem->pgmap))
+ 			return -ENOMEM;
+-		addr = devm_memremap(dev, pmem->phys_addr,
+-				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		bb_range.start =  res->start;
+ 		bb_range.end = res->end;
+ 	}
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index dfd9dec0c1f60..2f0cbaba12ac4 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1029,7 +1029,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
+ 		return BLK_STS_IOERR;
+ 	}
+ 
+-	cmd->common.command_id = req->tag;
++	nvme_req(req)->genctr++;
++	cmd->common.command_id = nvme_cid(req);
+ 	trace_nvme_setup_cmd(req, cmd);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 5cd1fa3b8464d..26511794629bc 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -158,6 +158,7 @@ enum nvme_quirks {
+ struct nvme_request {
+ 	struct nvme_command	*cmd;
+ 	union nvme_result	result;
++	u8			genctr;
+ 	u8			retries;
+ 	u8			flags;
+ 	u16			status;
+@@ -497,6 +498,49 @@ struct nvme_ctrl_ops {
+ 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+ };
+ 
++/*
++ * nvme command_id is constructed as follows:
++ * | xxxx | xxxxxxxxxxxx |
++ *   gen    request tag
++ */
++#define nvme_genctr_mask(gen)			(gen & 0xf)
++#define nvme_cid_install_genctr(gen)		(nvme_genctr_mask(gen) << 12)
++#define nvme_genctr_from_cid(cid)		((cid & 0xf000) >> 12)
++#define nvme_tag_from_cid(cid)			(cid & 0xfff)
++
++static inline u16 nvme_cid(struct request *rq)
++{
++	return nvme_cid_install_genctr(nvme_req(rq)->genctr) | rq->tag;
++}
++
++static inline struct request *nvme_find_rq(struct blk_mq_tags *tags,
++		u16 command_id)
++{
++	u8 genctr = nvme_genctr_from_cid(command_id);
++	u16 tag = nvme_tag_from_cid(command_id);
++	struct request *rq;
++
++	rq = blk_mq_tag_to_rq(tags, tag);
++	if (unlikely(!rq)) {
++		pr_err("could not locate request for tag %#x\n",
++			tag);
++		return NULL;
++	}
++	if (unlikely(nvme_genctr_mask(nvme_req(rq)->genctr) != genctr)) {
++		dev_err(nvme_req(rq)->ctrl->device,
++			"request %#x genctr mismatch (got %#x expected %#x)\n",
++			tag, genctr, nvme_genctr_mask(nvme_req(rq)->genctr));
++		return NULL;
++	}
++	return rq;
++}
++
++static inline struct request *nvme_cid_to_rq(struct blk_mq_tags *tags,
++		u16 command_id)
++{
++	return blk_mq_tag_to_rq(tags, nvme_tag_from_cid(command_id));
++}
++
+ #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
+ void nvme_fault_inject_init(struct nvme_fault_inject *fault_inj,
+ 			    const char *dev_name);
+@@ -594,7 +638,8 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl)
+ 
+ static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
+ {
+-	return !qid && command_id >= NVME_AQ_BLK_MQ_DEPTH;
++	return !qid &&
++		nvme_tag_from_cid(command_id) >= NVME_AQ_BLK_MQ_DEPTH;
+ }
+ 
+ void nvme_complete_rq(struct request *req);
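
The command_id packing above is self-contained enough to exercise in
user space. The sketch below re-implements the four macros with stdint
types and checks the encode/decode round trip (macro names and the tag
value are illustrative, not the kernel API):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define GENCTR_MASK(gen)	((gen) & 0xf)
#define CID_INSTALL(gen)	(GENCTR_MASK(gen) << 12)
#define GENCTR_FROM(cid)	(((cid) & 0xf000) >> 12)
#define TAG_FROM(cid)		((cid) & 0xfff)

int main(void)
{
	uint8_t genctr = 0;
	uint16_t tag = 0x2a;	/* a hypothetical blk-mq tag */

	for (int i = 0; i < 20; i++) {
		genctr++;	/* bumped once per command submission */
		uint16_t cid = CID_INSTALL(genctr) | tag;

		/* completion side: recover the tag, verify the generation */
		assert(TAG_FROM(cid) == tag);
		assert(GENCTR_FROM(cid) == GENCTR_MASK(genctr));
	}

	/* a stale completion carries an old generation and is rejected */
	uint16_t stale = CID_INSTALL(genctr - 1) | tag;
	printf("stale genctr %x vs current %x -> mismatch\n",
	       GENCTR_FROM(stale), GENCTR_MASK(genctr));
	return 0;
}
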
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 51852085239ef..c246fdacba2e5 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1014,7 +1014,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 		return;
+ 	}
+ 
+-	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), command_id);
++	req = nvme_find_rq(nvme_queue_tagset(nvmeq), command_id);
+ 	if (unlikely(!req)) {
+ 		dev_warn(nvmeq->dev->ctrl.device,
+ 			"invalid id %d completed on queue %d\n",
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 3bd9cbc80246f..a68704e39084e 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1730,10 +1730,10 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
+ 	struct request *rq;
+ 	struct nvme_rdma_request *req;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_rdma_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_rdma_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"tag 0x%x on QP %#x not found\n",
++			"got bad command_id %#x on QP %#x\n",
+ 			cqe->command_id, queue->qp->qp_num);
+ 		nvme_rdma_error_recovery(queue->ctrl);
+ 		return;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 18bd68b82d78f..48b70e5235a39 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -487,11 +487,11 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag 0x%x not found\n",
+-			nvme_tcp_queue_id(queue), cqe->command_id);
++			"got bad cqe.command_id %#x on queue %d\n",
++			cqe->command_id, nvme_tcp_queue_id(queue));
+ 		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+ 		return -EINVAL;
+ 	}
+@@ -508,11 +508,11 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad c2hdata.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 
+@@ -606,7 +606,7 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 	data->hdr.plen =
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+ 	data->ttag = pdu->ttag;
+-	data->command_id = rq->tag;
++	data->command_id = nvme_cid(rq);
+ 	data->data_offset = cpu_to_le32(req->data_sent);
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ 	return 0;
+@@ -619,11 +619,11 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 	struct request *rq;
+ 	int ret;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad r2t.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 	req = blk_mq_rq_to_pdu(rq);
+@@ -702,17 +702,9 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ 			      unsigned int *offset, size_t *len)
+ {
+ 	struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu;
+-	struct nvme_tcp_request *req;
+-	struct request *rq;
+-
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
+-	if (!rq) {
+-		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
+-		return -ENOENT;
+-	}
+-	req = blk_mq_rq_to_pdu(rq);
++	struct request *rq =
++		nvme_cid_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+ 
+ 	while (true) {
+ 		int recv_len, ret;
+@@ -804,8 +796,8 @@ static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue,
+ 	}
+ 
+ 	if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
+-		struct request *rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue),
+-						pdu->command_id);
++		struct request *rq = nvme_cid_to_rq(nvme_tcp_tagset(queue),
++					pdu->command_id);
+ 
+ 		nvme_tcp_end_request(rq, NVME_SC_SUCCESS);
+ 		queue->nr_cqe++;
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index 3a17a7e26bbfc..0285ccc7541f6 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -107,10 +107,10 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
+ 	} else {
+ 		struct request *rq;
+ 
+-		rq = blk_mq_tag_to_rq(nvme_loop_tagset(queue), cqe->command_id);
++		rq = nvme_find_rq(nvme_loop_tagset(queue), cqe->command_id);
+ 		if (!rq) {
+ 			dev_err(queue->ctrl->ctrl.device,
+-				"tag 0x%x on queue %d not found\n",
++				"got bad command_id %#x on queue %d\n",
+ 				cqe->command_id, nvme_loop_queue_idx(queue));
+ 			return;
+ 		}
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index b3bc30a04ed7c..3d87fadaa160d 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -824,8 +824,11 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ 	if (nvmem->nkeepout) {
+ 		rval = nvmem_validate_keepouts(nvmem);
+-		if (rval)
+-			goto err_put_device;
++		if (rval) {
++			ida_free(&nvmem_ida, nvmem->id);
++			kfree(nvmem);
++			return ERR_PTR(rval);
++		}
+ 	}
+ 
+ 	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index 81fbad5e939df..b0ca4c6264665 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -139,6 +139,9 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ {
+ 	int ret;
+ 
++	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
++	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
++
+ 	/*
+ 	 * This may be a shared rail and may be able to run at a lower rate
+ 	 * when we're not blowing fuses.  At the moment, the regulator framework
+@@ -159,9 +162,6 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ 			 "Failed to set clock rate for disable (ignoring)\n");
+ 
+ 	clk_disable_unprepare(priv->secclk);
+-
+-	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
+-	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
+ }
+ 
+ /**
+diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
+index a32e60b024b8d..6675b5e56960c 100644
+--- a/drivers/of/kobj.c
++++ b/drivers/of/kobj.c
+@@ -119,7 +119,7 @@ int __of_attach_node_sysfs(struct device_node *np)
+ 	struct property *pp;
+ 	int rc;
+ 
+-	if (!of_kset)
++	if (!IS_ENABLED(CONFIG_SYSFS) || !of_kset)
+ 		return 0;
+ 
+ 	np->kobj.kset = of_kset;
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 67f2e0710e79c..2a97c6535c4c6 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -95,15 +95,7 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table,
+ static struct device_node *of_parse_required_opp(struct device_node *np,
+ 						 int index)
+ {
+-	struct device_node *required_np;
+-
+-	required_np = of_parse_phandle(np, "required-opps", index);
+-	if (unlikely(!required_np)) {
+-		pr_err("%s: Unable to parse required-opps: %pOF, index: %d\n",
+-		       __func__, np, index);
+-	}
+-
+-	return required_np;
++	return of_parse_phandle(np, "required-opps", index);
+ }
+ 
+ /* The caller must call dev_pm_opp_put_opp_table() after the table is used */
+@@ -1328,7 +1320,7 @@ int of_get_required_opp_performance_state(struct device_node *np, int index)
+ 
+ 	required_np = of_parse_required_opp(np, index);
+ 	if (!required_np)
+-		return -EINVAL;
++		return -ENODEV;
+ 
+ 	opp_table = _find_table_of_opp_np(required_np);
+ 	if (IS_ERR(opp_table)) {
+diff --git a/drivers/parport/ieee1284_ops.c b/drivers/parport/ieee1284_ops.c
+index 2c11bd3fe1fd6..17061f1df0f44 100644
+--- a/drivers/parport/ieee1284_ops.c
++++ b/drivers/parport/ieee1284_ops.c
+@@ -518,7 +518,7 @@ size_t parport_ieee1284_ecp_read_data (struct parport *port,
+ 				goto out;
+ 
+ 			/* Yield the port for a while. */
+-			if (count && dev->port->irq != PARPORT_IRQ_NONE) {
++			if (dev->port->irq != PARPORT_IRQ_NONE) {
+ 				parport_release (dev);
+ 				schedule_timeout_interruptible(msecs_to_jiffies(40));
+ 				parport_claim_or_block (dev);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index c95ebe808f92b..fdbf051586970 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -58,6 +58,7 @@
+ #define   PIO_COMPLETION_STATUS_CRS		2
+ #define   PIO_COMPLETION_STATUS_CA		4
+ #define   PIO_NON_POSTED_REQ			BIT(10)
++#define   PIO_ERR_STATUS			BIT(11)
+ #define PIO_ADDR_LS				(PIO_BASE_ADDR + 0x8)
+ #define PIO_ADDR_MS				(PIO_BASE_ADDR + 0xc)
+ #define PIO_WR_DATA				(PIO_BASE_ADDR + 0x10)
+@@ -118,6 +119,46 @@
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+ 
++/* PCIe window configuration */
++#define OB_WIN_BASE_ADDR			0x4c00
++#define OB_WIN_BLOCK_SIZE			0x20
++#define OB_WIN_COUNT				8
++#define OB_WIN_REG_ADDR(win, offset)		(OB_WIN_BASE_ADDR + \
++						 OB_WIN_BLOCK_SIZE * (win) + \
++						 (offset))
++#define OB_WIN_MATCH_LS(win)			OB_WIN_REG_ADDR(win, 0x00)
++#define     OB_WIN_ENABLE			BIT(0)
++#define OB_WIN_MATCH_MS(win)			OB_WIN_REG_ADDR(win, 0x04)
++#define OB_WIN_REMAP_LS(win)			OB_WIN_REG_ADDR(win, 0x08)
++#define OB_WIN_REMAP_MS(win)			OB_WIN_REG_ADDR(win, 0x0c)
++#define OB_WIN_MASK_LS(win)			OB_WIN_REG_ADDR(win, 0x10)
++#define OB_WIN_MASK_MS(win)			OB_WIN_REG_ADDR(win, 0x14)
++#define OB_WIN_ACTIONS(win)			OB_WIN_REG_ADDR(win, 0x18)
++#define OB_WIN_DEFAULT_ACTIONS			(OB_WIN_ACTIONS(OB_WIN_COUNT-1) + 0x4)
++#define     OB_WIN_FUNC_NUM_MASK		GENMASK(31, 24)
++#define     OB_WIN_FUNC_NUM_SHIFT		24
++#define     OB_WIN_FUNC_NUM_ENABLE		BIT(23)
++#define     OB_WIN_BUS_NUM_BITS_MASK		GENMASK(22, 20)
++#define     OB_WIN_BUS_NUM_BITS_SHIFT		20
++#define     OB_WIN_MSG_CODE_ENABLE		BIT(22)
++#define     OB_WIN_MSG_CODE_MASK		GENMASK(21, 14)
++#define     OB_WIN_MSG_CODE_SHIFT		14
++#define     OB_WIN_MSG_PAYLOAD_LEN		BIT(12)
++#define     OB_WIN_ATTR_ENABLE			BIT(11)
++#define     OB_WIN_ATTR_TC_MASK			GENMASK(10, 8)
++#define     OB_WIN_ATTR_TC_SHIFT		8
++#define     OB_WIN_ATTR_RELAXED			BIT(7)
++#define     OB_WIN_ATTR_NOSNOOP			BIT(6)
++#define     OB_WIN_ATTR_POISON			BIT(5)
++#define     OB_WIN_ATTR_IDO			BIT(4)
++#define     OB_WIN_TYPE_MASK			GENMASK(3, 0)
++#define     OB_WIN_TYPE_SHIFT			0
++#define     OB_WIN_TYPE_MEM			0x0
++#define     OB_WIN_TYPE_IO			0x4
++#define     OB_WIN_TYPE_CONFIG_TYPE0		0x8
++#define     OB_WIN_TYPE_CONFIG_TYPE1		0x9
++#define     OB_WIN_TYPE_MSG			0xc
++
+ /* LMI registers base address and register offsets */
+ #define LMI_BASE_ADDR				0x6000
+ #define CFG_REG					(LMI_BASE_ADDR + 0x0)
+@@ -166,7 +207,7 @@
+ #define PCIE_CONFIG_WR_TYPE0			0xa
+ #define PCIE_CONFIG_WR_TYPE1			0xb
+ 
+-#define PIO_RETRY_CNT			500
++#define PIO_RETRY_CNT			750000 /* 1.5 s */
+ #define PIO_RETRY_DELAY			2 /* 2 us */
+ 
+ #define LINK_WAIT_MAX_RETRIES		10
+@@ -180,8 +221,16 @@
+ struct advk_pcie {
+ 	struct platform_device *pdev;
+ 	void __iomem *base;
++	struct {
++		phys_addr_t match;
++		phys_addr_t remap;
++		phys_addr_t mask;
++		u32 actions;
++	} wins[OB_WIN_COUNT];
++	u8 wins_count;
+ 	struct irq_domain *irq_domain;
+ 	struct irq_chip irq_chip;
++	raw_spinlock_t irq_lock;
+ 	struct irq_domain *msi_domain;
+ 	struct irq_domain *msi_inner_domain;
+ 	struct irq_chip msi_bottom_irq_chip;
+@@ -366,9 +415,39 @@ err:
+ 	dev_err(dev, "link never came up\n");
+ }
+ 
++/*
++ * Set PCIe address window register which could be used for memory
++ * mapping.
++ */
++static void advk_pcie_set_ob_win(struct advk_pcie *pcie, u8 win_num,
++				 phys_addr_t match, phys_addr_t remap,
++				 phys_addr_t mask, u32 actions)
++{
++	advk_writel(pcie, OB_WIN_ENABLE |
++			  lower_32_bits(match), OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, upper_32_bits(match), OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, lower_32_bits(remap), OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, upper_32_bits(remap), OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, lower_32_bits(mask), OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, upper_32_bits(mask), OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, actions, OB_WIN_ACTIONS(win_num));
++}
++
++static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
++{
++	advk_writel(pcie, 0, OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_ACTIONS(win_num));
++}
++
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
+ 	u32 reg;
++	int i;
+ 
+ 	/* Enable TX */
+ 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
+@@ -447,15 +526,51 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
+ 	advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG);
+ 
++	/*
++	 * Enable AXI address window location generation:
++	 * When it is enabled, the default outbound window
++	 * configurations (Default User Field: 0xD0074CFC)
++	 * are used for transparent address translation of
++	 * the outbound transactions. Thus, PCIe address
++	 * windows are not required for transparent memory
++	 * access when default outbound window configuration
++	 * is set for memory access.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ 	reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE;
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+ 
+-	/* Bypass the address window mapping for PIO */
++	/*
++	 * Set memory access in Default User Field so it
++	 * is not required to configure PCIe address for
++	 * transparent memory access.
++	 */
++	advk_writel(pcie, OB_WIN_TYPE_MEM, OB_WIN_DEFAULT_ACTIONS);
++
++	/*
++	 * Bypass the address window mapping for PIO:
++	 * Since PIO access already contains all required
++	 * info over AXI interface by PIO registers, the
++	 * address window is not required.
++	 */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+ 	reg |= PIO_CTRL_ADDR_WIN_DISABLE;
+ 	advk_writel(pcie, reg, PIO_CTRL);
+ 
++	/*
++	 * Configure PCIe address windows for non-memory or
++	 * non-transparent access as by default PCIe uses
++	 * transparent memory access.
++	 */
++	for (i = 0; i < pcie->wins_count; i++)
++		advk_pcie_set_ob_win(pcie, i,
++				     pcie->wins[i].match, pcie->wins[i].remap,
++				     pcie->wins[i].mask, pcie->wins[i].actions);
++
++	/* Disable remaining PCIe outbound windows */
++	for (i = pcie->wins_count; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	advk_pcie_train_link(pcie);
+ 
+ 	/*
+@@ -472,7 +587,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+-static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
+ {
+ 	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
+@@ -483,14 +598,49 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
+ 		PIO_COMPLETION_STATUS_SHIFT;
+ 
+-	if (!status)
+-		return;
+-
++	/*
++	 * According to the HW spec, the PIO status check sequence is as below:
++	 * 1) even if COMPLETION_STATUS(bit9:7) indicates success,
++	 *    Error Status(bit11) must still be checked; only when that bit
++	 *    indicates no error happened is the operation successful.
++	 * 2) value Unsupported Request(1) of COMPLETION_STATUS(bit9:7) only
++	 *    means a PIO write error, and for PIO read it is successful with
++	 *    a read value of 0xFFFFFFFF.
++	 * 3) value Completion Retry Status(CRS) of COMPLETION_STATUS(bit9:7)
++	 *    only means a PIO write error, and for PIO read it is successful
++	 *    with a read value of 0xFFFF0001.
++	 * 4) value Completer Abort (CA) of COMPLETION_STATUS(bit9:7) means
++	 *    error for both PIO read and PIO write operation.
++	 * 5) other errors are indicated as 'unknown'.
++	 */
+ 	switch (status) {
++	case PIO_COMPLETION_STATUS_OK:
++		if (reg & PIO_ERR_STATUS) {
++			strcomp_status = "COMP_ERR";
++			break;
++		}
++		/* Get the read result */
++		if (val)
++			*val = advk_readl(pcie, PIO_RD_DATA);
++		/* No error */
++		strcomp_status = NULL;
++		break;
+ 	case PIO_COMPLETION_STATUS_UR:
+ 		strcomp_status = "UR";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
++		/* PCIe r4.0, sec 2.3.2, says:
++		 * If CRS Software Visibility is not enabled, the Root Complex
++		 * must re-issue the Configuration Request as a new Request.
++		 * A Root Complex implementation may choose to limit the number
++		 * of Configuration Request/CRS Completion Status loops before
++		 * determining that something is wrong with the target of the
++		 * Request and taking appropriate action, e.g., complete the
++		 * Request to the host as a failed transaction.
++		 *
++		 * To simplify implementation do not re-issue the Configuration
++		 * Request and complete the Request as a failed transaction.
++		 */
+ 		strcomp_status = "CRS";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CA:
+@@ -501,6 +651,9 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 		break;
+ 	}
+ 
++	if (!strcomp_status)
++		return 0;
++
+ 	if (reg & PIO_NON_POSTED_REQ)
+ 		str_posted = "Non-posted";
+ 	else
+@@ -508,6 +661,8 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 
+ 	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
++
++	return -EFAULT;
+ }
+ 
+ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+@@ -745,10 +900,13 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+ 
+-	advk_pcie_check_pio_status(pcie);
++	/* Check PIO status and get the read result */
++	ret = advk_pcie_check_pio_status(pcie, val);
++	if (ret < 0) {
++		*val = 0xffffffff;
++		return PCIBIOS_SET_FAILED;
++	}
+ 
+-	/* Get the read result */
+-	*val = advk_readl(pcie, PIO_RD_DATA);
+ 	if (size == 1)
+ 		*val = (*val >> (8 * (where & 3))) & 0xff;
+ 	else if (size == 2)
+@@ -812,7 +970,9 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	advk_pcie_check_pio_status(pcie);
++	ret = advk_pcie_check_pio_status(pcie, NULL);
++	if (ret < 0)
++		return PCIBIOS_SET_FAILED;
+ 
+ 	return PCIBIOS_SUCCESSFUL;
+ }
+@@ -886,22 +1046,28 @@ static void advk_pcie_irq_mask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static void advk_pcie_irq_unmask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static int advk_pcie_irq_map(struct irq_domain *h,
+@@ -985,6 +1151,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
+ 	struct irq_chip *irq_chip;
+ 	int ret = 0;
+ 
++	raw_spin_lock_init(&pcie->irq_lock);
++
+ 	pcie_intc_node =  of_get_next_child(node, NULL);
+ 	if (!pcie_intc_node) {
+ 		dev_err(dev, "No PCIe Intc node found\n");
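
irq_mask() and irq_unmask() each read-modify-write the same
PCIE_ISR1_MASK_REG, so without the new irq_lock two CPUs can interleave
and drop an update. A minimal pthread model of RMW under a spinlock
(the race is timing-dependent; removing the lock and raising the
thread/iteration counts makes the lost update observable; build with
-pthread):

#include <pthread.h>
#include <stdio.h>

static unsigned mask_reg;		/* stands in for PCIE_ISR1_MASK_REG */
static pthread_spinlock_t irq_lock;

static void *set_bit_n(void *arg)
{
	unsigned bit = 1u << (unsigned)(unsigned long)arg;

	pthread_spin_lock(&irq_lock);
	mask_reg |= bit;		/* read-modify-write */
	pthread_spin_unlock(&irq_lock);
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	pthread_spin_init(&irq_lock, PTHREAD_PROCESS_PRIVATE);
	for (long i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, set_bit_n, (void *)i);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	printf("mask = %#x (expect 0xf)\n", mask_reg);
	return 0;
}
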
+@@ -1162,6 +1330,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct advk_pcie *pcie;
+ 	struct pci_host_bridge *bridge;
++	struct resource_entry *entry;
+ 	int ret, irq;
+ 
+ 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+@@ -1172,6 +1341,80 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	pcie->pdev = pdev;
+ 	platform_set_drvdata(pdev, pcie);
+ 
++	resource_list_for_each_entry(entry, &bridge->windows) {
++		resource_size_t start = entry->res->start;
++		resource_size_t size = resource_size(entry->res);
++		unsigned long type = resource_type(entry->res);
++		u64 win_size;
++
++		/*
++		 * Aardvark hardware also allows configuring a PCIe window
++		 * for config type 0 and type 1 mappings, but the driver
++		 * issues configuration transfers only via PIO, which does
++		 * not use the PCIe window configuration.
++		 */
++		if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
++		    type != IORESOURCE_IO)
++			continue;
++
++		/*
++		 * Skip transparent memory resources. The default outbound
++		 * access configuration is transparent memory access, so such
++		 * resources need no window configuration.
++		 */
++		if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
++		    entry->offset == 0)
++			continue;
++
++		/*
++		 * The n-th PCIe window is configured by the tuple (match,
++		 * remap, mask): an access to address A uses this window if A
++		 * matches the match value under the given mask.
++		 * Hence every PCIe window size must be a power of two and
++		 * every start address must be aligned to the window size. The
++		 * minimum size is 64 KiB, because the lower 16 bits of the
++		 * mask must be zero. The remapped address may only have bits
++		 * set that are covered by the mask.
++		 */
++		while (pcie->wins_count < OB_WIN_COUNT && size > 0) {
++			/* Calculate the largest aligned window size */
++			win_size = (1ULL << (fls64(size)-1)) |
++				   (start ? (1ULL << __ffs64(start)) : 0);
++			win_size = 1ULL << __ffs64(win_size);
++			if (win_size < 0x10000)
++				break;
++
++			dev_dbg(dev,
++				"Configuring PCIe window %d: [0x%llx-0x%llx] as %lu\n",
++				pcie->wins_count, (unsigned long long)start,
++				(unsigned long long)start + win_size, type);
++
++			if (type == IORESOURCE_IO) {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_IO;
++				pcie->wins[pcie->wins_count].match = pci_pio_to_address(start);
++			} else {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_MEM;
++				pcie->wins[pcie->wins_count].match = start;
++			}
++			pcie->wins[pcie->wins_count].remap = start - entry->offset;
++			pcie->wins[pcie->wins_count].mask = ~(win_size - 1);
++
++			if (pcie->wins[pcie->wins_count].remap & (win_size - 1))
++				break;
++
++			start += win_size;
++			size -= win_size;
++			pcie->wins_count++;
++		}
++
++		if (size > 0) {
++			dev_err(&pcie->pdev->dev,
++				"Invalid PCIe region [0x%llx-0x%llx]\n",
++				(unsigned long long)entry->res->start,
++				(unsigned long long)entry->res->end + 1);
++			return -EINVAL;
++		}
++	}
++
+ 	pcie->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(pcie->base))
+ 		return PTR_ERR(pcie->base);
+@@ -1252,6 +1495,7 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ {
+ 	struct advk_pcie *pcie = platform_get_drvdata(pdev);
+ 	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
++	int i;
+ 
+ 	pci_lock_rescan_remove();
+ 	pci_stop_root_bus(bridge->bus);
+@@ -1261,6 +1505,10 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ 	advk_pcie_remove_msi_irq_domain(pcie);
+ 	advk_pcie_remove_irq_domain(pcie);
+ 
++	/* Disable outbound address window mappings */
++	for (i = 0; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	return 0;
+ }
+ 
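
The window-sizing loop above picks, on each iteration, the largest power-of-two window that both fits the remaining size and is aligned to the current start address (fls64() finds the highest set bit, __ffs64() the lowest). A minimal userspace sketch of the same arithmetic, with GCC builtins standing in for the kernel helpers and an example region chosen purely for illustration:

/* Userspace sketch of the aardvark window-size arithmetic above;
 * all names and the example region are illustrative, not the kernel's.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t largest_window(uint64_t start, uint64_t size)
{
	/* Highest power of two <= size (size must be non-zero) */
	uint64_t by_size = 1ULL << (63 - __builtin_clzll(size));
	/* Largest power of two dividing start (its alignment) */
	uint64_t by_align = start ? (1ULL << __builtin_ctzll(start)) : by_size;
	uint64_t win = by_size < by_align ? by_size : by_align;

	return win >= 0x10000 ? win : 0;	/* minimum window is 64 KiB */
}

int main(void)
{
	uint64_t start = 0xe8000000, size = 0x3000000;	/* 48 MiB region */

	while (size) {
		uint64_t win = largest_window(start, size);
		if (!win)
			break;
		printf("window [0x%llx, 0x%llx)\n",
		       (unsigned long long)start,
		       (unsigned long long)(start + win));
		start += win;
		size -= win;
	}
	return 0;
}

Each iteration consumes the largest legal window, so an arbitrarily aligned region decomposes into a short list of windows; the driver bounds that list by OB_WIN_COUNT and rejects regions that need more.
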
+diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
+index 8689311c5ef66..1c3d5b87ef20e 100644
+--- a/drivers/pci/controller/pcie-xilinx-nwl.c
++++ b/drivers/pci/controller/pcie-xilinx-nwl.c
+@@ -6,6 +6,7 @@
+  * (C) Copyright 2014 - 2015, Xilinx, Inc.
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+@@ -169,6 +170,7 @@ struct nwl_pcie {
+ 	u8 last_busno;
+ 	struct nwl_msi msi;
+ 	struct irq_domain *legacy_irq_domain;
++	struct clk *clk;
+ 	raw_spinlock_t leg_mask_lock;
+ };
+ 
+@@ -823,6 +825,16 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	pcie->clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(pcie->clk))
++		return PTR_ERR(pcie->clk);
++
++	err = clk_prepare_enable(pcie->clk);
++	if (err) {
++		dev_err(dev, "can't enable PCIe ref clock\n");
++		return err;
++	}
++
+ 	err = nwl_pcie_bridge_init(pcie);
+ 	if (err) {
+ 		dev_err(dev, "HW Initialization failed\n");
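
The xilinx-nwl hunk above follows the common probe-time clock idiom: acquire the clock with a managed getter, enable it before any register access that depends on it, and propagate errors (including -EPROBE_DEFER from devm_clk_get()) unchanged. A hedged sketch of that shape, with my_probe() as a hypothetical stand-in:

/* Sketch of the probe-time clock pattern above; not a complete driver,
 * and my_probe() is a hypothetical name.
 */
#include <linux/clk.h>
#include <linux/device.h>

static int my_probe(struct device *dev)
{
	struct clk *clk;
	int err;

	clk = devm_clk_get(dev, NULL);	/* managed: no explicit clk_put() */
	if (IS_ERR(clk))
		return PTR_ERR(clk);	/* may be -EPROBE_DEFER */

	err = clk_prepare_enable(clk);	/* prepare + enable in one call */
	if (err) {
		dev_err(dev, "can't enable ref clock\n");
		return err;
	}

	/* ... hardware init that needs the clock running ... */
	return 0;
}

A matching clk_disable_unprepare() belongs on the teardown path; newer kernels also offer devm_clk_get_enabled(), which manages both steps.
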
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 3f353572588df..a5e6759c407b9 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1906,11 +1906,7 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
+ 	 * so that things like MSI message writing will behave as expected
+ 	 * (e.g. if the device really is in D0 at enable time).
+ 	 */
+-	if (dev->pm_cap) {
+-		u16 pmcsr;
+-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+-	}
++	pci_update_current_state(dev, dev->current_state);
+ 
+ 	if (atomic_inc_return(&dev->enable_cnt) > 1)
+ 		return 0;		/* already enabled */
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index e1fed6649c41f..3ee63968deaa5 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -257,8 +257,13 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 		services |= PCIE_PORT_SERVICE_DPC;
+ 
+ 	if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+-	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		services |= PCIE_PORT_SERVICE_BWNOTIF;
++	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++		u32 linkcap;
++
++		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
++		if (linkcap & PCI_EXP_LNKCAP_LBNC)
++			services |= PCIE_PORT_SERVICE_BWNOTIF;
++	}
+ 
+ 	return services;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 7b1c81b899cdf..1905ee0297a4c 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3241,6 +3241,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c
+index 8b003c890b87b..c9f03418e71e0 100644
+--- a/drivers/pci/syscall.c
++++ b/drivers/pci/syscall.c
+@@ -22,8 +22,10 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
+ 	long err;
+ 	int cfg_ret;
+ 
++	err = -EPERM;
++	dev = NULL;
+ 	if (!capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++		goto error;
+ 
+ 	err = -ENODEV;
+ 	dev = pci_get_domain_bus_and_slot(0, bus, dfn);
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 5a68e242f6b34..5cb018f988003 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -167,10 +167,14 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_GPIO("jtag", 20, 5, BIT(0), "jtag"),
+ 	PIN_GRP_GPIO("sdio0", 8, 3, BIT(1), "sdio"),
+ 	PIN_GRP_GPIO("emmc_nb", 27, 9, BIT(2), "emmc"),
+-	PIN_GRP_GPIO("pwm0", 11, 1, BIT(3), "pwm"),
+-	PIN_GRP_GPIO("pwm1", 12, 1, BIT(4), "pwm"),
+-	PIN_GRP_GPIO("pwm2", 13, 1, BIT(5), "pwm"),
+-	PIN_GRP_GPIO("pwm3", 14, 1, BIT(6), "pwm"),
++	PIN_GRP_GPIO_3("pwm0", 11, 1, BIT(3) | BIT(20), 0, BIT(20), BIT(3),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm1", 12, 1, BIT(4) | BIT(21), 0, BIT(21), BIT(4),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm2", 13, 1, BIT(5) | BIT(22), 0, BIT(22), BIT(5),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm3", 14, 1, BIT(6) | BIT(23), 0, BIT(23), BIT(6),
++		       "pwm", "led"),
+ 	PIN_GRP_GPIO("pmic1", 7, 1, BIT(7), "pmic"),
+ 	PIN_GRP_GPIO("pmic0", 6, 1, BIT(8), "pmic"),
+ 	PIN_GRP_GPIO("i2c2", 2, 2, BIT(9), "i2c"),
+@@ -184,10 +188,6 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
+ 		      BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
+ 		      18, 2, "gpio", "uart"),
+-	PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
+-	PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
+-	PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
+-	PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),
+ };
+ 
+ static struct armada_37xx_pin_group armada_37xx_sb_groups[] = {
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 983ba9865f772..263498be8e319 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -710,7 +710,7 @@ static const struct ingenic_chip_info jz4755_chip_info = {
+ };
+ 
+ static const u32 jz4760_pull_ups[6] = {
+-	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0xfffff00f,
++	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0x0000000f,
+ };
+ 
+ static const u32 jz4760_pull_downs[6] = {
+@@ -936,11 +936,11 @@ static const struct ingenic_chip_info jz4760_chip_info = {
+ };
+ 
+ static const u32 jz4770_pull_ups[6] = {
+-	0x3fffffff, 0xfff0030c, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0xffa7f00f,
++	0x3fffffff, 0xfff0f3fc, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0x0024f00f,
+ };
+ 
+ static const u32 jz4770_pull_downs[6] = {
+-	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x00580ff0,
++	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x005b0ff0,
+ };
+ 
+ static int jz4770_uart0_data_pins[] = { 0xa0, 0xa3, };
+@@ -3441,17 +3441,17 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
+ {
+ 	if (jzpc->info->version >= ID_X2000) {
+ 		switch (bias) {
+-		case PIN_CONFIG_BIAS_PULL_UP:
++		case GPIO_PULL_UP:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, true);
+ 			break;
+ 
+-		case PIN_CONFIG_BIAS_PULL_DOWN:
++		case GPIO_PULL_DOWN:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, true);
+ 			break;
+ 
+-		case PIN_CONFIG_BIAS_DISABLE:
++		case GPIO_PULL_DIS:
+ 		default:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, false);
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index e3aa64798f7d3..4fcae8458359c 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1224,6 +1224,7 @@ static int pcs_parse_bits_in_pinctrl_entry(struct pcs_device *pcs,
+ 
+ 	if (PCS_HAS_PINCONF) {
+ 		dev_err(pcs->dev, "pinconf not supported\n");
++		res = -ENOTSUPP;
+ 		goto free_pingroups;
+ 	}
+ 
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index 008c83107a3ca..5fa2488fae87a 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -566,7 +566,7 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	u8 pending[NR_GPIO_REGS];
+ 	u8 src[NR_GPIO_REGS] = {0, 0, 0};
+ 	unsigned long n, status;
+-	int ret;
++	int i, ret;
+ 
+ 	ret = regmap_bulk_read(pctl->stmfx->map, STMFX_REG_IRQ_GPI_PENDING,
+ 			       &pending, NR_GPIO_REGS);
+@@ -576,7 +576,9 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	regmap_bulk_write(pctl->stmfx->map, STMFX_REG_IRQ_GPI_SRC,
+ 			  src, NR_GPIO_REGS);
+ 
+-	status = *(unsigned long *)pending;
++	BUILD_BUG_ON(NR_GPIO_REGS > sizeof(status));
++	for (i = 0, status = 0; i < NR_GPIO_REGS; i++)
++		status |= (unsigned long)pending[i] << (i * 8);
+ 	for_each_set_bit(n, &status, gc->ngpio) {
+ 		handle_nested_irq(irq_find_mapping(gc->irq.domain, n));
+ 		stmfx_pinctrl_irq_toggle_trigger(pctl, n);
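
The stmfx fix replaces *(unsigned long *)pending, which reinterprets the byte array as a native-endian word and reads past it whenever sizeof(long) exceeds NR_GPIO_REGS, with explicit byte-by-byte assembly (plus a BUILD_BUG_ON guarding the size). A small runnable sketch of the portable form:

/* Userspace sketch of the byte-array-to-bitmask fix above. */
#include <stdio.h>

#define NR_REGS 3

int main(void)
{
	unsigned char pending[NR_REGS] = { 0x81, 0x00, 0x40 };
	unsigned long status = 0;
	int i;

	/* Byte i always lands at bits [8*i, 8*i+7], whatever the host
	 * endianness; a pointer cast would not guarantee that, and would
	 * also read beyond the 3-byte array on 64-bit hosts.
	 */
	for (i = 0; i < NR_REGS; i++)
		status |= (unsigned long)pending[i] << (i * 8);

	printf("status = 0x%lx\n", status);	/* prints 0x400081 */
	return 0;
}
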
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 376876bd66058..2975b4369f32f 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -918,7 +918,7 @@ static int samsung_pinctrl_register(struct platform_device *pdev,
+ 		pin_bank->grange.pin_base = drvdata->pin_base
+ 						+ pin_bank->pin_base;
+ 		pin_bank->grange.base = pin_bank->grange.pin_base;
+-		pin_bank->grange.npins = pin_bank->gpio_chip.ngpio;
++		pin_bank->grange.npins = pin_bank->nr_pins;
+ 		pin_bank->grange.gc = &pin_bank->gpio_chip;
+ 		pinctrl_add_gpio_range(drvdata->pctl_dev, &pin_bank->grange);
+ 	}
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index aa7f7aa772971..a7404d69b2d32 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -279,6 +279,15 @@ static int cros_ec_host_command_proto_query(struct cros_ec_device *ec_dev,
+ 	msg->insize = sizeof(struct ec_response_get_protocol_info);
+ 
+ 	ret = send_command(ec_dev, msg);
++	/*
++	 * Send the command again if a timeout occurred.
++	 * The fingerprint MCU (FPMCU) is restarted during system boot, which
++	 * introduces a small window in which the FPMCU won't respond to any
++	 * messages sent by the kernel. There is no need to wait before the
++	 * next attempt because we already waited at least EC_MSG_DEADLINE_MS.
++	 */
++	if (ret == -ETIMEDOUT)
++		ret = send_command(ec_dev, msg);
+ 
+ 	if (ret < 0) {
+ 		dev_dbg(ec_dev->dev,
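
The cros_ec change is a single bounded retry, taken only for -ETIMEDOUT, with no extra delay because the transport has already waited out EC_MSG_DEADLINE_MS. A runnable userspace sketch of that shape, with send_once() as a hypothetical stand-in for send_command():

/* Sketch of the single-retry-on-timeout pattern above; send_once()
 * is a hypothetical transport that times out once, then succeeds.
 */
#include <errno.h>
#include <stdio.h>

static int send_once(void)
{
	static int calls;
	return calls++ ? 0 : -ETIMEDOUT;	/* kernel-style negative errno */
}

static int send_with_one_retry(void)
{
	int ret = send_once();

	/* Retry exactly once, and only for a timeout; other errors are
	 * not transient here, and no sleep is needed because the
	 * transport already waited its full deadline.
	 */
	if (ret == -ETIMEDOUT)
		ret = send_once();
	return ret;
}

int main(void)
{
	printf("ret = %d\n", send_with_one_retry());	/* ret = 0 */
	return 0;
}
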
+diff --git a/drivers/platform/x86/dell/dell-smbios-wmi.c b/drivers/platform/x86/dell/dell-smbios-wmi.c
+index 33f8237727335..8e761991455af 100644
+--- a/drivers/platform/x86/dell/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell/dell-smbios-wmi.c
+@@ -69,6 +69,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 		if (obj->type == ACPI_TYPE_INTEGER)
+ 			dev_dbg(&wdev->dev, "SMBIOS call failed: %llu\n",
+ 				obj->integer.value);
++		kfree(output.pointer);
+ 		return -EIO;
+ 	}
+ 	memcpy(&priv->buf->std, obj->buffer.pointer, obj->buffer.length);
+diff --git a/drivers/platform/x86/intel_speed_select_if/isst_if_common.c b/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
+index 6f0cc679c8e5c..8a4d52a9028d5 100644
+--- a/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel_speed_select_if/isst_if_common.c
+@@ -379,6 +379,8 @@ static int isst_if_cpu_online(unsigned int cpu)
+ 	u64 data;
+ 	int ret;
+ 
++	isst_cpu_info[cpu].numa_node = cpu_to_node(cpu);
++
+ 	ret = rdmsrl_safe(MSR_CPU_BUS_NUMBER, &data);
+ 	if (ret) {
+ 		/* This is not a fatal error on MSR mailbox only I/F */
+@@ -397,7 +399,6 @@ static int isst_if_cpu_online(unsigned int cpu)
+ 		return ret;
+ 	}
+ 	isst_cpu_info[cpu].punit_cpu_id = data;
+-	isst_cpu_info[cpu].numa_node = cpu_to_node(cpu);
+ 
+ 	isst_restore_msr_local(cpu);
+ 
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 215e77d3b6d93..622bdae6182c0 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -869,8 +869,12 @@ static irqreturn_t max17042_thread_handler(int id, void *dev)
+ {
+ 	struct max17042_chip *chip = dev;
+ 	u32 val;
++	int ret;
++
++	ret = regmap_read(chip->regmap, MAX17042_STATUS, &val);
++	if (ret)
++		return IRQ_HANDLED;
+ 
+-	regmap_read(chip->regmap, MAX17042_STATUS, &val);
+ 	if ((val & STATUS_INTR_SOCMIN_BIT) ||
+ 		(val & STATUS_INTR_SOCMAX_BIT)) {
+ 		dev_info(&chip->client->dev, "SOC threshold INTR\n");
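
The max17042 fix makes a small but general point: regmap_read() can fail, and on failure its out-parameter is never written, so using it afterwards reads an uninitialized value. A runnable userspace analogue, with read_reg() standing in for regmap_read():

/* Userspace analogue of the "never use an out-parameter after a
 * failed read" fix above; read_reg() simulates an I/O error.
 */
#include <stdio.h>

static int read_reg(unsigned int reg, unsigned int *val)
{
	(void)reg;
	(void)val;	/* simulated I/O error: *val is never written */
	return -5;
}

int main(void)
{
	unsigned int val;	/* intentionally uninitialized */

	if (read_reg(0x00, &val)) {
		fprintf(stderr, "read failed, not touching val\n");
		return 1;	/* the fixed IRQ handler also returns early */
	}
	printf("val = %#x\n", val);	/* only reached on success */
	return 0;
}
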
+diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c
+index bc89c62ccb9b5..75e4c2d777b9c 100644
+--- a/drivers/rtc/rtc-tps65910.c
++++ b/drivers/rtc/rtc-tps65910.c
+@@ -467,6 +467,6 @@ static struct platform_driver tps65910_rtc_driver = {
+ };
+ 
+ module_platform_driver(tps65910_rtc_driver);
+-MODULE_ALIAS("platform:rtc-tps65910");
++MODULE_ALIAS("platform:tps65910-rtc");
+ MODULE_AUTHOR("Venu Byravarasu <vbyravarasu@nvidia.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index 3052fab00597c..3567912440dc3 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -890,6 +890,33 @@ static void qdio_shutdown_queues(struct qdio_irq *irq_ptr)
+ 	}
+ }
+ 
++static int qdio_cancel_ccw(struct qdio_irq *irq, int how)
++{
++	struct ccw_device *cdev = irq->cdev;
++	int rc;
++
++	spin_lock_irq(get_ccwdev_lock(cdev));
++	qdio_set_state(irq, QDIO_IRQ_STATE_CLEANUP);
++	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
++		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
++	else
++		/* default behaviour is halt */
++		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
++	spin_unlock_irq(get_ccwdev_lock(cdev));
++	if (rc) {
++		DBF_ERROR("%4x SHUTD ERR", irq->schid.sch_no);
++		DBF_ERROR("rc:%4d", rc);
++		return rc;
++	}
++
++	wait_event_interruptible_timeout(cdev->private->wait_q,
++					 irq->state == QDIO_IRQ_STATE_INACTIVE ||
++					 irq->state == QDIO_IRQ_STATE_ERR,
++					 10 * HZ);
++
++	return 0;
++}
++
+ /**
+  * qdio_shutdown - shut down a qdio subchannel
+  * @cdev: associated ccw device
+@@ -927,27 +954,7 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
+ 	qdio_shutdown_queues(irq_ptr);
+ 	qdio_shutdown_debug_entries(irq_ptr);
+ 
+-	/* cleanup subchannel */
+-	spin_lock_irq(get_ccwdev_lock(cdev));
+-	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+-	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
+-		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
+-	else
+-		/* default behaviour is halt */
+-		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
+-	spin_unlock_irq(get_ccwdev_lock(cdev));
+-	if (rc) {
+-		DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
+-		DBF_ERROR("rc:%4d", rc);
+-		goto no_cleanup;
+-	}
+-
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR,
+-		10 * HZ);
+-
+-no_cleanup:
++	rc = qdio_cancel_ccw(irq_ptr, how);
+ 	qdio_shutdown_thinint(irq_ptr);
+ 	qdio_shutdown_irq(irq_ptr);
+ 
+@@ -1083,6 +1090,7 @@ int qdio_establish(struct ccw_device *cdev,
+ {
+ 	struct qdio_irq *irq_ptr = cdev->private->qdio_data;
+ 	struct subchannel_id schid;
++	long timeout;
+ 	int rc;
+ 
+ 	ccw_device_get_schid(cdev, &schid);
+@@ -1111,11 +1119,8 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_setup_irq(irq_ptr, init_data);
+ 
+ 	rc = qdio_establish_thinint(irq_ptr);
+-	if (rc) {
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
+-	}
++	if (rc)
++		goto err_thinint;
+ 
+ 	/* establish q */
+ 	irq_ptr->ccw.cmd_code = irq_ptr->equeue.cmd;
+@@ -1131,15 +1136,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	if (rc) {
+ 		DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
+ 		DBF_ERROR("rc:%4x", rc);
+-		qdio_shutdown_thinint(irq_ptr);
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
++		goto err_ccw_start;
+ 	}
+ 
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	timeout = wait_event_interruptible_timeout(cdev->private->wait_q,
++						   irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
++						   irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	if (timeout <= 0) {
++		rc = (timeout == -ERESTARTSYS) ? -EINTR : -ETIME;
++		goto err_ccw_timeout;
++	}
+ 
+ 	if (irq_ptr->state != QDIO_IRQ_STATE_ESTABLISHED) {
+ 		mutex_unlock(&irq_ptr->setup_mutex);
+@@ -1156,6 +1162,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_print_subchannel_info(irq_ptr);
+ 	qdio_setup_debug_entries(irq_ptr);
+ 	return 0;
++
++err_ccw_timeout:
++	qdio_cancel_ccw(irq_ptr, QDIO_FLAG_CLEANUP_USING_CLEAR);
++err_ccw_start:
++	qdio_shutdown_thinint(irq_ptr);
++err_thinint:
++	qdio_shutdown_irq(irq_ptr);
++	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
++	mutex_unlock(&irq_ptr->setup_mutex);
++	return rc;
+ }
+ EXPORT_SYMBOL_GPL(qdio_establish);
+ 
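
The reworked qdio_establish() error handling above is the standard kernel unwind ladder: each acquired resource gains a label, and a failure jumps to the label that releases everything acquired so far, in reverse order. A minimal runnable sketch of the invariant, with malloc()s standing in for the qdio resources:

/* Userspace sketch of the goto unwind ladder used above. Returns 0
 * with both buffers handed to the caller, or an error with everything
 * already released -- the invariant the ladder maintains.
 */
#include <stdlib.h>

static int setup(void **pa, void **pb)
{
	void *a, *b;

	a = malloc(16);
	if (!a)
		goto err_a;
	b = malloc(16);
	if (!b)
		goto err_b;

	*pa = a;
	*pb = b;
	return 0;

err_b:
	free(a);	/* unwind in reverse order of acquisition */
err_a:
	return -1;
}

int main(void)
{
	void *a, *b;

	if (setup(&a, &b) == 0) {
		free(b);
		free(a);
	}
	return 0;
}
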
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
+index adddcd5899416..0df93d2cd3c36 100644
+--- a/drivers/scsi/BusLogic.c
++++ b/drivers/scsi/BusLogic.c
+@@ -1711,7 +1711,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
+ 	if (adapter->adapter_bus_type != BLOGIC_PCI_BUS) {
+ 		blogic_info("  DMA Channel: None, ", adapter);
+ 		if (adapter->bios_addr > 0)
+-			blogic_info("BIOS Address: 0x%lX, ", adapter,
++			blogic_info("BIOS Address: 0x%X, ", adapter,
+ 					adapter->bios_addr);
+ 		else
+ 			blogic_info("BIOS Address: None, ", adapter);
+@@ -3451,7 +3451,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			if (buf[0] != '\n' || len > 1)
+ 				printk("%sscsi%d: %s", blogic_msglevelmap[msglevel], adapter->host_no, buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	} else {
+ 		if (begin) {
+ 			if (adapter != NULL && adapter->adapter_initd)
+@@ -3459,7 +3459,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			else
+ 				printk("%s%s", blogic_msglevelmap[msglevel], buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	}
+ 	begin = (buf[len - 1] == '\n');
+ }
+diff --git a/drivers/scsi/pcmcia/fdomain_cs.c b/drivers/scsi/pcmcia/fdomain_cs.c
+index e42acf314d068..33df6a9ba9b5f 100644
+--- a/drivers/scsi/pcmcia/fdomain_cs.c
++++ b/drivers/scsi/pcmcia/fdomain_cs.c
+@@ -45,8 +45,10 @@ static int fdomain_probe(struct pcmcia_device *link)
+ 		goto fail_disable;
+ 
+ 	if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE,
+-			    "fdomain_cs"))
++			    "fdomain_cs")) {
++		ret = -EBUSY;
+ 		goto fail_disable;
++	}
+ 
+ 	sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev);
+ 	if (!sh) {
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 85f41abcb56c1..42d0d941dba5c 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3004,7 +3004,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -3016,7 +3016,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 */
+ 	if (!qedf->num_queues) {
+ 		QEDF_ERR(&(qedf->dbg_ctx), "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/*
+@@ -3024,7 +3024,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+ 		goto mem_alloc_failure;
+ 	}
+@@ -3040,8 +3040,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 		   "qedf->global_queues=%p.\n", qedf->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedf_alloc_bdq(qedf);
+-	if (rc) {
++	status = qedf_alloc_bdq(qedf);
++	if (status) {
+ 		QEDF_ERR(&qedf->dbg_ctx, "Unable to allocate bdq.\n");
+ 		goto mem_alloc_failure;
+ 	}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 0b0acb8270719..e6dc0b495a829 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1621,7 +1621,7 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -1632,14 +1632,14 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 	 */
+ 	if (!qedi->num_queues) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Make sure we allocated the PBL that will contain the physical
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedi->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		goto mem_alloc_failure;
+ 	}
+ 
+@@ -1654,13 +1654,13 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 		  "qedi->global_queues=%p.\n", qedi->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedi_alloc_bdq(qedi);
+-	if (rc)
++	status = qedi_alloc_bdq(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate DMA coherent buffers for NVM_ISCSI_CFG */
+-	rc = qedi_alloc_nvm_iscsi_cfg(qedi);
+-	if (rc)
++	status = qedi_alloc_nvm_iscsi_cfg(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate a CQ and an associated PBL for each MSI-X
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 3e5c70a1d969c..a7259733e4709 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -91,8 +91,9 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
+ 	struct qla_hw_data *ha;
+ 	struct qla_qpair *qpair;
+ 
+-	if (!qidx)
+-		qidx++;
++	/* Map admin queue and 1st IO queue to index 0 */
++	if (qidx)
++		qidx--;
+ 
+ 	vha = (struct scsi_qla_host *)lport->private;
+ 	ha = vha->hw;
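
The qla_nvme change flips the queue-index mapping: instead of bumping the admin queue (qidx 0) onto qpair 1, the admin queue and the first I/O queue now share qpair 0. A tiny runnable sketch contrasting the two mappings:

/* Userspace sketch contrasting the old and new qidx mappings above. */
#include <stdio.h>

static unsigned int old_map(unsigned int qidx)
{
	if (!qidx)
		qidx++;		/* admin queue pushed onto qpair 1 */
	return qidx;
}

static unsigned int new_map(unsigned int qidx)
{
	if (qidx)
		qidx--;		/* admin and 1st I/O queue share qpair 0 */
	return qidx;
}

int main(void)
{
	unsigned int q;

	for (q = 0; q < 3; q++)
		printf("qidx %u: old -> %u, new -> %u\n",
		       q, old_map(q), new_map(q));
	return 0;
}
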
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index cedd558f65ebf..37ab71b6a8a78 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/blk-mq-pci.h>
+ #include <linux/refcount.h>
++#include <linux/crash_dump.h>
+ 
+ #include <scsi/scsi_tcq.h>
+ #include <scsi/scsicam.h>
+@@ -2818,6 +2819,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 			return ret;
+ 	}
+ 
++	if (is_kdump_kernel()) {
++		ql2xmqsupport = 0;
++		ql2xallocfwdump = 0;
++	}
++
+ 	/* This may fail but that's ok */
+ 	pci_enable_pcie_error_reporting(pdev);
+ 
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index dcc0b9618a649..8819a407c02b6 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -1322,6 +1322,7 @@ static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info,
+ 				"requested %u bytes, received %u bytes\n",
+ 				raid_map_size,
+ 				get_unaligned_le32(&raid_map->structure_size));
++			rc = -EINVAL;
+ 			goto error;
+ 		}
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index cf46d6f86e0ed..427a2ff7e9da1 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -260,7 +260,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	struct ufs_hba *hba = ufs->hba;
+ 	struct list_head *head = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+-	u32 pclk_rate;
++	unsigned long pclk_rate;
+ 	u32 f_min, f_max;
+ 	u8 div = 0;
+ 	int ret = 0;
+@@ -299,7 +299,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	}
+ 
+ 	if (unlikely(pclk_rate < f_min || pclk_rate > f_max)) {
+-		dev_err(hba->dev, "not available pclk range %d\n", pclk_rate);
++		dev_err(hba->dev, "not available pclk range %lu\n", pclk_rate);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.h b/drivers/scsi/ufs/ufs-exynos.h
+index 67505fe32ebf9..dadf4fd10dd80 100644
+--- a/drivers/scsi/ufs/ufs-exynos.h
++++ b/drivers/scsi/ufs/ufs-exynos.h
+@@ -184,7 +184,7 @@ struct exynos_ufs {
+ 	u32 pclk_div;
+ 	u32 pclk_avail_min;
+ 	u32 pclk_avail_max;
+-	u32 mclk_rate;
++	unsigned long mclk_rate;
+ 	int avail_ln_rx;
+ 	int avail_ln_tx;
+ 	int rx_sel_idx;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 708b3b62fc4d1..15ac5fa148058 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -2766,15 +2766,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 	WARN_ON(ufshcd_is_clkgating_allowed(hba) &&
+ 		(hba->clk_gating.state != CLKS_ON));
+ 
+-	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
+-		if (hba->pm_op_in_progress)
+-			set_host_byte(cmd, DID_BAD_TARGET);
+-		else
+-			err = SCSI_MLQUEUE_HOST_BUSY;
+-		ufshcd_release(hba);
+-		goto out;
+-	}
+-
+ 	lrbp = &hba->lrb[tag];
+ 	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = cmd;
+@@ -2949,11 +2940,11 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 		enum dev_cmd_type cmd_type, int timeout)
+ {
+ 	struct request_queue *q = hba->cmd_queue;
++	DECLARE_COMPLETION_ONSTACK(wait);
+ 	struct request *req;
+ 	struct ufshcd_lrb *lrbp;
+ 	int err;
+ 	int tag;
+-	struct completion wait;
+ 
+ 	down_read(&hba->clk_scaling_lock);
+ 
+@@ -2973,12 +2964,6 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 	req->timeout = msecs_to_jiffies(2 * timeout);
+ 	blk_mq_start_request(req);
+ 
+-	if (unlikely(test_bit(tag, &hba->outstanding_reqs))) {
+-		err = -EBUSY;
+-		goto out;
+-	}
+-
+-	init_completion(&wait);
+ 	lrbp = &hba->lrb[tag];
+ 	WARN_ON(lrbp->cmd);
+ 	err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
+@@ -3419,9 +3404,11 @@ int ufshcd_read_desc_param(struct ufs_hba *hba,
+ 
+ 	if (is_kmalloc) {
+ 		/* Make sure we don't copy more data than available */
+-		if (param_offset + param_size > buff_len)
+-			param_size = buff_len - param_offset;
+-		memcpy(param_read_buf, &desc_buf[param_offset], param_size);
++		if (param_offset >= buff_len)
++			ret = -EINVAL;
++		else
++			memcpy(param_read_buf, &desc_buf[param_offset],
++			       min_t(u32, param_size, buff_len - param_offset));
+ 	}
+ out:
+ 	if (is_kmalloc)
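
The ufshcd descriptor fix rejects an offset at or beyond the end of the buffer and clamps the copy length to what remains, so a device reporting a shorter descriptor than requested can no longer cause an out-of-bounds read. A runnable sketch of the clamp, with a plain ternary in place of min_t():

/* Userspace sketch of the offset/length clamp above. */
#include <stdio.h>
#include <string.h>

static int read_param(const unsigned char *buf, unsigned int buf_len,
		      unsigned int off, unsigned char *out, unsigned int len)
{
	if (off >= buf_len)
		return -1;	/* nothing valid to copy: -EINVAL upstream */

	len = len < buf_len - off ? len : buf_len - off;
	memcpy(out, buf + off, len);
	return (int)len;
}

int main(void)
{
	unsigned char desc[8] = "ABCDEFG", out[16] = { 0 };

	/* Ask for 16 bytes at offset 5 of an 8-byte buffer: only 3 copied. */
	printf("copied %d bytes: %s\n",
	       read_param(desc, sizeof(desc), 5, out, 16), out);
	return 0;
}
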
+@@ -3983,14 +3970,13 @@ EXPORT_SYMBOL_GPL(ufshcd_dme_get_attr);
+  */
+ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ {
+-	struct completion uic_async_done;
++	DECLARE_COMPLETION_ONSTACK(uic_async_done);
+ 	unsigned long flags;
+ 	u8 status;
+ 	int ret;
+ 	bool reenable_intr = false;
+ 
+ 	mutex_lock(&hba->uic_cmd_mutex);
+-	init_completion(&uic_async_done);
+ 	ufshcd_add_delay_before_dme_cmd(hba);
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+@@ -5020,15 +5006,34 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
+ static void ufshcd_slave_destroy(struct scsi_device *sdev)
+ {
+ 	struct ufs_hba *hba;
++	unsigned long flags;
+ 
+ 	hba = shost_priv(sdev->host);
+ 	/* Drop the reference as it won't be needed anymore */
+ 	if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) {
+-		unsigned long flags;
+-
+ 		spin_lock_irqsave(hba->host->host_lock, flags);
+ 		hba->sdev_ufs_device = NULL;
+ 		spin_unlock_irqrestore(hba->host->host_lock, flags);
++	} else if (hba->sdev_ufs_device) {
++		struct device *supplier = NULL;
++
++		/* Ensure UFS Device WLUN exists and does not disappear */
++		spin_lock_irqsave(hba->host->host_lock, flags);
++		if (hba->sdev_ufs_device) {
++			supplier = &hba->sdev_ufs_device->sdev_gendev;
++			get_device(supplier);
++		}
++		spin_unlock_irqrestore(hba->host->host_lock, flags);
++
++		if (supplier) {
++			/*
++			 * If a LUN fails to probe (e.g. absent BOOT WLUN), the
++			 * device will not have been registered but can still
++			 * have a device link holding a reference to the device.
++			 */
++			device_link_remove(&sdev->sdev_gendev, supplier);
++			put_device(supplier);
++		}
+ 	}
+ }
+ 
+@@ -6663,11 +6668,11 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 					enum query_opcode desc_op)
+ {
+ 	struct request_queue *q = hba->cmd_queue;
++	DECLARE_COMPLETION_ONSTACK(wait);
+ 	struct request *req;
+ 	struct ufshcd_lrb *lrbp;
+ 	int err = 0;
+ 	int tag;
+-	struct completion wait;
+ 	u8 upiu_flags;
+ 
+ 	down_read(&hba->clk_scaling_lock);
+@@ -6685,7 +6690,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 		goto out;
+ 	}
+ 
+-	init_completion(&wait);
+ 	lrbp = &hba->lrb[tag];
+ 	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = NULL;
+@@ -6984,8 +6988,8 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ 	struct Scsi_Host *host;
+ 	struct ufs_hba *hba;
+ 	unsigned long flags;
+-	unsigned int tag;
+-	int err = 0;
++	int tag;
++	int err = FAILED;
+ 	struct ufshcd_lrb *lrbp;
+ 	u32 reg;
+ 
+@@ -7002,12 +7006,12 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ 
+ 	ufshcd_hold(hba, false);
+ 	reg = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+-	/* If command is already aborted/completed, return SUCCESS */
++	/* If command is already aborted/completed, return FAILED. */
+ 	if (!(test_bit(tag, &hba->outstanding_reqs))) {
+ 		dev_err(hba->dev,
+ 			"%s: cmd at tag %d already completed, outstanding=0x%lx, doorbell=0x%x\n",
+ 			__func__, tag, hba->outstanding_reqs, reg);
+-		goto out;
++		goto release;
+ 	}
+ 
+ 	/* Print Transfer Request of aborted task */
+@@ -7036,7 +7040,8 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ 		dev_err(hba->dev,
+ 		"%s: cmd was completed, but without a notifying intr, tag = %d",
+ 		__func__, tag);
+-		goto cleanup;
++		__ufshcd_transfer_req_compl(hba, 1UL << tag);
++		goto release;
+ 	}
+ 
+ 	/*
+@@ -7049,36 +7054,33 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ 	 */
+ 	if (lrbp->lun == UFS_UPIU_UFS_DEVICE_WLUN) {
+ 		ufshcd_update_evt_hist(hba, UFS_EVT_ABORT, lrbp->lun);
+-		__ufshcd_transfer_req_compl(hba, (1UL << tag));
+-		set_bit(tag, &hba->outstanding_reqs);
++
+ 		spin_lock_irqsave(host->host_lock, flags);
+ 		hba->force_reset = true;
+ 		ufshcd_schedule_eh_work(hba);
+ 		spin_unlock_irqrestore(host->host_lock, flags);
+-		goto out;
++		goto release;
+ 	}
+ 
+ 	/* Skip task abort in case previous aborts failed and report failure */
+-	if (lrbp->req_abort_skip)
+-		err = -EIO;
+-	else
+-		err = ufshcd_try_to_abort_task(hba, tag);
++	if (lrbp->req_abort_skip) {
++		dev_err(hba->dev, "%s: skipping abort\n", __func__);
++		ufshcd_set_req_abort_skip(hba, hba->outstanding_reqs);
++		goto release;
++	}
+ 
+-	if (!err) {
+-cleanup:
+-		__ufshcd_transfer_req_compl(hba, (1UL << tag));
+-out:
+-		err = SUCCESS;
+-	} else {
++	err = ufshcd_try_to_abort_task(hba, tag);
++	if (err) {
+ 		dev_err(hba->dev, "%s: failed with err %d\n", __func__, err);
+ 		ufshcd_set_req_abort_skip(hba, hba->outstanding_reqs);
+ 		err = FAILED;
++		goto release;
+ 	}
+ 
+-	/*
+-	 * This ufshcd_release() corresponds to the original scsi cmd that got
+-	 * aborted here (as we won't get any IRQ for it).
+-	 */
++	err = SUCCESS;
++
++release:
++	/* Matches the ufshcd_hold() call at the start of this function. */
+ 	ufshcd_release(hba);
+ 	return err;
+ }
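
Several hunks in this file replace a struct completion plus a separate init_completion() call with DECLARE_COMPLETION_ONSTACK(), which declares and initializes the on-stack completion in one statement (and, under lockdep, gives each instance its own lock class). A hedged kernel-style sketch; queue_my_work() is a hypothetical producer that eventually completes 'done':

/* Kernel-style sketch of the completion change above; queue_my_work()
 * is hypothetical, not part of any real API.
 */
#include <linux/completion.h>

void queue_my_work(struct completion *done);	/* hypothetical producer */

static void wait_for_work(void)
{
	DECLARE_COMPLETION_ONSTACK(done);	/* declared and initialized */

	/* The old form needed two steps, and the second was easy to miss:
	 *   struct completion done;
	 *   init_completion(&done);
	 */
	queue_my_work(&done);		/* eventually calls complete(&done) */
	wait_for_completion(&done);
}
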
+diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+index c557ffd0992c7..55e46fa6cf424 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+@@ -51,7 +51,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned long vsize = vma->vm_end - vma->vm_start;
+ 	pgprot_t prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > lpc_ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
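
The old aspeed bounds check mixed units, comparing a page offset plus a byte length against a physical base plus size; the fix compares everything in pages. A runnable sketch of the corrected check, with illustrative PAGE_SHIFT and region-size values:

/* Userspace sketch of the page-unit bounds check above; PAGE_SHIFT
 * and the region size are illustrative values.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define MEM_SIZE	(16UL << PAGE_SHIFT)	/* a 16-page region */

static int mmap_ok(unsigned long pgoff, unsigned long npages)
{
	/* Both sides in pages: offset + length must fit the region. */
	return (pgoff + npages) <= (MEM_SIZE >> PAGE_SHIFT);
}

int main(void)
{
	printf("map 4 pages at page 12: %s\n",
	       mmap_ok(12, 4) ? "ok" : "rejected");	/* ok, exact fit */
	printf("map 8 pages at page 12: %s\n",
	       mmap_ok(12, 8) ? "ok" : "rejected");	/* rejected */
	return 0;
}
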
+diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+index b60fbeaffcbd0..20b5fb2a207cc 100644
+--- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+@@ -110,7 +110,7 @@ static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vsize = vma->vm_end - vma->vm_start;
+ 	prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > ctrl->mem_base + ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
+diff --git a/drivers/soc/mediatek/mtk-mmsys.h b/drivers/soc/mediatek/mtk-mmsys.h
+index 5f3e2bf0c40bc..9e2b81bd38db1 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.h
++++ b/drivers/soc/mediatek/mtk-mmsys.h
+@@ -262,6 +262,10 @@ static const struct mtk_mmsys_routes mmsys_default_routing_table[] = {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
+ 		DSI3_SEL_IN_RDMA2
++	}, {
++		DDP_COMPONENT_UFOE, DDP_COMPONENT_DSI0,
++		DISP_REG_CONFIG_DISP_UFOE_MOUT_EN, UFOE_MOUT_EN_DSI0,
++		UFOE_MOUT_EN_DSI0
+ 	}
+ };
+ 
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index 934fcc4d2b057..7b6b94332510a 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -476,12 +476,12 @@ static int qmp_cooling_device_add(struct qmp *qmp,
+ static int qmp_cooling_devices_register(struct qmp *qmp)
+ {
+ 	struct device_node *np, *child;
+-	int count = QMP_NUM_COOLING_RESOURCES;
++	int count = 0;
+ 	int ret;
+ 
+ 	np = qmp->dev->of_node;
+ 
+-	qmp->cooling_devs = devm_kcalloc(qmp->dev, count,
++	qmp->cooling_devs = devm_kcalloc(qmp->dev, QMP_NUM_COOLING_RESOURCES,
+ 					 sizeof(*qmp->cooling_devs),
+ 					 GFP_KERNEL);
+ 
+@@ -497,12 +497,16 @@ static int qmp_cooling_devices_register(struct qmp *qmp)
+ 			goto unroll;
+ 	}
+ 
++	if (!count)
++		devm_kfree(qmp->dev, qmp->cooling_devs);
++
+ 	return 0;
+ 
+ unroll:
+ 	while (--count >= 0)
+ 		thermal_cooling_device_unregister
+ 			(qmp->cooling_devs[count].cdev);
++	devm_kfree(qmp->dev, qmp->cooling_devs);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index c11e3d8cd308f..f156de765c68c 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -537,12 +537,14 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 
+ 	mutex_lock(sdw->link_res->shim_lock);
+ 
+-	intel_shim_master_ip_to_glue(sdw);
+-
+ 	if (!(*shim_mask & BIT(link_id)))
+ 		dev_err(sdw->cdns.dev,
+ 			"%s: Unbalanced power-up/down calls\n", __func__);
+ 
++	sdw->cdns.link_up = false;
++
++	intel_shim_master_ip_to_glue(sdw);
++
+ 	*shim_mask &= ~BIT(link_id);
+ 
+ 	if (!*shim_mask) {
+@@ -559,18 +561,19 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 		link_control &=  spa_mask;
+ 
+ 		ret = intel_clear_bit(shim, SDW_SHIM_LCTL, link_control, cpa_mask);
++		if (ret < 0) {
++			dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
++
++			/*
++			 * We leave the sdw->cdns.link_up flag false, since the
++			 * link is disabled at this point and interrupts can no
++			 * longer be handled.
++			 */
++		}
+ 	}
+ 
+ 	mutex_unlock(sdw->link_res->shim_lock);
+ 
+-	if (ret < 0) {
+-		dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
+-
+-		return ret;
+-	}
+-
+-	sdw->cdns.link_up = false;
+-	return 0;
++	return ret;
+ }
+ 
+ static void intel_shim_sync_arm(struct sdw_intel *sdw)
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index 87f8829c39952..829770b8ec74c 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -25,16 +25,11 @@
+ 
+ #define SPI_FSI_BASE			0x70000
+ #define SPI_FSI_INIT_TIMEOUT_MS		1000
+-#define SPI_FSI_MAX_XFR_SIZE		2048
+-#define SPI_FSI_MAX_XFR_SIZE_RESTRICTED	8
++#define SPI_FSI_MAX_RX_SIZE		8
++#define SPI_FSI_MAX_TX_SIZE		40
+ 
+ #define SPI_FSI_ERROR			0x0
+ #define SPI_FSI_COUNTER_CFG		0x1
+-#define  SPI_FSI_COUNTER_CFG_LOOPS(x)	 (((u64)(x) & 0xffULL) << 32)
+-#define  SPI_FSI_COUNTER_CFG_N2_RX	 BIT_ULL(8)
+-#define  SPI_FSI_COUNTER_CFG_N2_TX	 BIT_ULL(9)
+-#define  SPI_FSI_COUNTER_CFG_N2_IMPLICIT BIT_ULL(10)
+-#define  SPI_FSI_COUNTER_CFG_N2_RELOAD	 BIT_ULL(11)
+ #define SPI_FSI_CFG1			0x2
+ #define SPI_FSI_CLOCK_CFG		0x3
+ #define  SPI_FSI_CLOCK_CFG_MM_ENABLE	 BIT_ULL(32)
+@@ -76,8 +71,6 @@ struct fsi_spi {
+ 	struct device *dev;	/* SPI controller device */
+ 	struct fsi_device *fsi;	/* FSI2SPI CFAM engine device */
+ 	u32 base;
+-	size_t max_xfr_size;
+-	bool restricted;
+ };
+ 
+ struct fsi_spi_sequence {
+@@ -241,7 +234,7 @@ static int fsi_spi_reset(struct fsi_spi *ctx)
+ 	return fsi_spi_write_reg(ctx, SPI_FSI_STATUS, 0ULL);
+ }
+ 
+-static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
++static void fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+ {
+ 	/*
+ 	 * Add the next byte of instruction to the 8-byte sequence register.
+@@ -251,8 +244,6 @@ static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+ 	 */
+ 	seq->data |= (u64)val << seq->bit;
+ 	seq->bit -= 8;
+-
+-	return ((64 - seq->bit) / 8) - 2;
+ }
+ 
+ static void fsi_spi_sequence_init(struct fsi_spi_sequence *seq)
+@@ -261,71 +252,11 @@ static void fsi_spi_sequence_init(struct fsi_spi_sequence *seq)
+ 	seq->data = 0ULL;
+ }
+ 
+-static int fsi_spi_sequence_transfer(struct fsi_spi *ctx,
+-				     struct fsi_spi_sequence *seq,
+-				     struct spi_transfer *transfer)
+-{
+-	int loops;
+-	int idx;
+-	int rc;
+-	u8 val = 0;
+-	u8 len = min(transfer->len, 8U);
+-	u8 rem = transfer->len % len;
+-
+-	loops = transfer->len / len;
+-
+-	if (transfer->tx_buf) {
+-		val = SPI_FSI_SEQUENCE_SHIFT_OUT(len);
+-		idx = fsi_spi_sequence_add(seq, val);
+-
+-		if (rem)
+-			rem = SPI_FSI_SEQUENCE_SHIFT_OUT(rem);
+-	} else if (transfer->rx_buf) {
+-		val = SPI_FSI_SEQUENCE_SHIFT_IN(len);
+-		idx = fsi_spi_sequence_add(seq, val);
+-
+-		if (rem)
+-			rem = SPI_FSI_SEQUENCE_SHIFT_IN(rem);
+-	} else {
+-		return -EINVAL;
+-	}
+-
+-	if (ctx->restricted && loops > 1) {
+-		dev_warn(ctx->dev,
+-			 "Transfer too large; no branches permitted.\n");
+-		return -EINVAL;
+-	}
+-
+-	if (loops > 1) {
+-		u64 cfg = SPI_FSI_COUNTER_CFG_LOOPS(loops - 1);
+-
+-		fsi_spi_sequence_add(seq, SPI_FSI_SEQUENCE_BRANCH(idx));
+-
+-		if (transfer->rx_buf)
+-			cfg |= SPI_FSI_COUNTER_CFG_N2_RX |
+-				SPI_FSI_COUNTER_CFG_N2_TX |
+-				SPI_FSI_COUNTER_CFG_N2_IMPLICIT |
+-				SPI_FSI_COUNTER_CFG_N2_RELOAD;
+-
+-		rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, cfg);
+-		if (rc)
+-			return rc;
+-	} else {
+-		fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
+-	}
+-
+-	if (rem)
+-		fsi_spi_sequence_add(seq, rem);
+-
+-	return 0;
+-}
+-
+ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 				 struct spi_transfer *transfer)
+ {
+ 	int rc = 0;
+ 	u64 status = 0ULL;
+-	u64 cfg = 0ULL;
+ 
+ 	if (transfer->tx_buf) {
+ 		int nb;
+@@ -363,16 +294,6 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 		u64 in = 0ULL;
+ 		u8 *rx = transfer->rx_buf;
+ 
+-		rc = fsi_spi_read_reg(ctx, SPI_FSI_COUNTER_CFG, &cfg);
+-		if (rc)
+-			return rc;
+-
+-		if (cfg & SPI_FSI_COUNTER_CFG_N2_IMPLICIT) {
+-			rc = fsi_spi_write_reg(ctx, SPI_FSI_DATA_TX, 0);
+-			if (rc)
+-				return rc;
+-		}
+-
+ 		while (transfer->len > recv) {
+ 			do {
+ 				rc = fsi_spi_read_reg(ctx, SPI_FSI_STATUS,
+@@ -439,6 +360,10 @@ static int fsi_spi_transfer_init(struct fsi_spi *ctx)
+ 		}
+ 	} while (seq_state && (seq_state != SPI_FSI_STATUS_SEQ_STATE_IDLE));
+ 
++	rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
++	if (rc)
++		return rc;
++
+ 	rc = fsi_spi_read_reg(ctx, SPI_FSI_CLOCK_CFG, &clock_cfg);
+ 	if (rc)
+ 		return rc;
+@@ -459,6 +384,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ {
+ 	int rc;
+ 	u8 seq_slave = SPI_FSI_SEQUENCE_SEL_SLAVE(mesg->spi->chip_select + 1);
++	unsigned int len;
+ 	struct spi_transfer *transfer;
+ 	struct fsi_spi *ctx = spi_controller_get_devdata(ctlr);
+ 
+@@ -471,8 +397,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 		struct spi_transfer *next = NULL;
+ 
+ 		/* Sequencer must do shift out (tx) first. */
+-		if (!transfer->tx_buf ||
+-		    transfer->len > (ctx->max_xfr_size + 8)) {
++		if (!transfer->tx_buf || transfer->len > SPI_FSI_MAX_TX_SIZE) {
+ 			rc = -EINVAL;
+ 			goto error;
+ 		}
+@@ -486,9 +411,13 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 		fsi_spi_sequence_init(&seq);
+ 		fsi_spi_sequence_add(&seq, seq_slave);
+ 
+-		rc = fsi_spi_sequence_transfer(ctx, &seq, transfer);
+-		if (rc)
+-			goto error;
++		len = transfer->len;
++		while (len > 8) {
++			fsi_spi_sequence_add(&seq,
++					     SPI_FSI_SEQUENCE_SHIFT_OUT(8));
++			len -= 8;
++		}
++		fsi_spi_sequence_add(&seq, SPI_FSI_SEQUENCE_SHIFT_OUT(len));
+ 
+ 		if (!list_is_last(&transfer->transfer_list,
+ 				  &mesg->transfers)) {
+@@ -496,7 +425,9 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 
+ 			/* Sequencer can only do shift in (rx) after tx. */
+ 			if (next->rx_buf) {
+-				if (next->len > ctx->max_xfr_size) {
++				u8 shift;
++
++				if (next->len > SPI_FSI_MAX_RX_SIZE) {
+ 					rc = -EINVAL;
+ 					goto error;
+ 				}
+@@ -504,10 +435,8 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 				dev_dbg(ctx->dev, "Sequence rx of %d bytes.\n",
+ 					next->len);
+ 
+-				rc = fsi_spi_sequence_transfer(ctx, &seq,
+-							       next);
+-				if (rc)
+-					goto error;
++				shift = SPI_FSI_SEQUENCE_SHIFT_IN(next->len);
++				fsi_spi_sequence_add(&seq, shift);
+ 			} else {
+ 				next = NULL;
+ 			}
+@@ -541,9 +470,7 @@ error:
+ 
+ static size_t fsi_spi_max_transfer_size(struct spi_device *spi)
+ {
+-	struct fsi_spi *ctx = spi_controller_get_devdata(spi->controller);
+-
+-	return ctx->max_xfr_size;
++	return SPI_FSI_MAX_RX_SIZE;
+ }
+ 
+ static int fsi_spi_probe(struct device *dev)
+@@ -582,14 +509,6 @@ static int fsi_spi_probe(struct device *dev)
+ 		ctx->fsi = fsi;
+ 		ctx->base = base + SPI_FSI_BASE;
+ 
+-		if (of_device_is_compatible(np, "ibm,fsi2spi-restricted")) {
+-			ctx->restricted = true;
+-			ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE_RESTRICTED;
+-		} else {
+-			ctx->restricted = false;
+-			ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE;
+-		}
+-
+ 		rc = devm_spi_register_controller(dev, ctlr);
+ 		if (rc)
+ 			spi_controller_put(ctlr);
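
With the looping sequencer support removed, a TX transfer in spi-fsi is now unrolled into explicit 8-byte shift-out instructions plus one final instruction for the remainder, which is why transfers are capped at SPI_FSI_MAX_TX_SIZE. A runnable sketch of the unrolling; the SHIFT_OUT() encoding here is illustrative, not the hardware's:

/* Userspace sketch of the shift-out unrolling above; SHIFT_OUT(n) is
 * a stand-in for SPI_FSI_SEQUENCE_SHIFT_OUT() with a made-up encoding.
 */
#include <stdio.h>

#define SHIFT_OUT(n)	(0x30 | (n))	/* illustrative encoding only */

int main(void)
{
	unsigned int len = 20;	/* a 20-byte TX transfer */

	while (len > 8) {
		printf("op 0x%02x: shift out 8 bytes\n", SHIFT_OUT(8));
		len -= 8;
	}
	printf("op 0x%02x: shift out %u bytes\n", SHIFT_OUT(len), len);
	return 0;
}
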
+diff --git a/drivers/staging/board/board.c b/drivers/staging/board/board.c
+index cb6feb34dd401..f980af0373452 100644
+--- a/drivers/staging/board/board.c
++++ b/drivers/staging/board/board.c
+@@ -136,6 +136,7 @@ int __init board_staging_register_clock(const struct board_staging_clk *bsc)
+ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 					const char *domain)
+ {
++	struct device *dev = &pdev->dev;
+ 	struct of_phandle_args pd_args;
+ 	struct device_node *np;
+ 
+@@ -148,7 +149,11 @@ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 	pd_args.np = np;
+ 	pd_args.args_count = 0;
+ 
+-	return of_genpd_add_device(&pd_args, &pdev->dev);
++	/* Initialization similar to device_pm_init_common() */
++	spin_lock_init(&dev->power.lock);
++	dev->power.early_init = true;
++
++	return of_genpd_add_device(&pd_args, dev);
+ }
+ #else
+ static inline int board_staging_add_dev_domain(struct platform_device *pdev,
+diff --git a/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml b/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
+index 8e355cddd437d..6c348578e4a24 100644
+--- a/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
++++ b/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
+@@ -41,6 +41,8 @@ properties:
+   regulators:
+     type: object
+ 
++    additionalProperties: false
++
+     properties:
+       '#address-cells':
+         const: 1
+@@ -49,11 +51,13 @@ properties:
+         const: 0
+ 
+     patternProperties:
+-      '^ldo[0-9]+@[0-9a-f]$':
++      '^(ldo|LDO)[0-9]+$':
+         type: object
+ 
+         $ref: "/schemas/regulator/regulator.yaml#"
+ 
++        unevaluatedProperties: false
++
+ required:
+   - compatible
+   - reg
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index cbc0032c16045..98d759e7cc957 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -939,9 +939,9 @@ static void ks7010_private_init(struct ks_wlan_private *priv,
+ 	memset(&priv->wstats, 0, sizeof(priv->wstats));
+ 
+ 	/* sleep mode */
++	atomic_set(&priv->sleepstatus.status, 0);
+ 	atomic_set(&priv->sleepstatus.doze_request, 0);
+ 	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+-	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+ 
+ 	trx_device_init(priv);
+ 	hostif_init(priv);
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+index 948769ca6539d..1e324f1f656e5 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+@@ -1763,7 +1763,8 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (err < 0)
+ 		goto register_entities_fail;
+ 	/* init atomisp wdts */
+-	if (init_atomisp_wdts(isp) != 0)
++	err = init_atomisp_wdts(isp);
++	if (err != 0)
+ 		goto wdt_work_queue_fail;
+ 
+ 	/* save the iunit context only once after all the values are init'ed. */
+@@ -1815,6 +1816,7 @@ request_irq_fail:
+ 	hmm_cleanup();
+ 	hmm_pool_unregister(HMM_POOL_TYPE_RESERVED);
+ hmm_pool_fail:
++	pm_runtime_get_noresume(&pdev->dev);
+ 	destroy_workqueue(isp->wdt_work_queue);
+ wdt_work_queue_fail:
+ 	atomisp_acc_cleanup(isp);
+diff --git a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+index 96622a7f8279e..2afd5996d75f8 100644
+--- a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
++++ b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+@@ -376,12 +376,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(0));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN)
+@@ -389,7 +394,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(4));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/media/hantro/rockchip_vpu2_hw_vp8_dec.c b/drivers/staging/media/hantro/rockchip_vpu2_hw_vp8_dec.c
+index 951b55f58a612..704607511b57f 100644
+--- a/drivers/staging/media/hantro/rockchip_vpu2_hw_vp8_dec.c
++++ b/drivers/staging/media/hantro/rockchip_vpu2_hw_vp8_dec.c
+@@ -453,12 +453,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF0);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN)
+@@ -466,7 +471,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF2_5(2));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index 894c4de31790e..2882964b85136 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -361,6 +361,7 @@ static void imx7_csi_dma_unsetup_vb2_buf(struct imx7_csi *csi,
+ 
+ 			vb->timestamp = ktime_get_ns();
+ 			vb2_buffer_done(vb, return_status);
++			csi->active_vb2_buf[i] = NULL;
+ 		}
+ 	}
+ }
+@@ -386,9 +387,10 @@ static int imx7_csi_dma_setup(struct imx7_csi *csi)
+ 	return 0;
+ }
+ 
+-static void imx7_csi_dma_cleanup(struct imx7_csi *csi)
++static void imx7_csi_dma_cleanup(struct imx7_csi *csi,
++				 enum vb2_buffer_state return_status)
+ {
+-	imx7_csi_dma_unsetup_vb2_buf(csi, VB2_BUF_STATE_ERROR);
++	imx7_csi_dma_unsetup_vb2_buf(csi, return_status);
+ 	imx_media_free_dma_buf(csi->dev, &csi->underrun_buf);
+ }
+ 
+@@ -537,9 +539,10 @@ static int imx7_csi_init(struct imx7_csi *csi)
+ 	return 0;
+ }
+ 
+-static void imx7_csi_deinit(struct imx7_csi *csi)
++static void imx7_csi_deinit(struct imx7_csi *csi,
++			    enum vb2_buffer_state return_status)
+ {
+-	imx7_csi_dma_cleanup(csi);
++	imx7_csi_dma_cleanup(csi, return_status);
+ 	imx7_csi_init_default(csi);
+ 	imx7_csi_dmareq_rff_disable(csi);
+ 	clk_disable_unprepare(csi->mclk);
+@@ -702,7 +705,7 @@ static int imx7_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 		ret = v4l2_subdev_call(csi->src_sd, video, s_stream, 1);
+ 		if (ret < 0) {
+-			imx7_csi_deinit(csi);
++			imx7_csi_deinit(csi, VB2_BUF_STATE_QUEUED);
+ 			goto out_unlock;
+ 		}
+ 
+@@ -712,7 +715,7 @@ static int imx7_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 		v4l2_subdev_call(csi->src_sd, video, s_stream, 0);
+ 
+-		imx7_csi_deinit(csi);
++		imx7_csi_deinit(csi, VB2_BUF_STATE_ERROR);
+ 	}
+ 
+ 	csi->is_streaming = !!enable;
+diff --git a/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c b/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
+index bb7941aee0c47..fcf31f6d4b36f 100644
+--- a/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
++++ b/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
+@@ -463,7 +463,7 @@ static void PHY_StoreTxPowerByRateNew(
+ 	if (RfPath > ODM_RF_PATH_D)
+ 		return;
+ 
+-	if (TxNum > ODM_RF_PATH_D)
++	if (TxNum > RF_MAX_TX_NUM)
+ 		return;
+ 
+ 	for (i = 0; i < rateNum; ++i) {
+diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c
+index 1deb74112ad43..11d9d9155eef2 100644
+--- a/drivers/staging/rts5208/rtsx_scsi.c
++++ b/drivers/staging/rts5208/rtsx_scsi.c
+@@ -2802,10 +2802,10 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	if (dev_info_id == 0x15) {
+-		buf_len = 0x3A;
++		buf_len = 0x3C;
+ 		data_len = 0x3A;
+ 	} else {
+-		buf_len = 0x6A;
++		buf_len = 0x6C;
+ 		data_len = 0x6A;
+ 	}
+ 
+@@ -2855,11 +2855,7 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	rtsx_stor_set_xfer_buf(buf, buf_len, srb);
+-
+-	if (dev_info_id == 0x15)
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x3C);
+-	else
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x6C);
++	scsi_set_resid(srb, scsi_bufflen(srb) - buf_len);
+ 
+ 	kfree(buf);
+ 	return STATUS_SUCCESS;
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 10d6b228cc941..eec59030c3a73 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2443,7 +2443,7 @@ static void tb_switch_default_link_ports(struct tb_switch *sw)
+ {
+ 	int i;
+ 
+-	for (i = 1; i <= sw->config.max_port_number; i += 2) {
++	for (i = 1; i <= sw->config.max_port_number; i++) {
+ 		struct tb_port *port = &sw->ports[i];
+ 		struct tb_port *subordinate;
+ 
+diff --git a/drivers/tty/hvc/hvsi.c b/drivers/tty/hvc/hvsi.c
+index bfc15279d5bc9..f0bc8e7800512 100644
+--- a/drivers/tty/hvc/hvsi.c
++++ b/drivers/tty/hvc/hvsi.c
+@@ -1038,7 +1038,7 @@ static const struct tty_operations hvsi_ops = {
+ 
+ static int __init hvsi_init(void)
+ {
+-	int i;
++	int i, ret;
+ 
+ 	hvsi_driver = alloc_tty_driver(hvsi_count);
+ 	if (!hvsi_driver)
+@@ -1069,12 +1069,25 @@ static int __init hvsi_init(void)
+ 	}
+ 	hvsi_wait = wait_for_state; /* irqs active now */
+ 
+-	if (tty_register_driver(hvsi_driver))
+-		panic("Couldn't register hvsi console driver\n");
++	ret = tty_register_driver(hvsi_driver);
++	if (ret) {
++		pr_err("Couldn't register hvsi console driver\n");
++		goto err_free_irq;
++	}
+ 
+ 	printk(KERN_DEBUG "HVSI: registered %i devices\n", hvsi_count);
+ 
+ 	return 0;
++err_free_irq:
++	hvsi_wait = poll_for_state;
++	for (i = 0; i < hvsi_count; i++) {
++		struct hvsi_struct *hp = &hvsi_ports[i];
++
++		free_irq(hp->virq, hp);
++	}
++	tty_driver_kref_put(hvsi_driver);
++
++	return ret;
+ }
+ device_initcall(hvsi_init);
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 79418d4beb48f..b6c731a267d26 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -617,7 +617,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	struct uart_port *port = dev_id;
+ 	struct omap8250_priv *priv = port->private_data;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+-	unsigned int iir;
++	unsigned int iir, lsr;
+ 	int ret;
+ 
+ #ifdef CONFIG_SERIAL_8250_DMA
+@@ -628,6 +628,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ #endif
+ 
+ 	serial8250_rpm_get(up);
++	lsr = serial_port_in(port, UART_LSR);
+ 	iir = serial_port_in(port, UART_IIR);
+ 	ret = serial8250_handle_irq(port, iir);
+ 
+@@ -642,6 +643,24 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 		serial_port_in(port, UART_RX);
+ 	}
+ 
++	/* Stop processing interrupts on input overrun */
++	if ((lsr & UART_LSR_OE) && up->overrun_backoff_time_ms > 0) {
++		unsigned long delay;
++
++		up->ier = port->serial_in(port, UART_IER);
++		if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) {
++			port->ops->stop_rx(port);
++		} else {
++			/* Keep restarting the timer until
++			 * the input overrun subsides.
++			 */
++			cancel_delayed_work(&up->overrun_backoff);
++		}
++
++		delay = msecs_to_jiffies(up->overrun_backoff_time_ms);
++		schedule_delayed_work(&up->overrun_backoff, delay);
++	}
++
+ 	serial8250_rpm_put(up);
+ 
+ 	return IRQ_RETVAL(ret);
+@@ -1353,6 +1372,10 @@ static int omap8250_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	if (of_property_read_u32(np, "overrun-throttle-ms",
++				 &up.overrun_backoff_time_ms) != 0)
++		up.overrun_backoff_time_ms = 0;
++
+ 	priv->wakeirq = irq_of_parse_and_map(np, 1);
+ 
+ 	pdata = of_device_get_match_data(&pdev->dev);
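
The overrun-backoff hunk reuses machinery the 8250 core already provides (up->overrun_backoff plus the existing "overrun-throttle-ms" DT property, as honored by 8250_fsl): on an RX overrun the receive interrupt is stopped and a delayed work item re-arms it later. Detached from the 8250 specifics, the throttling pattern looks roughly like this (names hypothetical):

#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct throttled_rx {
	struct delayed_work backoff;	/* its handler re-enables RX */
	unsigned int backoff_ms;	/* 0 = throttling disabled */
};

/* Called from the interrupt path whenever the hardware reports an overrun. */
static void rx_overrun_seen(struct throttled_rx *t)
{
	if (!t->backoff_ms)
		return;

	/*
	 * Push the re-enable point out again; RX stays off while overruns
	 * keep coming.  mod_delayed_work() is the single-call equivalent
	 * of the cancel_delayed_work()/schedule_delayed_work() pair used
	 * in the hunk above.
	 */
	mod_delayed_work(system_wq, &t->backoff,
			 msecs_to_jiffies(t->backoff_ms));
}
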
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index a808c283883e0..726912b16a559 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -87,7 +87,7 @@ static void moan_device(const char *str, struct pci_dev *dev)
+ 
+ static int
+ setup_port(struct serial_private *priv, struct uart_8250_port *port,
+-	   int bar, int offset, int regshift)
++	   u8 bar, unsigned int offset, int regshift)
+ {
+ 	struct pci_dev *dev = priv->dev;
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 1da29a219842b..66374704747ec 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -122,7 +122,8 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16C950/954",
+ 		.fifo_size	= 128,
+ 		.tx_loadsz	= 128,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01,
++		.rxtrig_bytes	= {16, 32, 112, 120},
+ 		/* UART_CAP_EFR breaks billionon CF bluetooth card. */
+ 		.flags		= UART_CAP_FIFO | UART_CAP_SLEEP,
+ 	},
+diff --git a/drivers/tty/serial/jsm/jsm_neo.c b/drivers/tty/serial/jsm/jsm_neo.c
+index bf0e2a4cb0cef..c6f927a76c3be 100644
+--- a/drivers/tty/serial/jsm/jsm_neo.c
++++ b/drivers/tty/serial/jsm/jsm_neo.c
+@@ -815,7 +815,9 @@ static void neo_parse_isr(struct jsm_board *brd, u32 port)
+ 		/* Parse any modem signal changes */
+ 		jsm_dbg(INTR, &ch->ch_bd->pci_dev,
+ 			"MOD_STAT: sending to parse_modem_sigs\n");
++		spin_lock_irqsave(&ch->uart_port.lock, lock_flags);
+ 		neo_parse_modem(ch, readb(&ch->ch_neo_uart->msr));
++		spin_unlock_irqrestore(&ch->uart_port.lock, lock_flags);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c
+index 8e42a7682c63d..d74cbbbf33c62 100644
+--- a/drivers/tty/serial/jsm/jsm_tty.c
++++ b/drivers/tty/serial/jsm/jsm_tty.c
+@@ -187,6 +187,7 @@ static void jsm_tty_break(struct uart_port *port, int break_state)
+ 
+ static int jsm_tty_open(struct uart_port *port)
+ {
++	unsigned long lock_flags;
+ 	struct jsm_board *brd;
+ 	struct jsm_channel *channel =
+ 		container_of(port, struct jsm_channel, uart_port);
+@@ -240,6 +241,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	channel->ch_cached_lsr = 0;
+ 	channel->ch_stops_sent = 0;
+ 
++	spin_lock_irqsave(&port->lock, lock_flags);
+ 	termios = &port->state->port.tty->termios;
+ 	channel->ch_c_cflag	= termios->c_cflag;
+ 	channel->ch_c_iflag	= termios->c_iflag;
+@@ -259,6 +261,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	jsm_carrier(channel);
+ 
+ 	channel->ch_open_count++;
++	spin_unlock_irqrestore(&port->lock, lock_flags);
+ 
+ 	jsm_dbg(OPEN, &channel->ch_bd->pci_dev, "finish\n");
+ 	return 0;
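
Both jsm hunks close the same kind of race: the channel state touched here (modem status, cached termios flags, the open count) is also read and written from the interrupt handler, so it must be updated under uart_port.lock with local interrupts disabled. The general shape:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static unsigned int shared_state;	/* also touched by the ISR */

static void update_from_process_context(unsigned int val)
{
	unsigned long flags;

	/*
	 * The _irqsave variant is mandatory: the interrupt handler takes
	 * the same lock, so holding it with IRQs enabled on this CPU
	 * could deadlock against our own ISR.
	 */
	spin_lock_irqsave(&demo_lock, flags);
	shared_state = val;
	spin_unlock_irqrestore(&demo_lock, flags);
}
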
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index ef11860cd69e5..3df0788ddeb0f 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1271,18 +1271,13 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ 	/* Always ask for fixed clock rate from a property. */
+ 	device_property_read_u32(dev, "clock-frequency", &uartclk);
+ 
+-	s->clk = devm_clk_get_optional(dev, "osc");
++	xtal = device_property_match_string(dev, "clock-names", "osc") < 0;
++	if (xtal)
++		s->clk = devm_clk_get_optional(dev, "xtal");
++	else
++		s->clk = devm_clk_get_optional(dev, "osc");
+ 	if (IS_ERR(s->clk))
+ 		return PTR_ERR(s->clk);
+-	if (s->clk) {
+-		xtal = false;
+-	} else {
+-		s->clk = devm_clk_get_optional(dev, "xtal");
+-		if (IS_ERR(s->clk))
+-			return PTR_ERR(s->clk);
+-
+-		xtal = true;
+-	}
+ 
+ 	ret = clk_prepare_enable(s->clk);
+ 	if (ret)
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 07eb56294371b..89ee43061d3ae 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1758,6 +1758,10 @@ static irqreturn_t sci_br_interrupt(int irq, void *ptr)
+ 
+ 	/* Handle BREAKs */
+ 	sci_handle_breaks(port);
++
++	/* drop invalid character received before break was detected */
++	serial_port_in(port, SCxRDR);
++
+ 	sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port));
+ 
+ 	return IRQ_HANDLED;
+@@ -1837,7 +1841,8 @@ static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
+ 		ret = sci_er_interrupt(irq, ptr);
+ 
+ 	/* Break Interrupt */
+-	if ((ssr_status & SCxSR_BRK(port)) && err_enabled)
++	if (s->irqs[SCIx_ERI_IRQ] != s->irqs[SCIx_BRI_IRQ] &&
++	    (ssr_status & SCxSR_BRK(port)) && err_enabled)
+ 		ret = sci_br_interrupt(irq, ptr);
+ 
+ 	/* Overrun Interrupt */
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 4b0d69042ceb6..bf6efebeb4bd8 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -1171,7 +1171,7 @@ static inline unsigned char getleds(void)
+  *
+  *	Check the status of a keyboard led flag and report it back
+  */
+-int vt_get_leds(int console, int flag)
++int vt_get_leds(unsigned int console, int flag)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	int ret;
+@@ -1193,7 +1193,7 @@ EXPORT_SYMBOL_GPL(vt_get_leds);
+  *	Set the LEDs on a console. This is a wrapper for the VT layer
+  *	so that we can keep kbd knowledge internal
+  */
+-void vt_set_led_state(int console, int leds)
++void vt_set_led_state(unsigned int console, int leds)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	setledstate(kb, leds);
+@@ -1212,7 +1212,7 @@ void vt_set_led_state(int console, int leds)
+  *	don't hold the lock. We probably need to split out an LED lock
+  *	but not during an -rc release!
+  */
+-void vt_kbd_con_start(int console)
++void vt_kbd_con_start(unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	unsigned long flags;
+@@ -1229,7 +1229,7 @@ void vt_kbd_con_start(int console)
+  *	Handle console stop. This is a wrapper for the VT layer
+  *	so that we can keep kbd knowledge internal
+  */
+-void vt_kbd_con_stop(int console)
++void vt_kbd_con_stop(unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	unsigned long flags;
+@@ -1825,7 +1825,7 @@ int vt_do_diacrit(unsigned int cmd, void __user *udp, int perm)
+  *	Update the keyboard mode bits while holding the correct locks.
+  *	Return 0 for success or an error code.
+  */
+-int vt_do_kdskbmode(int console, unsigned int arg)
++int vt_do_kdskbmode(unsigned int console, unsigned int arg)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	int ret = 0;
+@@ -1865,7 +1865,7 @@ int vt_do_kdskbmode(int console, unsigned int arg)
+  *	Update the keyboard meta bits while holding the correct locks.
+  *	Return 0 for success or an error code.
+  */
+-int vt_do_kdskbmeta(int console, unsigned int arg)
++int vt_do_kdskbmeta(unsigned int console, unsigned int arg)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	int ret = 0;
+@@ -2008,7 +2008,7 @@ out:
+ }
+ 
+ int vt_do_kdsk_ioctl(int cmd, struct kbentry __user *user_kbe, int perm,
+-						int console)
++						unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	struct kbentry kbe;
+@@ -2097,7 +2097,7 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 	return ret;
+ }
+ 
+-int vt_do_kdskled(int console, int cmd, unsigned long arg, int perm)
++int vt_do_kdskled(unsigned int console, int cmd, unsigned long arg, int perm)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+         unsigned long flags;
+@@ -2139,7 +2139,7 @@ int vt_do_kdskled(int console, int cmd, unsigned long arg, int perm)
+         return -ENOIOCTLCMD;
+ }
+ 
+-int vt_do_kdgkbmode(int console)
++int vt_do_kdgkbmode(unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	/* This is a spot read so needs no locking */
+@@ -2163,7 +2163,7 @@ int vt_do_kdgkbmode(int console)
+  *
+  *	Report the meta flag status of this console
+  */
+-int vt_do_kdgkbmeta(int console)
++int vt_do_kdgkbmeta(unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+         /* Again a spot read so no locking */
+@@ -2176,7 +2176,7 @@ int vt_do_kdgkbmeta(int console)
+  *
+  *	Restore the unicode console state to its default
+  */
+-void vt_reset_unicode(int console)
++void vt_reset_unicode(unsigned int console)
+ {
+ 	unsigned long flags;
+ 
+@@ -2204,7 +2204,7 @@ int vt_get_shift_state(void)
+  *	Reset the keyboard bits for a console as part of a general console
+  *	reset event
+  */
+-void vt_reset_keyboard(int console)
++void vt_reset_keyboard(unsigned int console)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	unsigned long flags;
+@@ -2234,7 +2234,7 @@ void vt_reset_keyboard(int console)
+  *	caller must be sure that there are no synchronization needs
+  */
+ 
+-int vt_get_kbd_mode_bit(int console, int bit)
++int vt_get_kbd_mode_bit(unsigned int console, int bit)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	return vc_kbd_mode(kb, bit);
+@@ -2249,7 +2249,7 @@ int vt_get_kbd_mode_bit(int console, int bit)
+  *	caller must be sure that there are no synchronization needs
+  */
+ 
+-void vt_set_kbd_mode_bit(int console, int bit)
++void vt_set_kbd_mode_bit(unsigned int console, int bit)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	unsigned long flags;
+@@ -2268,7 +2268,7 @@ void vt_set_kbd_mode_bit(int console, int bit)
+  *	caller must be sure that there are no synchronization needs
+  */
+ 
+-void vt_clr_kbd_mode_bit(int console, int bit)
++void vt_clr_kbd_mode_bit(unsigned int console, int bit)
+ {
+ 	struct kbd_struct *kb = kbd_table + console;
+ 	unsigned long flags;
+diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c
+index e86d13c04bdbe..bdc3885c0d493 100644
+--- a/drivers/usb/chipidea/host.c
++++ b/drivers/usb/chipidea/host.c
+@@ -240,15 +240,18 @@ static int ci_ehci_hub_control(
+ )
+ {
+ 	struct ehci_hcd	*ehci = hcd_to_ehci(hcd);
++	unsigned int	ports = HCS_N_PORTS(ehci->hcs_params);
+ 	u32 __iomem	*status_reg;
+-	u32		temp;
++	u32		temp, port_index;
+ 	unsigned long	flags;
+ 	int		retval = 0;
+ 	bool		done = false;
+ 	struct device *dev = hcd->self.controller;
+ 	struct ci_hdrc *ci = dev_get_drvdata(dev);
+ 
+-	status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1];
++	port_index = wIndex & 0xff;
++	port_index -= (port_index > 0);
++	status_reg = &ehci->regs->port_status[port_index];
+ 
+ 	spin_lock_irqsave(&ehci->lock, flags);
+ 
+@@ -260,6 +263,11 @@ static int ci_ehci_hub_control(
+ 	}
+ 
+ 	if (typeReq == SetPortFeature && wValue == USB_PORT_FEAT_SUSPEND) {
++		if (!wIndex || wIndex > ports) {
++			retval = -EPIPE;
++			goto done;
++		}
++
+ 		temp = ehci_readl(ehci, status_reg);
+ 		if ((temp & PORT_PE) == 0 || (temp & PORT_RESET) != 0) {
+ 			retval = -EPIPE;
+@@ -288,7 +296,7 @@ static int ci_ehci_hub_control(
+ 			ehci_writel(ehci, temp, status_reg);
+ 		}
+ 
+-		set_bit((wIndex & 0xff) - 1, &ehci->suspended_ports);
++		set_bit(port_index, &ehci->suspended_ports);
+ 		goto done;
+ 	}
+ 
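
Two things happen in this chipidea hunk: port_index -= (port_index > 0) maps the 1-based wIndex from the hub request onto a 0-based register array without underflowing when wIndex is 0, and the new range check rejects index 0 and anything beyond HCS_N_PORTS() before the port register is touched. A standalone illustration of the arithmetic:

#include <stdio.h>

int main(void)
{
	unsigned int ports = 4;	/* pretend HCS_N_PORTS() returned 4 */

	for (unsigned int wIndex = 0; wIndex <= 5; wIndex++) {
		unsigned int port_index = wIndex & 0xff;

		/* Branchless: subtract 1 only when nonzero, so 0 stays 0
		 * instead of wrapping to 0xff and indexing out of bounds.
		 */
		port_index -= (port_index > 0);

		if (!wIndex || wIndex > ports)
			printf("wIndex %u -> rejected with -EPIPE\n", wIndex);
		else
			printf("wIndex %u -> port_status[%u]\n",
			       wIndex, port_index);
	}
	return 0;
}
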
+diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c
+index 756faa46d33a7..d328d20abfbc4 100644
+--- a/drivers/usb/dwc3/dwc3-imx8mp.c
++++ b/drivers/usb/dwc3/dwc3-imx8mp.c
+@@ -152,13 +152,6 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 	}
+ 	dwc3_imx->irq = irq;
+ 
+-	err = devm_request_threaded_irq(dev, irq, NULL, dwc3_imx8mp_interrupt,
+-					IRQF_ONESHOT, dev_name(dev), dwc3_imx);
+-	if (err) {
+-		dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err);
+-		goto disable_clks;
+-	}
+-
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+ 	err = pm_runtime_get_sync(dev);
+@@ -186,6 +179,13 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 	}
+ 	of_node_put(dwc3_np);
+ 
++	err = devm_request_threaded_irq(dev, irq, NULL, dwc3_imx8mp_interrupt,
++					IRQF_ONESHOT, dev_name(dev), dwc3_imx);
++	if (err) {
++		dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err);
++		goto depopulate;
++	}
++
+ 	device_set_wakeup_capable(dev, true);
+ 	pm_runtime_put(dev);
+ 
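
The dwc3-imx8mp fix is purely an ordering fix: an interrupt may fire the moment it is requested, and this handler needs the child dwc3 device that of_platform_populate() creates, so the request moves to after the child exists, with an error path that now depopulates. As a generic probe sketch (hypothetical driver and helpers):

#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct demo_priv { void *child_state; };

static irqreturn_t demo_irq(int irq, void *data);	/* uses child_state */
static int demo_setup_children(struct demo_priv *p);
static void demo_teardown_children(struct demo_priv *p);

static int demo_probe(struct platform_device *pdev)
{
	struct demo_priv *priv;
	int irq, err;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	err = demo_setup_children(priv);
	if (err)
		return err;

	/* Last step: the handler can run as soon as this call succeeds. */
	err = devm_request_threaded_irq(&pdev->dev, irq, NULL, demo_irq,
					IRQF_ONESHOT, dev_name(&pdev->dev),
					priv);
	if (err)
		demo_teardown_children(priv);

	return err;
}
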
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 72a9797dbbae0..504c1cbc255d1 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -482,7 +482,7 @@ static u8 encode_bMaxPower(enum usb_device_speed speed,
+ {
+ 	unsigned val;
+ 
+-	if (c->MaxPower)
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
+ 		val = c->MaxPower;
+ 	else
+ 		val = CONFIG_USB_GADGET_VBUS_DRAW;
+@@ -936,7 +936,11 @@ static int set_config(struct usb_composite_dev *cdev,
+ 	}
+ 
+ 	/* when we return, be sure our power usage is valid */
+-	power = c->MaxPower ? c->MaxPower : CONFIG_USB_GADGET_VBUS_DRAW;
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
++		power = c->MaxPower;
++	else
++		power = CONFIG_USB_GADGET_VBUS_DRAW;
++
+ 	if (gadget->speed < USB_SPEED_SUPER)
+ 		power = min(power, 500U);
+ 	else
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index d1d044d9f8594..85a3f6d4b5af3 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -492,8 +492,9 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
+ 	}
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+-	if (skb && !in) {
+-		dev_kfree_skb_any(skb);
++	if (!in) {
++		if (skb)
++			dev_kfree_skb_any(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/usb/host/ehci-mv.c b/drivers/usb/host/ehci-mv.c
+index cffdc8d01b2a8..8fd27249ad257 100644
+--- a/drivers/usb/host/ehci-mv.c
++++ b/drivers/usb/host/ehci-mv.c
+@@ -42,26 +42,25 @@ struct ehci_hcd_mv {
+ 	int (*set_vbus)(unsigned int vbus);
+ };
+ 
+-static void ehci_clock_enable(struct ehci_hcd_mv *ehci_mv)
++static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+ {
+-	clk_prepare_enable(ehci_mv->clk);
+-}
++	int retval;
+ 
+-static void ehci_clock_disable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	clk_disable_unprepare(ehci_mv->clk);
+-}
++	retval = clk_prepare_enable(ehci_mv->clk);
++	if (retval)
++		return retval;
+ 
+-static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	ehci_clock_enable(ehci_mv);
+-	return phy_init(ehci_mv->phy);
++	retval = phy_init(ehci_mv->phy);
++	if (retval)
++		clk_disable_unprepare(ehci_mv->clk);
++
++	return retval;
+ }
+ 
+ static void mv_ehci_disable(struct ehci_hcd_mv *ehci_mv)
+ {
+ 	phy_exit(ehci_mv->phy);
+-	ehci_clock_disable(ehci_mv);
++	clk_disable_unprepare(ehci_mv->clk);
+ }
+ 
+ static int mv_ehci_reset(struct usb_hcd *hcd)
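
The ehci-mv rework exists because clk_prepare_enable() can itself fail, which the old void ehci_clock_enable() wrapper silently ignored. Folding the wrappers away leaves one enable path in which every fallible step unwinds its predecessor. The same shape scales if more steps are added; phy_power_on() below is an illustrative extra step, not part of this driver's sequence:

#include <linux/clk.h>
#include <linux/phy/phy.h>

static int demo_power_on(struct clk *clk, struct phy *phy)
{
	int ret;

	ret = clk_prepare_enable(clk);
	if (ret)
		return ret;

	ret = phy_init(phy);
	if (ret)
		goto err_clk;

	ret = phy_power_on(phy);	/* hypothetical third step */
	if (ret)
		goto err_phy;

	return 0;

err_phy:
	phy_exit(phy);
err_clk:
	clk_disable_unprepare(clk);
	return ret;
}
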
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 05fb8d97cf027..aeb235ce06c1c 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -2510,11 +2510,6 @@ retry_xacterr:
+ 	return count;
+ }
+ 
+-/* high bandwidth multiplier, as encoded in highspeed endpoint descriptors */
+-#define hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))
+-/* ... and packet size, for any kind of endpoint descriptor */
+-#define max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff)
+-
+ /* reverse of qh_urb_transaction:  free a list of TDs.
+  * used for cleanup after errors, before HC sees an URB's TDs.
+  */
+@@ -2600,7 +2595,7 @@ static struct list_head *qh_urb_transaction(struct fotg210_hcd *fotg210,
+ 		token |= (1 /* "in" */ << 8);
+ 	/* else it's already initted to "out" pid (0 << 8) */
+ 
+-	maxpacket = max_packet(usb_maxpacket(urb->dev, urb->pipe, !is_input));
++	maxpacket = usb_maxpacket(urb->dev, urb->pipe, !is_input);
+ 
+ 	/*
+ 	 * buffer gets wrapped in one or more qtds;
+@@ -2714,9 +2709,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 		gfp_t flags)
+ {
+ 	struct fotg210_qh *qh = fotg210_qh_alloc(fotg210, flags);
++	struct usb_host_endpoint *ep;
+ 	u32 info1 = 0, info2 = 0;
+ 	int is_input, type;
+ 	int maxp = 0;
++	int mult;
+ 	struct usb_tt *tt = urb->dev->tt;
+ 	struct fotg210_qh_hw *hw;
+ 
+@@ -2731,14 +2728,15 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 
+ 	is_input = usb_pipein(urb->pipe);
+ 	type = usb_pipetype(urb->pipe);
+-	maxp = usb_maxpacket(urb->dev, urb->pipe, !is_input);
++	ep = usb_pipe_endpoint(urb->dev, urb->pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
++	mult = usb_endpoint_maxp_mult(&ep->desc);
+ 
+ 	/* 1024 byte maxpacket is a hardware ceiling.  High bandwidth
+ 	 * acts like up to 3KB, but is built from smaller packets.
+ 	 */
+-	if (max_packet(maxp) > 1024) {
+-		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n",
+-				max_packet(maxp));
++	if (maxp > 1024) {
++		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n", maxp);
+ 		goto done;
+ 	}
+ 
+@@ -2752,8 +2750,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 	 */
+ 	if (type == PIPE_INTERRUPT) {
+ 		qh->usecs = NS_TO_US(usb_calc_bus_time(USB_SPEED_HIGH,
+-				is_input, 0,
+-				hb_mult(maxp) * max_packet(maxp)));
++				is_input, 0, mult * maxp));
+ 		qh->start = NO_FRAME;
+ 
+ 		if (urb->dev->speed == USB_SPEED_HIGH) {
+@@ -2790,7 +2787,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			think_time = tt ? tt->think_time : 0;
+ 			qh->tt_usecs = NS_TO_US(think_time +
+ 					usb_calc_bus_time(urb->dev->speed,
+-					is_input, 0, max_packet(maxp)));
++					is_input, 0, maxp));
+ 			qh->period = urb->interval;
+ 			if (qh->period > fotg210->periodic_size) {
+ 				qh->period = fotg210->periodic_size;
+@@ -2853,11 +2850,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			 * to help them do so.  So now people expect to use
+ 			 * such nonconformant devices with Linux too; sigh.
+ 			 */
+-			info1 |= max_packet(maxp) << 16;
++			info1 |= maxp << 16;
+ 			info2 |= (FOTG210_TUNE_MULT_HS << 30);
+ 		} else {		/* PIPE_INTERRUPT */
+-			info1 |= max_packet(maxp) << 16;
+-			info2 |= hb_mult(maxp) << 30;
++			info1 |= maxp << 16;
++			info2 |= mult << 30;
+ 		}
+ 		break;
+ 	default:
+@@ -3927,6 +3924,7 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	int is_input;
+ 	long bandwidth;
+ 	unsigned multi;
++	struct usb_host_endpoint *ep;
+ 
+ 	/*
+ 	 * this might be a "high bandwidth" highspeed endpoint,
+@@ -3934,14 +3932,14 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	 */
+ 	epnum = usb_pipeendpoint(pipe);
+ 	is_input = usb_pipein(pipe) ? USB_DIR_IN : 0;
+-	maxp = usb_maxpacket(dev, pipe, !is_input);
++	ep = usb_pipe_endpoint(dev, pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
+ 	if (is_input)
+ 		buf1 = (1 << 11);
+ 	else
+ 		buf1 = 0;
+ 
+-	maxp = max_packet(maxp);
+-	multi = hb_mult(maxp);
++	multi = usb_endpoint_maxp_mult(&ep->desc);
+ 	buf1 |= maxp;
+ 	maxp *= multi;
+ 
+@@ -4462,13 +4460,12 @@ static bool itd_complete(struct fotg210_hcd *fotg210, struct fotg210_itd *itd)
+ 
+ 			/* HC need not update length with this error */
+ 			if (!(t & FOTG210_ISOC_BABBLE)) {
+-				desc->actual_length =
+-					fotg210_itdlen(urb, desc, t);
++				desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 				urb->actual_length += desc->actual_length;
+ 			}
+ 		} else if (likely((t & FOTG210_ISOC_ACTIVE) == 0)) {
+ 			desc->status = 0;
+-			desc->actual_length = fotg210_itdlen(urb, desc, t);
++			desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 			urb->actual_length += desc->actual_length;
+ 		} else {
+ 			/* URB was too late */
+diff --git a/drivers/usb/host/fotg210.h b/drivers/usb/host/fotg210.h
+index 0a91061a0551d..0781442b7a24a 100644
+--- a/drivers/usb/host/fotg210.h
++++ b/drivers/usb/host/fotg210.h
+@@ -683,11 +683,6 @@ static inline unsigned fotg210_read_frame_index(struct fotg210_hcd *fotg210)
+ 	return fotg210_readl(fotg210, &fotg210->regs->frame_index);
+ }
+ 
+-#define fotg210_itdlen(urb, desc, t) ({			\
+-	usb_pipein((urb)->pipe) ?				\
+-	(desc)->length - FOTG210_ITD_LENGTH(t) :			\
+-	FOTG210_ITD_LENGTH(t);					\
+-})
+ /*-------------------------------------------------------------------------*/
+ 
+ #endif /* __LINUX_FOTG210_H */
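
The deleted hb_mult()/max_packet() macros open-coded the USB 2.0 wMaxPacketSize layout: bits 10:0 hold the per-transaction payload size, and bits 12:11 hold the number of additional transactions per microframe for high-bandwidth endpoints. usb_endpoint_maxp() and usb_endpoint_maxp_mult(), which the patch switches to, decode the same fields from the endpoint descriptor. Spelled out numerically:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A high-speed, high-bandwidth isoc endpoint: two transactions per
	 * microframe of 1024 bytes each (mult field 1 -> multiplier 2).
	 */
	uint16_t wMaxPacketSize = (1u << 11) | 1024;

	unsigned int maxp = wMaxPacketSize & 0x7ff;		 /* bits 10:0  */
	unsigned int mult = 1 + ((wMaxPacketSize >> 11) & 0x3); /* bits 12:11 */

	printf("maxp=%u mult=%u -> %u bytes per microframe\n",
	       maxp, mult, maxp * mult);	/* 1024, 2 -> 2048 */
	return 0;
}
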
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 0bb1a6295d64a..f8adf393875f3 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -80,7 +80,7 @@ decode_ep(struct usb_host_endpoint *ep, enum usb_device_speed speed)
+ 		interval /= 1000;
+ 	}
+ 
+-	snprintf(buf, DBG_BUF_EN, "%s ep%d%s %s, mpkt:%d, interval:%d/%d%s\n",
++	snprintf(buf, DBG_BUF_EN, "%s ep%d%s %s, mpkt:%d, interval:%d/%d%s",
+ 		 usb_speed_string(speed), usb_endpoint_num(epd),
+ 		 usb_endpoint_dir_in(epd) ? "in" : "out",
+ 		 usb_ep_type_string(usb_endpoint_type(epd)),
+@@ -129,6 +129,10 @@ get_bw_info(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
+ 	int bw_index;
+ 
+ 	virt_dev = xhci->devs[udev->slot_id];
++	if (!virt_dev->real_port) {
++		WARN_ONCE(1, "%s invalid real_port\n", dev_name(&udev->dev));
++		return NULL;
++	}
+ 
+ 	if (udev->speed >= USB_SPEED_SUPER) {
+ 		if (usb_endpoint_dir_out(&ep->desc))
+@@ -236,14 +240,20 @@ static void drop_tt(struct usb_device *udev)
+ 	}
+ }
+ 
+-static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
+-	struct usb_host_endpoint *ep, struct xhci_ep_ctx *ep_ctx)
++static struct mu3h_sch_ep_info *
++create_sch_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
++	      struct usb_host_endpoint *ep, struct xhci_ep_ctx *ep_ctx)
+ {
+ 	struct mu3h_sch_ep_info *sch_ep;
++	struct mu3h_sch_bw_info *bw_info;
+ 	struct mu3h_sch_tt *tt = NULL;
+ 	u32 len_bw_budget_table;
+ 	size_t mem_size;
+ 
++	bw_info = get_bw_info(mtk, udev, ep);
++	if (!bw_info)
++		return ERR_PTR(-ENODEV);
++
+ 	if (is_fs_or_ls(udev->speed))
+ 		len_bw_budget_table = TT_MICROFRAMES_MAX;
+ 	else if ((udev->speed >= USB_SPEED_SUPER)
+@@ -266,11 +276,13 @@ static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
+ 		}
+ 	}
+ 
++	sch_ep->bw_info = bw_info;
+ 	sch_ep->sch_tt = tt;
+ 	sch_ep->ep = ep;
+ 	sch_ep->speed = udev->speed;
+ 	INIT_LIST_HEAD(&sch_ep->endpoint);
+ 	INIT_LIST_HEAD(&sch_ep->tt_endpoint);
++	INIT_HLIST_NODE(&sch_ep->hentry);
+ 
+ 	return sch_ep;
+ }
+@@ -587,9 +599,9 @@ static u32 get_esit_boundary(struct mu3h_sch_ep_info *sch_ep)
+ 	return boundary;
+ }
+ 
+-static int check_sch_bw(struct mu3h_sch_bw_info *sch_bw,
+-			struct mu3h_sch_ep_info *sch_ep)
++static int check_sch_bw(struct mu3h_sch_ep_info *sch_ep)
+ {
++	struct mu3h_sch_bw_info *sch_bw = sch_ep->bw_info;
+ 	const u32 esit_boundary = get_esit_boundary(sch_ep);
+ 	const u32 bw_boundary = get_bw_boundary(sch_ep->speed);
+ 	u32 offset;
+@@ -635,23 +647,26 @@ static int check_sch_bw(struct mu3h_sch_bw_info *sch_bw,
+ 	return load_ep_bw(sch_bw, sch_ep, true);
+ }
+ 
+-static void destroy_sch_ep(struct usb_device *udev,
+-	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
++static void destroy_sch_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
++			   struct mu3h_sch_ep_info *sch_ep)
+ {
+ 	/* only release ep bw check passed by check_sch_bw() */
+ 	if (sch_ep->allocated)
+-		load_ep_bw(sch_bw, sch_ep, false);
++		load_ep_bw(sch_ep->bw_info, sch_ep, false);
+ 
+ 	if (sch_ep->sch_tt)
+ 		drop_tt(udev);
+ 
+ 	list_del(&sch_ep->endpoint);
++	hlist_del(&sch_ep->hentry);
+ 	kfree(sch_ep);
+ }
+ 
+-static bool need_bw_sch(struct usb_host_endpoint *ep,
+-	enum usb_device_speed speed, int has_tt)
++static bool need_bw_sch(struct usb_device *udev,
++			struct usb_host_endpoint *ep)
+ {
++	bool has_tt = udev->tt && udev->tt->hub->parent;
++
+ 	/* only for periodic endpoints */
+ 	if (usb_endpoint_xfer_control(&ep->desc)
+ 		|| usb_endpoint_xfer_bulk(&ep->desc))
+@@ -662,7 +677,7 @@ static bool need_bw_sch(struct usb_host_endpoint *ep,
+ 	 * a TT are also ignored; the root hub will schedule them directly,
+ 	 * but the @bpkts field of the endpoint context needs to be set to 1.
+ 	 */
+-	if (is_fs_or_ls(speed) && !has_tt)
++	if (is_fs_or_ls(udev->speed) && !has_tt)
+ 		return false;
+ 
+ 	/* skip endpoint with zero maxpkt */
+@@ -677,7 +692,6 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
+ 	struct xhci_hcd *xhci = hcd_to_xhci(mtk->hcd);
+ 	struct mu3h_sch_bw_info *sch_array;
+ 	int num_usb_bus;
+-	int i;
+ 
+ 	/* ss IN and OUT are separated */
+ 	num_usb_bus = xhci->usb3_rhub.num_ports * 2 + xhci->usb2_rhub.num_ports;
+@@ -686,12 +700,10 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
+ 	if (sch_array == NULL)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < num_usb_bus; i++)
+-		INIT_LIST_HEAD(&sch_array[i].bw_ep_list);
+-
+ 	mtk->sch_array = sch_array;
+ 
+ 	INIT_LIST_HEAD(&mtk->bw_ep_chk_list);
++	hash_init(mtk->sch_ep_hash);
+ 
+ 	return 0;
+ }
+@@ -715,9 +727,7 @@ static int add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 	ep_index = xhci_get_endpoint_index(&ep->desc);
+ 	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+ 
+-	xhci_dbg(xhci, "%s %s\n", __func__, decode_ep(ep, udev->speed));
+-
+-	if (!need_bw_sch(ep, udev->speed, !!virt_dev->tt_info)) {
++	if (!need_bw_sch(udev, ep)) {
+ 		/*
+ 		 * set @bpkts to 1 if it is an LS or FS periodic endpoint, and its
+ 		 * device is not connected through an external HS hub
+@@ -729,13 +739,16 @@ static int add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		return 0;
+ 	}
+ 
+-	sch_ep = create_sch_ep(udev, ep, ep_ctx);
++	xhci_dbg(xhci, "%s %s\n", __func__, decode_ep(ep, udev->speed));
++
++	sch_ep = create_sch_ep(mtk, udev, ep, ep_ctx);
+ 	if (IS_ERR_OR_NULL(sch_ep))
+ 		return -ENOMEM;
+ 
+ 	setup_sch_info(ep_ctx, sch_ep);
+ 
+ 	list_add_tail(&sch_ep->endpoint, &mtk->bw_ep_chk_list);
++	hash_add(mtk->sch_ep_hash, &sch_ep->hentry, (unsigned long)ep);
+ 
+ 	return 0;
+ }
+@@ -745,22 +758,18 @@ static void drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ {
+ 	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+-	struct xhci_virt_device *virt_dev;
+-	struct mu3h_sch_bw_info *sch_bw;
+-	struct mu3h_sch_ep_info *sch_ep, *tmp;
+-
+-	virt_dev = xhci->devs[udev->slot_id];
+-
+-	xhci_dbg(xhci, "%s %s\n", __func__, decode_ep(ep, udev->speed));
++	struct mu3h_sch_ep_info *sch_ep;
++	struct hlist_node *hn;
+ 
+-	if (!need_bw_sch(ep, udev->speed, !!virt_dev->tt_info))
++	if (!need_bw_sch(udev, ep))
+ 		return;
+ 
+-	sch_bw = get_bw_info(mtk, udev, ep);
++	xhci_err(xhci, "%s %s\n", __func__, decode_ep(ep, udev->speed));
+ 
+-	list_for_each_entry_safe(sch_ep, tmp, &sch_bw->bw_ep_list, endpoint) {
++	hash_for_each_possible_safe(mtk->sch_ep_hash, sch_ep,
++				    hn, hentry, (unsigned long)ep) {
+ 		if (sch_ep->ep == ep) {
+-			destroy_sch_ep(udev, sch_bw, sch_ep);
++			destroy_sch_ep(mtk, udev, sch_ep);
+ 			break;
+ 		}
+ 	}
+@@ -771,30 +780,22 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ 	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 	struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
+-	struct mu3h_sch_bw_info *sch_bw;
+-	struct mu3h_sch_ep_info *sch_ep, *tmp;
++	struct mu3h_sch_ep_info *sch_ep;
+ 	int ret;
+ 
+ 	xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
+ 
+ 	list_for_each_entry(sch_ep, &mtk->bw_ep_chk_list, endpoint) {
+-		sch_bw = get_bw_info(mtk, udev, sch_ep->ep);
++		struct xhci_ep_ctx *ep_ctx;
++		struct usb_host_endpoint *ep = sch_ep->ep;
++		unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
+ 
+-		ret = check_sch_bw(sch_bw, sch_ep);
++		ret = check_sch_bw(sch_ep);
+ 		if (ret) {
+ 			xhci_err(xhci, "Not enough bandwidth! (%s)\n",
+ 				 sch_error_string(-ret));
+ 			return -ENOSPC;
+ 		}
+-	}
+-
+-	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
+-		struct xhci_ep_ctx *ep_ctx;
+-		struct usb_host_endpoint *ep = sch_ep->ep;
+-		unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
+-
+-		sch_bw = get_bw_info(mtk, udev, ep);
+-		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+ 
+ 		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+ 		ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+@@ -808,22 +809,23 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ 			sch_ep->offset, sch_ep->repeat);
+ 	}
+ 
+-	return xhci_check_bandwidth(hcd, udev);
++	ret = xhci_check_bandwidth(hcd, udev);
++	if (!ret)
++		INIT_LIST_HEAD(&mtk->bw_ep_chk_list);
++
++	return ret;
+ }
+ 
+ void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ 	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+-	struct mu3h_sch_bw_info *sch_bw;
+ 	struct mu3h_sch_ep_info *sch_ep, *tmp;
+ 
+ 	xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
+ 
+-	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
+-		sch_bw = get_bw_info(mtk, udev, sch_ep->ep);
+-		destroy_sch_ep(udev, sch_bw, sch_ep);
+-	}
++	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint)
++		destroy_sch_ep(mtk, udev, sch_ep);
+ 
+ 	xhci_reset_bandwidth(hcd, udev);
+ }
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 2548976bcf05c..cb27569186a0d 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -569,7 +569,7 @@ disable_ldos:
+ 	xhci_mtk_ldos_disable(mtk);
+ 
+ disable_pm:
+-	pm_runtime_put_sync_autosuspend(dev);
++	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	return ret;
+ }
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index ace432356c412..f87d199b08181 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -10,11 +10,15 @@
+ #define _XHCI_MTK_H_
+ 
+ #include <linux/clk.h>
++#include <linux/hashtable.h>
+ 
+ #include "xhci.h"
+ 
+ #define BULK_CLKS_NUM	5
+ 
++/* support at most 64 eps, use a 32-entry hash table */
++#define SCH_EP_HASH_BITS	5
++
+ /**
+  * To simplify the scheduler algorithm, set an upper limit for ESIT;
+  * if a synchronous ep's ESIT is larger than @XHCI_MTK_MAX_ESIT,
+@@ -36,14 +40,12 @@ struct mu3h_sch_tt {
+  * struct mu3h_sch_bw_info: schedule information for bandwidth domain
+  *
+  * @bus_bw: array to keep track of bandwidth already used at each uframes
+- * @bw_ep_list: eps in the bandwidth domain
+  *
+  * treat a HS root port as a bandwidth domain, but treat a SS root port as
+  * two bandwidth domains, one for IN eps and another for OUT eps.
+  */
+ struct mu3h_sch_bw_info {
+ 	u32 bus_bw[XHCI_MTK_MAX_ESIT];
+-	struct list_head bw_ep_list;
+ };
+ 
+ /**
+@@ -53,8 +55,10 @@ struct mu3h_sch_bw_info {
+  * @num_budget_microframes: number of continuous uframes
+  *		(@repeat==1) scheduled within the interval
+  * @bw_cost_per_microframe: bandwidth cost per microframe
++ * @hentry: hash table entry
+  * @endpoint: linked into bandwidth domain which it belongs to
+  * @tt_endpoint: linked into mu3h_sch_tt's list which it belongs to
++ * @bw_info: bandwidth domain which this endpoint belongs to
+  * @sch_tt: mu3h_sch_tt linked into
+  * @ep_type: endpoint type
+  * @maxpkt: max packet size of endpoint
+@@ -82,7 +86,9 @@ struct mu3h_sch_ep_info {
+ 	u32 num_budget_microframes;
+ 	u32 bw_cost_per_microframe;
+ 	struct list_head endpoint;
++	struct hlist_node hentry;
+ 	struct list_head tt_endpoint;
++	struct mu3h_sch_bw_info *bw_info;
+ 	struct mu3h_sch_tt *sch_tt;
+ 	u32 ep_type;
+ 	u32 maxpkt;
+@@ -135,6 +141,7 @@ struct xhci_hcd_mtk {
+ 	struct usb_hcd *hcd;
+ 	struct mu3h_sch_bw_info *sch_array;
+ 	struct list_head bw_ep_chk_list;
++	DECLARE_HASHTABLE(sch_ep_hash, SCH_EP_HASH_BITS);
+ 	struct mu3c_ippc_regs __iomem *ippc_regs;
+ 	int num_u2_ports;
+ 	int num_u3_ports;
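
Replacing the per-domain bw_ep_list walk with a hashtable keyed on the endpoint pointer lets drop_ep_quirk() locate a sch_ep without first resolving its bandwidth domain, which is exactly the step that could trip over an invalid real_port. The <linux/hashtable.h> API in miniature (the driver embeds the table with DECLARE_HASHTABLE() and calls hash_init(); DEFINE_HASHTABLE() below is the self-initializing static-storage form):

#include <linux/hashtable.h>

struct demo_ep {
	unsigned long key;		/* e.g. (unsigned long)ep */
	struct hlist_node hentry;
};

static DEFINE_HASHTABLE(demo_hash, 5);	/* 2^5 = 32 buckets */

static void demo_add(struct demo_ep *e)
{
	hash_add(demo_hash, &e->hentry, e->key);
}

static struct demo_ep *demo_find(unsigned long key)
{
	struct demo_ep *e;

	/* Only one bucket is walked, but keys can collide: re-check. */
	hash_for_each_possible(demo_hash, e, hentry, key)
		if (e->key == key)
			return e;
	return NULL;
}

static void demo_del(struct demo_ep *e)
{
	hash_del(&e->hentry);
}
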
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 3618070eba786..18a203c9011eb 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -4705,19 +4705,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u1_params.sel;
+-
+ 	/* Prevent U1 if service interval is shorter than U1 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
++	else
++		timeout_ns = udev->u1_params.sel;
++
+ 	/* The U1 timeout is encoded in 1us intervals.
+ 	 * Don't return a timeout of zero, because that's USB3_LPM_DISABLED.
+ 	 */
+@@ -4769,19 +4769,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u2_params.sel;
+-
+ 	/* Prevent U2 if service interval is shorter than U2 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
++	else
++		timeout_ns = udev->u2_params.sel;
++
+ 	/* The U2 timeout is encoded in 256us intervals */
+ 	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000);
+ 	/* If the necessary timeout value is bigger than what we can set in the
+diff --git a/drivers/usb/isp1760/isp1760-core.c b/drivers/usb/isp1760/isp1760-core.c
+index ff07e28906922..1f2ca22384b03 100644
+--- a/drivers/usb/isp1760/isp1760-core.c
++++ b/drivers/usb/isp1760/isp1760-core.c
+@@ -30,6 +30,7 @@ static int isp1760_init_core(struct isp1760_device *isp)
+ {
+ 	struct isp1760_hcd *hcd = &isp->hcd;
+ 	struct isp1760_udc *udc = &isp->udc;
++	u32 otg_ctrl;
+ 
+ 	/* Low-level chip reset */
+ 	if (isp->rst_gpio) {
+@@ -83,16 +84,17 @@ static int isp1760_init_core(struct isp1760_device *isp)
+ 	 *
+ 	 * TODO: Really support OTG. For now we configure port 1 in device mode
+ 	 */
+-	if (((isp->devflags & ISP1760_FLAG_ISP1761) ||
+-	     (isp->devflags & ISP1760_FLAG_ISP1763)) &&
+-	    (isp->devflags & ISP1760_FLAG_PERIPHERAL_EN)) {
+-		isp1760_field_set(hcd->fields, HW_DM_PULLDOWN);
+-		isp1760_field_set(hcd->fields, HW_DP_PULLDOWN);
+-		isp1760_field_set(hcd->fields, HW_OTG_DISABLE);
+-	} else {
+-		isp1760_field_set(hcd->fields, HW_SW_SEL_HC_DC);
+-		isp1760_field_set(hcd->fields, HW_VBUS_DRV);
+-		isp1760_field_set(hcd->fields, HW_SEL_CP_EXT);
++	if (isp->devflags & ISP1760_FLAG_ISP1761) {
++		if (isp->devflags & ISP1760_FLAG_PERIPHERAL_EN) {
++			otg_ctrl = (ISP176x_HW_DM_PULLDOWN_CLEAR |
++				    ISP176x_HW_DP_PULLDOWN_CLEAR |
++				    ISP176x_HW_OTG_DISABLE);
++		} else {
++			otg_ctrl = (ISP176x_HW_SW_SEL_HC_DC_CLEAR |
++				    ISP176x_HW_VBUS_DRV |
++				    ISP176x_HW_SEL_CP_EXT);
++		}
++		isp1760_reg_write(hcd->regs, ISP176x_HC_OTG_CTRL, otg_ctrl);
+ 	}
+ 
+ 	dev_info(isp->dev, "%s bus width: %u, oc: %s\n",
+@@ -235,20 +237,20 @@ static const struct reg_field isp1760_hc_reg_fields[] = {
+ 	[HC_ISO_IRQ_MASK_AND]	= REG_FIELD(ISP176x_HC_ISO_IRQ_MASK_AND, 0, 31),
+ 	[HC_INT_IRQ_MASK_AND]	= REG_FIELD(ISP176x_HC_INT_IRQ_MASK_AND, 0, 31),
+ 	[HC_ATL_IRQ_MASK_AND]	= REG_FIELD(ISP176x_HC_ATL_IRQ_MASK_AND, 0, 31),
+-	[HW_OTG_DISABLE]	= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 10, 10),
+-	[HW_SW_SEL_HC_DC]	= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 7, 7),
+-	[HW_VBUS_DRV]		= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 4, 4),
+-	[HW_SEL_CP_EXT]		= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 3, 3),
+-	[HW_DM_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 2, 2),
+-	[HW_DP_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 1, 1),
+-	[HW_DP_PULLUP]		= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 0, 0),
+-	[HW_OTG_DISABLE_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 10, 10),
+-	[HW_SW_SEL_HC_DC_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 7, 7),
+-	[HW_VBUS_DRV_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 4, 4),
+-	[HW_SEL_CP_EXT_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 3, 3),
+-	[HW_DM_PULLDOWN_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 2, 2),
+-	[HW_DP_PULLDOWN_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 1, 1),
+-	[HW_DP_PULLUP_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 0, 0),
++	[HW_OTG_DISABLE_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 26, 26),
++	[HW_SW_SEL_HC_DC_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 23, 23),
++	[HW_VBUS_DRV_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 20, 20),
++	[HW_SEL_CP_EXT_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 19, 19),
++	[HW_DM_PULLDOWN_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 18, 18),
++	[HW_DP_PULLDOWN_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 17, 17),
++	[HW_DP_PULLUP_CLEAR]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 16, 16),
++	[HW_OTG_DISABLE]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 10, 10),
++	[HW_SW_SEL_HC_DC]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 7, 7),
++	[HW_VBUS_DRV]		= REG_FIELD(ISP176x_HC_OTG_CTRL, 4, 4),
++	[HW_SEL_CP_EXT]		= REG_FIELD(ISP176x_HC_OTG_CTRL, 3, 3),
++	[HW_DM_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 2, 2),
++	[HW_DP_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 1, 1),
++	[HW_DP_PULLUP]		= REG_FIELD(ISP176x_HC_OTG_CTRL, 0, 0),
+ };
+ 
+ static const struct reg_field isp1763_hc_reg_fields[] = {
+diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
+index 27168b4a4ef22..e517376c32917 100644
+--- a/drivers/usb/isp1760/isp1760-hcd.c
++++ b/drivers/usb/isp1760/isp1760-hcd.c
+@@ -182,7 +182,7 @@ struct urb_listitem {
+ 	struct urb *urb;
+ };
+ 
+-static const u32 isp1763_hc_portsc1_fields[] = {
++static const u32 isp176x_hc_portsc1_fields[] = {
+ 	[PORT_OWNER]		= BIT(13),
+ 	[PORT_POWER]		= BIT(12),
+ 	[PORT_LSTATUS]		= BIT(10),
+@@ -205,27 +205,28 @@ static u32 isp1760_hcd_read(struct usb_hcd *hcd, u32 field)
+ }
+ 
+ /*
+- * We need, in isp1763, to write directly the values to the portsc1
++ * We need, in isp176x, to write directly the values to the portsc1
+  * register so it will make the other values to trigger.
+  */
+ static void isp1760_hcd_portsc1_set_clear(struct isp1760_hcd *priv, u32 field,
+ 					  u32 val)
+ {
+-	u32 bit = isp1763_hc_portsc1_fields[field];
+-	u32 port_status = readl(priv->base + ISP1763_HC_PORTSC1);
++	u32 bit = isp176x_hc_portsc1_fields[field];
++	u16 portsc1_reg = priv->is_isp1763 ? ISP1763_HC_PORTSC1 :
++		ISP176x_HC_PORTSC1;
++	u32 port_status = readl(priv->base + portsc1_reg);
+ 
+ 	if (val)
+-		writel(port_status | bit, priv->base + ISP1763_HC_PORTSC1);
++		writel(port_status | bit, priv->base + portsc1_reg);
+ 	else
+-		writel(port_status & ~bit, priv->base + ISP1763_HC_PORTSC1);
++		writel(port_status & ~bit, priv->base + portsc1_reg);
+ }
+ 
+ static void isp1760_hcd_write(struct usb_hcd *hcd, u32 field, u32 val)
+ {
+ 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+ 
+-	if (unlikely(priv->is_isp1763 &&
+-		     (field >= PORT_OWNER && field <= PORT_CONNECT)))
++	if (unlikely((field >= PORT_OWNER && field <= PORT_CONNECT)))
+ 		return isp1760_hcd_portsc1_set_clear(priv, field, val);
+ 
+ 	isp1760_field_write(priv->fields, field, val);
+@@ -367,8 +368,7 @@ static void isp1760_mem_read(struct usb_hcd *hcd, u32 src_offset, void *dst,
+ {
+ 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+ 
+-	isp1760_hcd_write(hcd, MEM_BANK_SEL, ISP_BANK_0);
+-	isp1760_hcd_write(hcd, MEM_START_ADDR, src_offset);
++	isp1760_reg_write(priv->regs, ISP176x_HC_MEMORY, src_offset);
+ 	ndelay(100);
+ 
+ 	bank_reads8(priv->base, src_offset, ISP_BANK_0, dst, bytes);
+@@ -496,8 +496,7 @@ static void isp1760_ptd_read(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+ 	u16 src_offset = ptd_offset + slot * sizeof(*ptd);
+ 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+ 
+-	isp1760_hcd_write(hcd, MEM_BANK_SEL, ISP_BANK_0);
+-	isp1760_hcd_write(hcd, MEM_START_ADDR, src_offset);
++	isp1760_reg_write(priv->regs, ISP176x_HC_MEMORY, src_offset);
+ 	ndelay(90);
+ 
+ 	bank_reads8(priv->base, src_offset, ISP_BANK_0, (void *)ptd,
+@@ -588,8 +587,8 @@ static void init_memory(struct isp1760_hcd *priv)
+ 
+ 	payload_addr = PAYLOAD_OFFSET;
+ 
+-	for (i = 0, curr = 0; i < ARRAY_SIZE(mem->blocks); i++) {
+-		for (j = 0; j < mem->blocks[i]; j++, curr++) {
++	for (i = 0, curr = 0; i < ARRAY_SIZE(mem->blocks); i++, curr += j) {
++		for (j = 0; j < mem->blocks[i]; j++) {
+ 			priv->memory_pool[curr + j].start = payload_addr;
+ 			priv->memory_pool[curr + j].size = mem->blocks_size[i];
+ 			priv->memory_pool[curr + j].free = 1;
+@@ -1826,9 +1825,11 @@ static void packetize_urb(struct usb_hcd *hcd,
+ 			goto cleanup;
+ 
+ 		if (len > mem->blocks_size[ISP176x_BLOCK_NUM - 1])
+-			len = mem->blocks_size[ISP176x_BLOCK_NUM - 1];
++			this_qtd_len = mem->blocks_size[ISP176x_BLOCK_NUM - 1];
++		else
++			this_qtd_len = len;
+ 
+-		this_qtd_len = qtd_fill(qtd, buf, len);
++		this_qtd_len = qtd_fill(qtd, buf, this_qtd_len);
+ 		list_add_tail(&qtd->qtd_list, head);
+ 
+ 		len -= this_qtd_len;
+diff --git a/drivers/usb/isp1760/isp1760-regs.h b/drivers/usb/isp1760/isp1760-regs.h
+index 94ea60c20b2a4..3a6751197e970 100644
+--- a/drivers/usb/isp1760/isp1760-regs.h
++++ b/drivers/usb/isp1760/isp1760-regs.h
+@@ -61,6 +61,7 @@
+ #define ISP176x_HC_INT_IRQ_MASK_AND	0x328
+ #define ISP176x_HC_ATL_IRQ_MASK_AND	0x32c
+ 
++#define ISP176x_HC_OTG_CTRL		0x374
+ #define ISP176x_HC_OTG_CTRL_SET		0x374
+ #define ISP176x_HC_OTG_CTRL_CLEAR	0x376
+ 
+@@ -179,6 +180,21 @@ enum isp176x_host_controller_fields {
+ #define ISP176x_DC_IESUSP		BIT(3)
+ #define ISP176x_DC_IEBRST		BIT(0)
+ 
++#define ISP176x_HW_OTG_DISABLE_CLEAR	BIT(26)
++#define ISP176x_HW_SW_SEL_HC_DC_CLEAR	BIT(23)
++#define ISP176x_HW_VBUS_DRV_CLEAR	BIT(20)
++#define ISP176x_HW_SEL_CP_EXT_CLEAR	BIT(19)
++#define ISP176x_HW_DM_PULLDOWN_CLEAR	BIT(18)
++#define ISP176x_HW_DP_PULLDOWN_CLEAR	BIT(17)
++#define ISP176x_HW_DP_PULLUP_CLEAR	BIT(16)
++#define ISP176x_HW_OTG_DISABLE		BIT(10)
++#define ISP176x_HW_SW_SEL_HC_DC		BIT(7)
++#define ISP176x_HW_VBUS_DRV		BIT(4)
++#define ISP176x_HW_SEL_CP_EXT		BIT(3)
++#define ISP176x_HW_DM_PULLDOWN		BIT(2)
++#define ISP176x_HW_DP_PULLDOWN		BIT(1)
++#define ISP176x_HW_DP_PULLUP		BIT(0)
++
+ #define ISP176x_DC_ENDPTYP_ISOC		0x01
+ #define ISP176x_DC_ENDPTYP_BULK		0x02
+ #define ISP176x_DC_ENDPTYP_INTERRUPT	0x03
+diff --git a/drivers/usb/isp1760/isp1760-udc.c b/drivers/usb/isp1760/isp1760-udc.c
+index a78da59d6417b..5cafd23345cad 100644
+--- a/drivers/usb/isp1760/isp1760-udc.c
++++ b/drivers/usb/isp1760/isp1760-udc.c
+@@ -1363,7 +1363,7 @@ static irqreturn_t isp1760_udc_irq(int irq, void *dev)
+ 
+ 	status = isp1760_udc_irq_get_status(udc);
+ 
+-	if (status & DC_IEVBUS) {
++	if (status & ISP176x_DC_IEVBUS) {
+ 		dev_dbg(udc->isp->dev, "%s(VBUS)\n", __func__);
+ 		/* The VBUS interrupt is only triggered when VBUS appears. */
+ 		spin_lock(&udc->lock);
+@@ -1371,7 +1371,7 @@ static irqreturn_t isp1760_udc_irq(int irq, void *dev)
+ 		spin_unlock(&udc->lock);
+ 	}
+ 
+-	if (status & DC_IEBRST) {
++	if (status & ISP176x_DC_IEBRST) {
+ 		dev_dbg(udc->isp->dev, "%s(BRST)\n", __func__);
+ 
+ 		isp1760_udc_reset(udc);
+@@ -1391,18 +1391,18 @@ static irqreturn_t isp1760_udc_irq(int irq, void *dev)
+ 		}
+ 	}
+ 
+-	if (status & DC_IEP0SETUP) {
++	if (status & ISP176x_DC_IEP0SETUP) {
+ 		dev_dbg(udc->isp->dev, "%s(EP0SETUP)\n", __func__);
+ 
+ 		isp1760_ep0_setup(udc);
+ 	}
+ 
+-	if (status & DC_IERESM) {
++	if (status & ISP176x_DC_IERESM) {
+ 		dev_dbg(udc->isp->dev, "%s(RESM)\n", __func__);
+ 		isp1760_udc_resume(udc);
+ 	}
+ 
+-	if (status & DC_IESUSP) {
++	if (status & ISP176x_DC_IESUSP) {
+ 		dev_dbg(udc->isp->dev, "%s(SUSP)\n", __func__);
+ 
+ 		spin_lock(&udc->lock);
+@@ -1413,7 +1413,7 @@ static irqreturn_t isp1760_udc_irq(int irq, void *dev)
+ 		spin_unlock(&udc->lock);
+ 	}
+ 
+-	if (status & DC_IEHS_STA) {
++	if (status & ISP176x_DC_IEHS_STA) {
+ 		dev_dbg(udc->isp->dev, "%s(HS_STA)\n", __func__);
+ 		udc->gadget.speed = USB_SPEED_HIGH;
+ 	}
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index 5892f3ce0cdc8..ce9fc46c92661 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -890,23 +890,22 @@ static int dsps_probe(struct platform_device *pdev)
+ 	if (!glue->usbss_base)
+ 		return -ENXIO;
+ 
+-	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
+-		ret = dsps_setup_optional_vbus_irq(pdev, glue);
+-		if (ret)
+-			goto err_iounmap;
+-	}
+-
+ 	platform_set_drvdata(pdev, glue);
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = dsps_create_musb_pdev(glue, pdev);
+ 	if (ret)
+ 		goto err;
+ 
++	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
++		ret = dsps_setup_optional_vbus_irq(pdev, glue);
++		if (ret)
++			goto err;
++	}
++
+ 	return 0;
+ 
+ err:
+ 	pm_runtime_disable(&pdev->dev);
+-err_iounmap:
+ 	iounmap(glue->usbss_base);
+ 	return ret;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 4ba6bcdaa8e9d..b07b2925ff78b 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -455,8 +455,14 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			vhci_hcd->port_status[rhport] &= ~(1 << USB_PORT_FEAT_RESET);
+ 			vhci_hcd->re_timeout = 0;
+ 
++			/*
++			 * A few drivers perform a USB reset during probe,
++			 * while the device can still be in the VDEV_ST_USED state.
++			 */
+ 			if (vhci_hcd->vdev[rhport].ud.status ==
+-			    VDEV_ST_NOTASSIGNED) {
++				VDEV_ST_NOTASSIGNED ||
++			    vhci_hcd->vdev[rhport].ud.status ==
++				VDEV_ST_USED) {
+ 				usbip_dbg_vhci_rh(
+ 					" enable rhport %d (status %u)\n",
+ 					rhport,
+@@ -957,8 +963,32 @@ static void vhci_device_unlink_cleanup(struct vhci_device *vdev)
+ 	spin_lock(&vdev->priv_lock);
+ 
+ 	list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) {
++		struct urb *urb;
++
++		/* give back urb of unsent unlink request */
+ 		pr_info("unlink cleanup tx %lu\n", unlink->unlink_seqnum);
++
++		urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum);
++		if (!urb) {
++			list_del(&unlink->list);
++			kfree(unlink);
++			continue;
++		}
++
++		urb->status = -ENODEV;
++
++		usb_hcd_unlink_urb_from_ep(hcd, urb);
++
+ 		list_del(&unlink->list);
++
++		spin_unlock(&vdev->priv_lock);
++		spin_unlock_irqrestore(&vhci->lock, flags);
++
++		usb_hcd_giveback_urb(hcd, urb, urb->status);
++
++		spin_lock_irqsave(&vhci->lock, flags);
++		spin_lock(&vdev->priv_lock);
++
+ 		kfree(unlink);
+ 	}
+ 
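
The vhci change gives back URBs whose unlink requests were never sent, and it must drop both locks around usb_hcd_giveback_urb() because the URB's completion handler may take other locks or re-enter the HCD. A stripped-down sketch of the idiom (hypothetical types; like vhci here, it assumes nothing else frees the list cursor while the lock is dropped):

#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_req {
	struct list_head node;
};

static void demo_complete(struct demo_req *req);	/* may re-enter us */

static void demo_flush(spinlock_t *lock, struct list_head *pending)
{
	struct demo_req *req, *tmp;
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	list_for_each_entry_safe(req, tmp, pending, node) {
		/* Detach while locked so no other path can reach it... */
		list_del(&req->node);

		/* ...then invoke the callback with the lock dropped. */
		spin_unlock_irqrestore(lock, flags);
		demo_complete(req);
		spin_lock_irqsave(lock, flags);
	}
	spin_unlock_irqrestore(lock, flags);
}
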
+diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
+index 67d0bf4efa160..e44bf736e2b22 100644
+--- a/drivers/vfio/Kconfig
++++ b/drivers/vfio/Kconfig
+@@ -29,7 +29,7 @@ menuconfig VFIO
+ 
+ 	  If you don't know what to do here, say N.
+ 
+-menuconfig VFIO_NOIOMMU
++config VFIO_NOIOMMU
+ 	bool "VFIO No-IOMMU support"
+ 	depends on VFIO
+ 	help
+diff --git a/drivers/video/fbdev/asiliantfb.c b/drivers/video/fbdev/asiliantfb.c
+index 3e006da477523..84c56f525889f 100644
+--- a/drivers/video/fbdev/asiliantfb.c
++++ b/drivers/video/fbdev/asiliantfb.c
+@@ -227,6 +227,9 @@ static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ {
+ 	unsigned long Ftarget, ratio, remainder;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	ratio = 1000000 / var->pixclock;
+ 	remainder = 1000000 % var->pixclock;
+ 	Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock;
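
This hunk, like the kyrofb and rivafb hunks below, hardens check_var() against a zero pixclock: the value arrives unvalidated from userspace via FBIOPUT_VSCREENINFO and feeds straight into divisions. The failure mode and the guard in isolation:

#include <stdio.h>

/* fb pixclock is picoseconds per pixel; converting to kHz divides by it. */
static long pixclock_to_khz(unsigned int pixclock)
{
	if (!pixclock)		/* the added guard: -EINVAL, not a div-by-zero */
		return -1;
	return 1000000000UL / pixclock;
}

int main(void)
{
	printf("%ld\n", pixclock_to_khz(39722));	/* ~25175 kHz, 640x480@60 */
	printf("%ld\n", pixclock_to_khz(0));		/* -1 instead of a crash */
	return 0;
}
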
+diff --git a/drivers/video/fbdev/kyro/fbdev.c b/drivers/video/fbdev/kyro/fbdev.c
+index 8fbde92ae8b9c..25801e8e3f74a 100644
+--- a/drivers/video/fbdev/kyro/fbdev.c
++++ b/drivers/video/fbdev/kyro/fbdev.c
+@@ -372,6 +372,11 @@ static int kyro_dev_overlay_viewport_set(u32 x, u32 y, u32 ulWidth, u32 ulHeight
+ 		/* probably haven't called CreateOverlay yet */
+ 		return -EINVAL;
+ 
++	if (ulWidth == 0 || ulWidth == 0xffffffff ||
++	    ulHeight == 0 || ulHeight == 0xffffffff ||
++	    (x < 2 && ulWidth + 2 == 0))
++		return -EINVAL;
++
+ 	/* Stop Ramdac Output */
+ 	DisableRamdacOutput(deviceInfo.pSTGReg);
+ 
+@@ -394,6 +399,9 @@ static int kyrofb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
+ 	struct kyrofb_info *par = info->par;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	if (var->bits_per_pixel != 16 && var->bits_per_pixel != 32) {
+ 		printk(KERN_WARNING "kyrofb: depth not supported: %u\n", var->bits_per_pixel);
+ 		return -EINVAL;
+diff --git a/drivers/video/fbdev/riva/fbdev.c b/drivers/video/fbdev/riva/fbdev.c
+index 55554b0433cb4..84d5e23ad7d38 100644
+--- a/drivers/video/fbdev/riva/fbdev.c
++++ b/drivers/video/fbdev/riva/fbdev.c
+@@ -1084,6 +1084,9 @@ static int rivafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 	int mode_valid = 0;
+ 	
+ 	NVTRACE_ENTER();
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	switch (var->bits_per_pixel) {
+ 	case 1 ... 8:
+ 		var->red.offset = var->green.offset = var->blue.offset = 0;
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index b3f604669e2c3..643c6c2d0b728 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -362,7 +362,7 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 	 * Otherwise, the BIOS generally reboots when the SMI triggers.
+ 	 */
+ 	if (p->smi_res &&
+-	    (SMI_EN(p) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
++	    (inl(SMI_EN(p)) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
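
The iTCO bug is easy to misread: SMI_EN(p) yields the I/O port address of the SMI enable register, so the old code tested the enable bits against an address rather than against the register contents; inl() performs the actual 32-bit port read. The corrected test in isolation (the mask values are assumed from the iTCO context):

#include <linux/io.h>
#include <linux/types.h>

#define TCO_EN		(1 << 13)	/* as defined by the driver */
#define GBL_SMI_EN	(1 << 0)

static bool tco_smi_fully_routed(unsigned long smi_en_port)
{
	u32 val = inl(smi_en_port);	/* read the register, not the address */

	return (val & (TCO_EN | GBL_SMI_EN)) == (TCO_EN | GBL_SMI_EN);
}
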
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 9e7d9d0c763dd..b1492cb5c6be5 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1561,7 +1561,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 				div64_u64(zone_unusable * 100, bg->length));
+ 		trace_btrfs_reclaim_block_group(bg);
+ 		ret = btrfs_relocate_chunk(fs_info, bg->start);
+-		if (ret)
++		if (ret && ret != -EAGAIN)
+ 			btrfs_err(fs_info, "error relocating chunk %llu",
+ 				  bg->start);
+ 
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index e5e53e592d4f9..4aa4f4760b726 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2781,10 +2781,11 @@ enum btrfs_flush_state {
+ 	FLUSH_DELAYED_REFS	=	4,
+ 	FLUSH_DELALLOC		=	5,
+ 	FLUSH_DELALLOC_WAIT	=	6,
+-	ALLOC_CHUNK		=	7,
+-	ALLOC_CHUNK_FORCE	=	8,
+-	RUN_DELAYED_IPUTS	=	9,
+-	COMMIT_TRANS		=	10,
++	FLUSH_DELALLOC_FULL	=	7,
++	ALLOC_CHUNK		=	8,
++	ALLOC_CHUNK_FORCE	=	9,
++	RUN_DELAYED_IPUTS	=	10,
++	COMMIT_TRANS		=	11,
+ };
+ 
+ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index a59ab7b9aea08..b2f713c759e87 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3314,6 +3314,30 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	 */
+ 	fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
+ 
++	/*
++	 * Flag our filesystem as having big metadata blocks if they are bigger
++	 * than the page size.
++	 */
++	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
++		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
++			btrfs_info(fs_info,
++				"flagging fs with big metadata feature");
++		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
++	}
++
++	/* Set up fs_info before parsing mount options */
++	nodesize = btrfs_super_nodesize(disk_super);
++	sectorsize = btrfs_super_sectorsize(disk_super);
++	stripesize = sectorsize;
++	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
++	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
++
++	fs_info->nodesize = nodesize;
++	fs_info->sectorsize = sectorsize;
++	fs_info->sectorsize_bits = ilog2(sectorsize);
++	fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
++	fs_info->stripesize = stripesize;
++
+ 	ret = btrfs_parse_options(fs_info, options, sb->s_flags);
+ 	if (ret) {
+ 		err = ret;
+@@ -3340,30 +3364,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
+ 		btrfs_info(fs_info, "has skinny extents");
+ 
+-	/*
+-	 * flag our filesystem as having big metadata blocks if
+-	 * they are bigger than the page size
+-	 */
+-	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
+-		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
+-			btrfs_info(fs_info,
+-				"flagging fs with big metadata feature");
+-		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
+-	}
+-
+-	nodesize = btrfs_super_nodesize(disk_super);
+-	sectorsize = btrfs_super_sectorsize(disk_super);
+-	stripesize = sectorsize;
+-	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
+-	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
+-
+-	/* Cache block sizes */
+-	fs_info->nodesize = nodesize;
+-	fs_info->sectorsize = sectorsize;
+-	fs_info->sectorsize_bits = ilog2(sectorsize);
+-	fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
+-	fs_info->stripesize = stripesize;
+-
+ 	/*
+ 	 * mixed block groups end up with duplicate but slightly offset
+ 	 * extent buffers for the same range.  It leads to corruptions
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 2131ae5b9ed78..c92643e4c6787 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2652,8 +2652,11 @@ int btrfs_remove_free_space(struct btrfs_block_group *block_group,
+ 		 * btrfs_pin_extent_for_log_replay() when replaying the log.
+ 		 * Advance the pointer not to overwrite the tree-log nodes.
+ 		 */
+-		if (block_group->alloc_offset < offset + bytes)
+-			block_group->alloc_offset = offset + bytes;
++		if (block_group->start + block_group->alloc_offset <
++		    offset + bytes) {
++			block_group->alloc_offset =
++				offset + bytes - block_group->start;
++		}
+ 		return 0;
+ 	}
+ 
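
The free-space-cache fix is a coordinate-system bug: block_group->alloc_offset is relative to the block group's start, while offset is an absolute logical address, so the old code both compared and assigned across the two systems. With hypothetical numbers:

#include <stdio.h>

int main(void)
{
	/* A zoned block group at logical 1 MiB with its write pointer 4 KiB
	 * in, and a tree-log extent pinned at logical 1 MiB + 8 KiB.
	 */
	unsigned long long start = 1ULL << 20, alloc_offset = 4096;
	unsigned long long offset = (1ULL << 20) + 8192, bytes = 4096;

	/* Old code: relative-vs-absolute compare, absolute value stored. */
	unsigned long long old_result = offset + bytes;		/* 1060864 */

	/* Fixed code: compare and store consistently in relative terms. */
	unsigned long long new_result = alloc_offset;
	if (start + alloc_offset < offset + bytes)
		new_result = offset + bytes - start;		/* 12288 */

	printf("old=%llu new=%llu\n", old_result, new_result);
	return 0;
}
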
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index bd5689fa290e7..8132d503c83d7 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1290,11 +1290,6 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	nr_pages = (async_chunk->end - async_chunk->start + PAGE_SIZE) >>
+ 		PAGE_SHIFT;
+ 
+-	/* atomic_sub_return implies a barrier */
+-	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
+-	    5 * SZ_1M)
+-		cond_wake_up_nomb(&fs_info->async_submit_wait);
+-
+ 	/*
+ 	 * ->inode could be NULL if async_chunk_start has failed to compress,
+ 	 * in which case we don't have anything to submit, yet we need to
+@@ -1303,6 +1298,11 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	 */
+ 	if (async_chunk->inode)
+ 		submit_compressed_extents(async_chunk);
++
++	/* atomic_sub_return implies a barrier */
++	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
++	    5 * SZ_1M)
++		cond_wake_up_nomb(&fs_info->async_submit_wait);
+ }
+ 
+ static noinline void async_cow_free(struct btrfs_work *work)
+@@ -5088,15 +5088,13 @@ static int maybe_insert_hole(struct btrfs_root *root, struct btrfs_inode *inode,
+ 	int ret;
+ 
+ 	/*
+-	 * Still need to make sure the inode looks like it's been updated so
+-	 * that any holes get logged if we fsync.
++	 * If NO_HOLES is enabled, we don't need to do anything.
++	 * Later, up in the call chain, either btrfs_set_inode_last_sub_trans()
++	 * or btrfs_update_inode() will be called, which guarantee that the next
++	 * fsync will know this inode was changed and needs to be logged.
+ 	 */
+-	if (btrfs_fs_incompat(fs_info, NO_HOLES)) {
+-		inode->last_trans = fs_info->generation;
+-		inode->last_sub_trans = root->log_transid;
+-		inode->last_log_commit = root->last_log_commit;
++	if (btrfs_fs_incompat(fs_info, NO_HOLES))
+ 		return 0;
+-	}
+ 
+ 	/*
+ 	 * 1 - for the one we're dropping
+@@ -9809,10 +9807,6 @@ static int start_delalloc_inodes(struct btrfs_root *root,
+ 					 &work->work);
+ 		} else {
+ 			ret = sync_inode(inode, wbc);
+-			if (!ret &&
+-			    test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+-				     &BTRFS_I(inode)->runtime_flags))
+-				ret = sync_inode(inode, wbc);
+ 			btrfs_add_delayed_iput(inode);
+ 			if (ret || wbc->nr_to_write <= 0)
+ 				goto out;
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 5c0f8481e25e0..182d9fb3f5e94 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -1052,6 +1052,7 @@ static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
+ 				u64 len)
+ {
+ 	struct inode *inode = ordered->inode;
++	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
+ 	u64 file_offset = ordered->file_offset + pos;
+ 	u64 disk_bytenr = ordered->disk_bytenr + pos;
+ 	u64 num_bytes = len;
+@@ -1069,6 +1070,13 @@ static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
+ 	else
+ 		type = __ffs(flags_masked);
+ 
++	/*
++	 * The splitting extent is already counted and will be added again
++	 * in btrfs_add_ordered_extent_*(). Subtract num_bytes to avoid
++	 * double counting.
++	 */
++	percpu_counter_add_batch(&fs_info->ordered_bytes, -num_bytes,
++				 fs_info->delalloc_batch);
+ 	if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered->flags)) {
+ 		WARN_ON_ONCE(1);
+ 		ret = btrfs_add_ordered_extent_compress(BTRFS_I(inode),
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index f79bf85f24399..46e8415fa2c55 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -493,6 +493,11 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
+ 	long time_left;
+ 	int loops;
+ 
++	delalloc_bytes = percpu_counter_sum_positive(&fs_info->delalloc_bytes);
++	ordered_bytes = percpu_counter_sum_positive(&fs_info->ordered_bytes);
++	if (delalloc_bytes == 0 && ordered_bytes == 0)
++		return;
++
+ 	/* Calc the number of the pages we need flush for space reservation */
+ 	if (to_reclaim == U64_MAX) {
+ 		items = U64_MAX;
+@@ -500,22 +505,21 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
+ 		/*
+ 		 * to_reclaim is set to however much metadata we need to
+ 		 * reclaim, but reclaiming that much data doesn't really track
+-		 * exactly, so increase the amount to reclaim by 2x in order to
+-		 * make sure we're flushing enough delalloc to hopefully reclaim
+-		 * some metadata reservations.
++		 * exactly.  What we really want to do is reclaim full inode's
++		 * worth of reservations, however that's not available to us
++		 * here.  We will take a fraction of the delalloc bytes for our
++		 * flushing loops and hope for the best.  Delalloc will expand
++		 * the amount we write to cover an entire dirty extent, which
++		 * will reclaim the metadata reservation for that range.  If
++		 * it's not enough subsequent flush stages will be more
++		 * aggressive.
+ 		 */
++		to_reclaim = max(to_reclaim, delalloc_bytes >> 3);
+ 		items = calc_reclaim_items_nr(fs_info, to_reclaim) * 2;
+-		to_reclaim = items * EXTENT_SIZE_PER_ITEM;
+ 	}
+ 
+ 	trans = (struct btrfs_trans_handle *)current->journal_info;
+ 
+-	delalloc_bytes = percpu_counter_sum_positive(
+-						&fs_info->delalloc_bytes);
+-	ordered_bytes = percpu_counter_sum_positive(&fs_info->ordered_bytes);
+-	if (delalloc_bytes == 0 && ordered_bytes == 0)
+-		return;
+-
+ 	/*
+ 	 * If we are doing more ordered than delalloc we need to just wait on
+ 	 * ordered extents, otherwise we'll waste time trying to flush delalloc
+@@ -528,9 +532,49 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
+ 	while ((delalloc_bytes || ordered_bytes) && loops < 3) {
+ 		u64 temp = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT;
+ 		long nr_pages = min_t(u64, temp, LONG_MAX);
++		int async_pages;
+ 
+ 		btrfs_start_delalloc_roots(fs_info, nr_pages, true);
+ 
++		/*
++		 * We need to make sure any outstanding async pages are now
++		 * processed before we continue.  This is because things like
++		 * sync_inode() try to be smart and skip writing if the inode is
++		 * marked clean.  We don't use filemap_fwrite for flushing
++		 * because we want to control how many pages we write out at a
++		 * time, thus this is the only safe way to make sure we've
++		 * waited for outstanding compressed workers to have started
++		 * their jobs and thus have ordered extents set up properly.
++		 *
++		 * This exists because we do not want to wait for each
++		 * individual inode to finish its async work, we simply want to
++		 * start the IO on everybody, and then come back here and wait
++		 * for all of the async work to catch up.  Once we're done with
++		 * that we know we'll have ordered extents for everything and we
++		 * can decide if we wait for that or not.
++		 *
++		 * If we choose to replace this in the future, make absolutely
++		 * sure that the proper waiting is being done in the async case,
++		 * as there have been bugs in that area before.
++		 */
++		async_pages = atomic_read(&fs_info->async_delalloc_pages);
++		if (!async_pages)
++			goto skip_async;
++
++		/*
++		 * We don't want to wait forever, if we wrote less pages in this
++		 * loop than we have outstanding, only wait for that number of
++		 * pages, otherwise we can wait for all async pages to finish
++		 * before continuing.
++		 */
++		if (async_pages > nr_pages)
++			async_pages -= nr_pages;
++		else
++			async_pages = 0;
++		wait_event(fs_info->async_submit_wait,
++			   atomic_read(&fs_info->async_delalloc_pages) <=
++			   async_pages);
++skip_async:
+ 		loops++;
+ 		if (wait_ordered && !trans) {
+ 			btrfs_wait_ordered_roots(fs_info, items, 0, (u64)-1);
+@@ -595,8 +639,11 @@ static void flush_space(struct btrfs_fs_info *fs_info,
+ 		break;
+ 	case FLUSH_DELALLOC:
+ 	case FLUSH_DELALLOC_WAIT:
++	case FLUSH_DELALLOC_FULL:
++		if (state == FLUSH_DELALLOC_FULL)
++			num_bytes = U64_MAX;
+ 		shrink_delalloc(fs_info, space_info, num_bytes,
+-				state == FLUSH_DELALLOC_WAIT, for_preempt);
++				state != FLUSH_DELALLOC, for_preempt);
+ 		break;
+ 	case FLUSH_DELAYED_REFS_NR:
+ 	case FLUSH_DELAYED_REFS:
+@@ -686,7 +733,7 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ {
+ 	u64 global_rsv_size = fs_info->global_block_rsv.reserved;
+ 	u64 ordered, delalloc;
+-	u64 thresh = div_factor_fine(space_info->total_bytes, 98);
++	u64 thresh = div_factor_fine(space_info->total_bytes, 90);
+ 	u64 used;
+ 
+ 	/* If we're just plain full then async reclaim just slows us down. */
+@@ -694,6 +741,20 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ 	     global_rsv_size) >= thresh)
+ 		return false;
+ 
++	used = space_info->bytes_may_use + space_info->bytes_pinned;
++
++	/* The total flushable belongs to the global rsv, don't flush. */
++	if (global_rsv_size >= used)
++		return false;
++
++	/*
++	 * 128MiB is 1/4 of the maximum global rsv size.  If we have less than
++	 * that devoted to other reservations then there's no sense in flushing,
++	 * we don't have a lot of things that need flushing.
++	 */
++	if (used - global_rsv_size <= SZ_128M)
++		return false;
++
+ 	/*
+ 	 * We have tickets queued, bail so we don't compete with the async
+ 	 * flushers.
+@@ -904,6 +965,14 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
+ 				commit_cycles--;
+ 		}
+ 
++		/*
++		 * We do not want to empty the system of delalloc unless we're
++		 * under heavy pressure, so allow one trip through the flushing
++		 * logic before we start doing a FLUSH_DELALLOC_FULL.
++		 */
++		if (flush_state == FLUSH_DELALLOC_FULL && !commit_cycles)
++			flush_state++;
++
+ 		/*
+ 		 * We don't want to force a chunk allocation until we've tried
+ 		 * pretty hard to reclaim space.  Think of the case where we
+@@ -1067,7 +1136,7 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+  *   so if we now have space to allocate do the force chunk allocation.
+  */
+ static const enum btrfs_flush_state data_flush_states[] = {
+-	FLUSH_DELALLOC_WAIT,
++	FLUSH_DELALLOC_FULL,
+ 	RUN_DELAYED_IPUTS,
+ 	COMMIT_TRANS,
+ 	ALLOC_CHUNK_FORCE,
+@@ -1156,6 +1225,7 @@ static const enum btrfs_flush_state evict_flush_states[] = {
+ 	FLUSH_DELAYED_REFS,
+ 	FLUSH_DELALLOC,
+ 	FLUSH_DELALLOC_WAIT,
++	FLUSH_DELALLOC_FULL,
+ 	ALLOC_CHUNK,
+ 	COMMIT_TRANS,
+ };
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index e6430ac9bbe85..7037e5855d2a8 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -753,7 +753,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ 			 */
+ 			ret = btrfs_lookup_data_extent(fs_info, ins.objectid,
+ 						ins.offset);
+-			if (ret == 0) {
++			if (ret < 0) {
++				goto out;
++			} else if (ret == 0) {
+ 				btrfs_init_generic_ref(&ref,
+ 						BTRFS_ADD_DELAYED_REF,
+ 						ins.objectid, ins.offset, 0);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 354ffd8f81af9..10dd2d210b0f4 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1130,6 +1130,9 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 		fs_devices->rw_devices--;
+ 	}
+ 
++	if (device->devid == BTRFS_DEV_REPLACE_DEVID)
++		clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state);
++
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		fs_devices->missing_devices--;
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 39db97f149b9b..ba562efdf07b8 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1746,6 +1746,9 @@ struct ceph_cap_flush *ceph_alloc_cap_flush(void)
+ 	struct ceph_cap_flush *cf;
+ 
+ 	cf = kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	if (!cf)
++		return NULL;
++
+ 	cf->is_capsnap = false;
+ 	return cf;
+ }
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index c5785fd3f52e8..606fd7d6cb713 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -877,7 +877,7 @@ sess_alloc_buffer(struct sess_data *sess_data, int wct)
+ 	return 0;
+ 
+ out_free_smb_buf:
+-	kfree(smb_buf);
++	cifs_small_buf_release(smb_buf);
+ 	sess_data->iov[0].iov_base = NULL;
+ 	sess_data->iov[0].iov_len = 0;
+ 	sess_data->buf0_type = CIFS_NO_BUFFER;
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 455561826c7dc..b8b3f1160afa6 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1340,12 +1340,6 @@ out_destroy_crypt:
+ 
+ 	for (--i; i >= 0; i--)
+ 		fscrypt_finalize_bounce_page(&cc->cpages[i]);
+-	for (i = 0; i < cc->nr_cpages; i++) {
+-		if (!cc->cpages[i])
+-			continue;
+-		f2fs_compress_free_page(cc->cpages[i]);
+-		cc->cpages[i] = NULL;
+-	}
+ out_put_cic:
+ 	kmem_cache_free(cic_entry_slab, cic);
+ out_put_dnode:
+@@ -1356,6 +1350,12 @@ out_unlock_op:
+ 	else
+ 		f2fs_unlock_op(sbi);
+ out_free:
++	for (i = 0; i < cc->nr_cpages; i++) {
++		if (!cc->cpages[i])
++			continue;
++		f2fs_compress_free_page(cc->cpages[i]);
++		cc->cpages[i] = NULL;
++	}
+ 	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ 	return -EAGAIN;
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index d2cf48c5a2e49..a86f004c0c07e 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -116,6 +116,7 @@ struct bio_post_read_ctx {
+ 	struct f2fs_sb_info *sbi;
+ 	struct work_struct work;
+ 	unsigned int enabled_steps;
++	block_t fs_blkaddr;
+ };
+ 
+ static void f2fs_finish_read_bio(struct bio *bio)
+@@ -228,7 +229,7 @@ static void f2fs_handle_step_decompress(struct bio_post_read_ctx *ctx)
+ 	struct bio_vec *bv;
+ 	struct bvec_iter_all iter_all;
+ 	bool all_compressed = true;
+-	block_t blkaddr = SECTOR_TO_BLOCK(ctx->bio->bi_iter.bi_sector);
++	block_t blkaddr = ctx->fs_blkaddr;
+ 
+ 	bio_for_each_segment_all(bv, ctx->bio, iter_all) {
+ 		struct page *page = bv->bv_page;
+@@ -1003,6 +1004,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
+ 		ctx->bio = bio;
+ 		ctx->sbi = sbi;
+ 		ctx->enabled_steps = post_read_steps;
++		ctx->fs_blkaddr = blkaddr;
+ 		bio->bi_private = ctx;
+ 	}
+ 
+@@ -1490,7 +1492,21 @@ next_dnode:
+ 	if (err) {
+ 		if (flag == F2FS_GET_BLOCK_BMAP)
+ 			map->m_pblk = 0;
++
+ 		if (err == -ENOENT) {
++			/*
++			 * There is one exceptional case where read_node_page()
++			 * may return -ENOENT because the filesystem has been
++			 * shut down or hit cp_error, so convert the error
++			 * number to EIO in that case.
++			 */
++			if (map->m_may_create &&
++				(is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
++				f2fs_cp_error(sbi))) {
++				err = -EIO;
++				goto unlock_out;
++			}
++
+ 			err = 0;
+ 			if (map->m_next_pgofs)
+ 				*map->m_next_pgofs =
+@@ -2137,6 +2153,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 			continue;
+ 		}
+ 		unlock_page(page);
++		if (for_write)
++			put_page(page);
+ 		cc->rpages[i] = NULL;
+ 		cc->nr_rpages--;
+ 	}
+@@ -2498,6 +2516,8 @@ bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+ 		return true;
+ 	if (f2fs_is_atomic_file(inode))
+ 		return true;
++	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK))
++		return true;
+ 
+ 	/* swap file is migrating in aligned write mode */
+ 	if (is_inode_flag_set(inode, FI_ALIGNED_WRITE))
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 456651682daf4..c250bf46ef5ed 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -1000,6 +1000,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(d->inode);
+ 	struct blk_plug plug;
+ 	bool readdir_ra = sbi->readdir_ra == 1;
++	bool found_valid_dirent = false;
+ 	int err = 0;
+ 
+ 	bit_pos = ((unsigned long)ctx->pos % d->max);
+@@ -1014,13 +1015,15 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 
+ 		de = &d->dentry[bit_pos];
+ 		if (de->name_len == 0) {
++			if (found_valid_dirent || !bit_pos) {
++				printk_ratelimited(
++					"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
++					KERN_WARNING, sbi->sb->s_id,
++					le32_to_cpu(de->ino));
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			}
+ 			bit_pos++;
+ 			ctx->pos = start_pos + bit_pos;
+-			printk_ratelimited(
+-				"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
+-				KERN_WARNING, sbi->sb->s_id,
+-				le32_to_cpu(de->ino));
+-			set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 			continue;
+ 		}
+ 
+@@ -1063,6 +1066,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 			f2fs_ra_node_page(sbi, le32_to_cpu(de->ino));
+ 
+ 		ctx->pos = start_pos + bit_pos;
++		found_valid_dirent = true;
+ 	}
+ out:
+ 	if (readdir_ra)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index ee8eb33e2c25c..db95829904e5c 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -43,6 +43,7 @@ enum {
+ 	FAULT_KVMALLOC,
+ 	FAULT_PAGE_ALLOC,
+ 	FAULT_PAGE_GET,
++	FAULT_ALLOC_BIO,	/* it's obsolete due to bio_alloc() will never fail */
+ 	FAULT_ALLOC_NID,
+ 	FAULT_ORPHAN,
+ 	FAULT_BLOCK,
+@@ -4137,7 +4138,8 @@ static inline void set_compress_context(struct inode *inode)
+ 				1 << COMPRESS_CHKSUM : 0;
+ 	F2FS_I(inode)->i_cluster_size =
+ 			1 << F2FS_I(inode)->i_log_cluster_size;
+-	if (F2FS_I(inode)->i_compress_algorithm == COMPRESS_LZ4 &&
++	if ((F2FS_I(inode)->i_compress_algorithm == COMPRESS_LZ4 ||
++		F2FS_I(inode)->i_compress_algorithm == COMPRESS_ZSTD) &&
+ 			F2FS_OPTION(sbi).compress_level)
+ 		F2FS_I(inode)->i_compress_flag |=
+ 				F2FS_OPTION(sbi).compress_level <<
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 97d48c5bdebcb..74f934da825f2 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1084,7 +1084,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 		}
+ 
+ 		if (pg_start < pg_end) {
+-			struct address_space *mapping = inode->i_mapping;
+ 			loff_t blk_start, blk_end;
+ 			struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 
+@@ -1096,8 +1095,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
+ 
+-			truncate_inode_pages_range(mapping, blk_start,
+-					blk_end - 1);
++			truncate_pagecache_range(inode, blk_start, blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 0e42ee5f77707..70234a7040c88 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1497,8 +1497,10 @@ next_step:
+ 			int err;
+ 
+ 			if (S_ISREG(inode->i_mode)) {
+-				if (!down_write_trylock(&fi->i_gc_rwsem[READ]))
++				if (!down_write_trylock(&fi->i_gc_rwsem[READ])) {
++					sbi->skipped_gc_rwsem++;
+ 					continue;
++				}
+ 				if (!down_write_trylock(
+ 						&fi->i_gc_rwsem[WRITE])) {
+ 					sbi->skipped_gc_rwsem++;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 0be9e2d7120e3..1b0fe6e64b7d3 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1321,7 +1321,8 @@ static int read_node_page(struct page *page, int op_flags)
+ 	if (err)
+ 		return err;
+ 
+-	if (unlikely(ni.blk_addr == NULL_ADDR) ||
++	/* NEW_ADDR can be seen, after cp_error drops some dirty node pages */
++	if (unlikely(ni.blk_addr == NULL_ADDR || ni.blk_addr == NEW_ADDR) ||
+ 			is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN)) {
+ 		ClearPageUptodate(page);
+ 		return -ENOENT;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 15cc89eef28d6..f9b7fb785e1d7 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -3563,7 +3563,7 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
+ 		goto drop_bio;
+ 	}
+ 
+-	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK) || f2fs_cp_error(sbi)) {
++	if (f2fs_cp_error(sbi)) {
+ 		err = -EIO;
+ 		goto drop_bio;
+ 	}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index ce703e6fdafc0..2b093a209ae40 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2071,11 +2071,10 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	bool need_restart_ckpt = false, need_stop_ckpt = false;
+ 	bool need_restart_flush = false, need_stop_flush = false;
+ 	bool no_extent_cache = !test_opt(sbi, EXTENT_CACHE);
+-	bool disable_checkpoint = test_opt(sbi, DISABLE_CHECKPOINT);
++	bool enable_checkpoint = !test_opt(sbi, DISABLE_CHECKPOINT);
+ 	bool no_io_align = !F2FS_IO_ALIGNED(sbi);
+ 	bool no_atgc = !test_opt(sbi, ATGC);
+ 	bool no_compress_cache = !test_opt(sbi, COMPRESS_CACHE);
+-	bool checkpoint_changed;
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
+ #endif
+@@ -2120,8 +2119,6 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	err = parse_options(sb, data, true);
+ 	if (err)
+ 		goto restore_opts;
+-	checkpoint_changed =
+-			disable_checkpoint != test_opt(sbi, DISABLE_CHECKPOINT);
+ 
+ 	/*
+ 	 * Previous and new state of filesystem is RO,
+@@ -2243,7 +2240,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 		need_stop_flush = true;
+ 	}
+ 
+-	if (checkpoint_changed) {
++	if (enable_checkpoint == !!test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 		if (test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 			err = f2fs_disable_checkpoint(sbi);
+ 			if (err)
+@@ -2527,6 +2524,33 @@ static int f2fs_enable_quotas(struct super_block *sb)
+ 	return 0;
+ }
+ 
++static int f2fs_quota_sync_file(struct f2fs_sb_info *sbi, int type)
++{
++	struct quota_info *dqopt = sb_dqopt(sbi->sb);
++	struct address_space *mapping = dqopt->files[type]->i_mapping;
++	int ret = 0;
++
++	ret = dquot_writeback_dquots(sbi->sb, type);
++	if (ret)
++		goto out;
++
++	ret = filemap_fdatawrite(mapping);
++	if (ret)
++		goto out;
++
++	/* if we are using journalled quota */
++	if (is_journalled_quota(sbi))
++		goto out;
++
++	ret = filemap_fdatawait(mapping);
++
++	truncate_inode_pages(&dqopt->files[type]->i_data, 0);
++out:
++	if (ret)
++		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++	return ret;
++}
++
+ int f2fs_quota_sync(struct super_block *sb, int type)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -2534,57 +2558,42 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 	int cnt;
+ 	int ret;
+ 
+-	/*
+-	 * do_quotactl
+-	 *  f2fs_quota_sync
+-	 *  down_read(quota_sem)
+-	 *  dquot_writeback_dquots()
+-	 *  f2fs_dquot_commit
+-	 *                            block_operation
+-	 *                            down_read(quota_sem)
+-	 */
+-	f2fs_lock_op(sbi);
+-
+-	down_read(&sbi->quota_sem);
+-	ret = dquot_writeback_dquots(sb, type);
+-	if (ret)
+-		goto out;
+-
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+ 	 * that userspace sees the changes.
+ 	 */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		struct address_space *mapping;
+ 
+ 		if (type != -1 && cnt != type)
+ 			continue;
+-		if (!sb_has_quota_active(sb, cnt))
+-			continue;
+ 
+-		mapping = dqopt->files[cnt]->i_mapping;
++		if (!sb_has_quota_active(sb, type))
++			return 0;
+ 
+-		ret = filemap_fdatawrite(mapping);
+-		if (ret)
+-			goto out;
++		inode_lock(dqopt->files[cnt]);
+ 
+-		/* if we are using journalled quota */
+-		if (is_journalled_quota(sbi))
+-			continue;
++		/*
++		 * do_quotactl
++		 *  f2fs_quota_sync
++		 *  down_read(quota_sem)
++		 *  dquot_writeback_dquots()
++		 *  f2fs_dquot_commit
++		 *			      block_operation
++		 *			      down_read(quota_sem)
++		 */
++		f2fs_lock_op(sbi);
++		down_read(&sbi->quota_sem);
+ 
+-		ret = filemap_fdatawait(mapping);
+-		if (ret)
+-			set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
++		ret = f2fs_quota_sync_file(sbi, cnt);
++
++		up_read(&sbi->quota_sem);
++		f2fs_unlock_op(sbi);
+ 
+-		inode_lock(dqopt->files[cnt]);
+-		truncate_inode_pages(&dqopt->files[cnt]->i_data, 0);
+ 		inode_unlock(dqopt->files[cnt]);
++
++		if (ret)
++			break;
+ 	}
+-out:
+-	if (ret)
+-		set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
+-	up_read(&sbi->quota_sem);
+-	f2fs_unlock_op(sbi);
+ 	return ret;
+ }
+ 
+@@ -3217,11 +3226,13 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+-	if (le32_to_cpu(raw_super->cp_payload) >
+-				(blocks_per_seg - F2FS_CP_PACKS)) {
+-		f2fs_info(sbi, "Insane cp_payload (%u > %u)",
++	if (le32_to_cpu(raw_super->cp_payload) >=
++				(blocks_per_seg - F2FS_CP_PACKS -
++				NR_CURSEG_PERSIST_TYPE)) {
++		f2fs_info(sbi, "Insane cp_payload (%u >= %u)",
+ 			  le32_to_cpu(raw_super->cp_payload),
+-			  blocks_per_seg - F2FS_CP_PACKS);
++			  blocks_per_seg - F2FS_CP_PACKS -
++			  NR_CURSEG_PERSIST_TYPE);
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+@@ -3257,6 +3268,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	unsigned int cp_pack_start_sum, cp_payload;
+ 	block_t user_block_count, valid_user_blocks;
+ 	block_t avail_node_count, valid_node_count;
++	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
+ 	int i, j;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+@@ -3387,6 +3399,17 @@ skip_cross:
+ 		return 1;
+ 	}
+ 
++	nat_blocks = nat_segs << log_blocks_per_seg;
++	nat_bits_bytes = nat_blocks / BITS_PER_BYTE;
++	nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
++	if (__is_set_ckpt_flags(ckpt, CP_NAT_BITS_FLAG) &&
++		(cp_payload + F2FS_CP_PACKS +
++		NR_CURSEG_PERSIST_TYPE + nat_bits_blocks >= blocks_per_seg)) {
++		f2fs_warn(sbi, "Insane cp_payload: %u, nat_bits_blocks: %u)",
++			  cp_payload, nat_bits_blocks);
++		return -EFSCORRUPTED;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_err(sbi, "A bug case: need to run fsck");
+ 		return 1;
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 751bc5b1cddf9..6104f627cc712 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -74,10 +74,8 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ static int fscache_set_key(struct fscache_cookie *cookie,
+ 			   const void *index_key, size_t index_key_len)
+ {
+-	unsigned long long h;
+ 	u32 *buf;
+ 	int bufs;
+-	int i;
+ 
+ 	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+@@ -91,17 +89,7 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+-
+-	/* Calculate a hash and combine this with the length in the first word
+-	 * or first half word
+-	 */
+-	h = (unsigned long)cookie->parent;
+-	h += index_key_len + cookie->type;
+-
+-	for (i = 0; i < bufs; i++)
+-		h += buf[i];
+-
+-	cookie->key_hash = h ^ (h >> 32);
++	cookie->key_hash = fscache_hash(0, buf, bufs);
+ 	return 0;
+ }
+ 
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index c483863b740ad..aee639d980bad 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -97,6 +97,8 @@ extern struct workqueue_struct *fscache_object_wq;
+ extern struct workqueue_struct *fscache_op_wq;
+ DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
+ 
++extern unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n);
++
+ static inline bool fscache_object_congested(void)
+ {
+ 	return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq);
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index c1e6cc9091aac..4207f98e405fd 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -93,6 +93,45 @@ static struct ctl_table fscache_sysctls_root[] = {
+ };
+ #endif
+ 
++/*
++ * Mixing scores (in bits) for (7,20):
++ * Input delta: 1-bit      2-bit
++ * 1 round:     330.3     9201.6
++ * 2 rounds:   1246.4    25475.4
++ * 3 rounds:   1907.1    31295.1
++ * 4 rounds:   2042.3    31718.6
++ * Perfect:    2048      31744
++ *            (32*64)   (32*31/2 * 64)
++ */
++#define HASH_MIX(x, y, a)	\
++	(	x ^= (a),	\
++	y ^= x,	x = rol32(x, 7),\
++	x += y,	y = rol32(y,20),\
++	y *= 9			)
++
++static inline unsigned int fold_hash(unsigned long x, unsigned long y)
++{
++	/* Use arch-optimized multiply if one exists */
++	return __hash_32(y ^ __hash_32(x));
++}
++
++/*
++ * Generate a hash.  This is derived from full_name_hash(), but we want to be
++ * sure it is arch independent and that it doesn't change as bits of the
++ * computed hash value might appear on disk.  The caller also guarantees that
++ * the hashed data will be a series of aligned 32-bit words.
++ */
++unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n)
++{
++	unsigned int a, x = 0, y = salt;
++
++	for (; n; n--) {
++		a = *data++;
++		HASH_MIX(x, y, a);
++	}
++	return fold_hash(x, y);
++}
++
+ /*
+  * initialise the fs caching module
+  */
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 54d3fbeb3002f..384565d63eea8 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -610,16 +610,13 @@ static int freeze_go_xmote_bh(struct gfs2_glock *gl)
+ 		j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ 
+ 		error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+-		if (error)
+-			gfs2_consist(sdp);
+-		if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT))
+-			gfs2_consist(sdp);
+-
+-		/*  Initialize some head of the log stuff  */
+-		if (!gfs2_withdrawn(sdp)) {
+-			sdp->sd_log_sequence = head.lh_sequence + 1;
+-			gfs2_log_pointers_init(sdp, head.lh_blkno);
+-		}
++		if (gfs2_assert_withdraw_delayed(sdp, !error))
++			return error;
++		if (gfs2_assert_withdraw_delayed(sdp, head.lh_flags &
++						 GFS2_LOG_HEAD_UNMOUNT))
++			return -EIO;
++		sdp->sd_log_sequence = head.lh_sequence + 1;
++		gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index dac040162ecc1..50578f881e6de 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -299,6 +299,11 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
+ 	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
+ 	gfs2_update_request_times(gl);
+ 
++	/* don't want to call dlm if we've unmounted the lock protocol */
++	if (test_bit(DFL_UNMOUNT, &ls->ls_recover_flags)) {
++		gfs2_glock_free(gl);
++		return;
++	}
+ 	/* don't want to skip dlm_unlock writing the lvb when lock has one */
+ 
+ 	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 2cc7f75ff24d7..cb5d84f6b7693 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -236,9 +236,9 @@ static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
+  * We need a worker. If we find a free one, we're good. If not, and we're
+  * below the max number of workers, create one.
+  */
+-static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
++static void io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ {
+-	bool ret;
++	bool do_create = false, first = false;
+ 
+ 	/*
+ 	 * Most likely an attempt to queue unbounded work on an io_wq that
+@@ -247,25 +247,18 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	if (unlikely(!acct->max_workers))
+ 		pr_warn_once("io-wq is not configured for unbound workers");
+ 
+-	rcu_read_lock();
+-	ret = io_wqe_activate_free_worker(wqe);
+-	rcu_read_unlock();
+-
+-	if (!ret) {
+-		bool do_create = false, first = false;
+-
+-		raw_spin_lock_irq(&wqe->lock);
+-		if (acct->nr_workers < acct->max_workers) {
+-			atomic_inc(&acct->nr_running);
+-			atomic_inc(&wqe->wq->worker_refs);
+-			if (!acct->nr_workers)
+-				first = true;
+-			acct->nr_workers++;
+-			do_create = true;
+-		}
+-		raw_spin_unlock_irq(&wqe->lock);
+-		if (do_create)
+-			create_io_worker(wqe->wq, wqe, acct->index, first);
++	raw_spin_lock_irq(&wqe->lock);
++	if (acct->nr_workers < acct->max_workers) {
++		if (!acct->nr_workers)
++			first = true;
++		acct->nr_workers++;
++		do_create = true;
++	}
++	raw_spin_unlock_irq(&wqe->lock);
++	if (do_create) {
++		atomic_inc(&acct->nr_running);
++		atomic_inc(&wqe->wq->worker_refs);
++		create_io_worker(wqe->wq, wqe, acct->index, first);
+ 	}
+ }
+ 
+@@ -793,7 +786,8 @@ append:
+ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ {
+ 	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+-	int work_flags;
++	unsigned work_flags = work->flags;
++	bool do_create;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -806,15 +800,19 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ 		return;
+ 	}
+ 
+-	work_flags = work->flags;
+ 	raw_spin_lock_irqsave(&wqe->lock, flags);
+ 	io_wqe_insert_work(wqe, work);
+ 	wqe->flags &= ~IO_WQE_FLAG_STALLED;
++
++	rcu_read_lock();
++	do_create = !io_wqe_activate_free_worker(wqe);
++	rcu_read_unlock();
++
+ 	raw_spin_unlock_irqrestore(&wqe->lock, flags);
+ 
+-	if ((work_flags & IO_WQ_WORK_CONCURRENT) ||
+-	    !atomic_read(&acct->nr_running))
+-		io_wqe_wake_worker(wqe, acct);
++	if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
++	    !atomic_read(&acct->nr_running)))
++		io_wqe_create_worker(wqe, acct);
+ }
+ 
+ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 14bebc62db2d4..c5d4638f6d7fd 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3484,7 +3484,7 @@ static int io_renameat_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3535,7 +3535,8 @@ static int io_unlinkat_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3581,8 +3582,8 @@ static int io_shutdown_prep(struct io_kiocb *req,
+ #if defined(CONFIG_NET)
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
+-	    sqe->buf_index)
++	if (unlikely(sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
++		     sqe->buf_index || sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->shutdown.how = READ_ONCE(sqe->len);
+@@ -3730,7 +3731,8 @@ static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.flags = READ_ONCE(sqe->fsync_flags);
+@@ -3763,7 +3765,8 @@ static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
+ static int io_fallocate_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -3794,7 +3797,7 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ 	const char __user *fname;
+ 	int ret;
+ 
+-	if (unlikely(sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->ioprio || sqe->buf_index || sqe->splice_fd_in))
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3918,7 +3921,8 @@ static int io_remove_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off)
++	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -3989,7 +3993,7 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags)
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -4076,7 +4080,7 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_EPOLL)
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4122,7 +4126,7 @@ static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
+ static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+-	if (sqe->ioprio || sqe->buf_index || sqe->off)
++	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4157,7 +4161,7 @@ static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->addr)
++	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4195,7 +4199,7 @@ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4231,7 +4235,7 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+ 	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
+-	    sqe->rw_flags || sqe->buf_index)
++	    sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4292,7 +4296,8 @@ static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.off = READ_ONCE(sqe->off);
+@@ -4719,7 +4724,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -4767,7 +4772,8 @@ static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -5375,7 +5381,7 @@ static int io_poll_update_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	flags = READ_ONCE(sqe->len);
+ 	if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
+@@ -5610,7 +5616,7 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len)
++	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tr->addr = READ_ONCE(sqe->addr);
+@@ -5669,7 +5675,8 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
++	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (off && is_timeout_link)
+ 		return -EINVAL;
+@@ -5820,7 +5827,8 @@ static int io_async_cancel_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->cancel.addr = READ_ONCE(sqe->addr);
+@@ -5877,7 +5885,7 @@ static int io_rsrc_update_prep(struct io_kiocb *req,
+ {
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->rw_flags)
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->rsrc_update.offset = READ_ONCE(sqe->off);
+@@ -6302,6 +6310,7 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ 	if (timeout)
+ 		io_queue_linked_timeout(timeout);
+ 
++	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+ 	if (work->flags & IO_WQ_WORK_CANCEL)
+ 		ret = -ECANCELED;
+ 
+@@ -7126,14 +7135,14 @@ static void **io_alloc_page_table(size_t size)
+ 	size_t init_size = size;
+ 	void **table;
+ 
+-	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL);
++	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
+ 	if (!table)
+ 		return NULL;
+ 
+ 	for (i = 0; i < nr_tables; i++) {
+ 		unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
+ 
+-		table[i] = kzalloc(this_size, GFP_KERNEL);
++		table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
+ 		if (!table[i]) {
+ 			io_free_page_table(table, init_size);
+ 			return NULL;
+@@ -9129,8 +9138,8 @@ static void io_uring_clean_tctx(struct io_uring_task *tctx)
+ 		 * Must be after io_uring_del_task_file() (removes nodes under
+ 		 * uring_lock) to avoid race with io_uring_try_cancel_iowq().
+ 		 */
+-		tctx->io_wq = NULL;
+ 		io_wq_put_and_exit(wq);
++		tctx->io_wq = NULL;
+ 	}
+ }
+ 
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 87ccb3438becd..b06138c6190be 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1016,7 +1016,7 @@ iomap_finish_page_writeback(struct inode *inode, struct page *page,
+ 
+ 	if (error) {
+ 		SetPageError(page);
+-		mapping_set_error(inode->i_mapping, -EIO);
++		mapping_set_error(inode->i_mapping, error);
+ 	}
+ 
+ 	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 498cb70c2c0d0..273a81971ed57 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -395,28 +395,10 @@ nlmsvc_release_lockowner(struct nlm_lock *lock)
+ 		nlmsvc_put_lockowner(lock->fl.fl_owner);
+ }
+ 
+-static void nlmsvc_locks_copy_lock(struct file_lock *new, struct file_lock *fl)
+-{
+-	struct nlm_lockowner *nlm_lo = (struct nlm_lockowner *)fl->fl_owner;
+-	new->fl_owner = nlmsvc_get_lockowner(nlm_lo);
+-}
+-
+-static void nlmsvc_locks_release_private(struct file_lock *fl)
+-{
+-	nlmsvc_put_lockowner((struct nlm_lockowner *)fl->fl_owner);
+-}
+-
+-static const struct file_lock_operations nlmsvc_lock_ops = {
+-	.fl_copy_lock = nlmsvc_locks_copy_lock,
+-	.fl_release_private = nlmsvc_locks_release_private,
+-};
+-
+ void nlmsvc_locks_init_private(struct file_lock *fl, struct nlm_host *host,
+ 						pid_t pid)
+ {
+ 	fl->fl_owner = nlmsvc_find_lockowner(host, pid);
+-	if (fl->fl_owner != NULL)
+-		fl->fl_ops = &nlmsvc_lock_ops;
+ }
+ 
+ /*
+@@ -788,9 +770,21 @@ nlmsvc_notify_blocked(struct file_lock *fl)
+ 	printk(KERN_WARNING "lockd: notification for unknown block!\n");
+ }
+ 
++static fl_owner_t nlmsvc_get_owner(fl_owner_t owner)
++{
++	return nlmsvc_get_lockowner(owner);
++}
++
++static void nlmsvc_put_owner(fl_owner_t owner)
++{
++	nlmsvc_put_lockowner(owner);
++}
++
+ const struct lock_manager_operations nlmsvc_lock_operations = {
+ 	.lm_notify = nlmsvc_notify_blocked,
+ 	.lm_grant = nlmsvc_grant_deferred,
++	.lm_get_owner = nlmsvc_get_owner,
++	.lm_put_owner = nlmsvc_put_owner,
+ };
+ 
+ /*
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index 37a1a88df7717..d772c20bbfd15 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -180,5 +180,5 @@ const struct export_operations nfs_export_ops = {
+ 	.fetch_iversion = nfs_fetch_iversion,
+ 	.flags = EXPORT_OP_NOWCC|EXPORT_OP_NOSUBTREECHK|
+ 		EXPORT_OP_CLOSE_BEFORE_UNLINK|EXPORT_OP_REMOTE_FS|
+-		EXPORT_OP_NOATOMIC_ATTR,
++		EXPORT_OP_NOATOMIC_ATTR|EXPORT_OP_SYNC_LOCKS,
+ };
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index ef14ea0b6ab8d..51049499e98ff 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -335,7 +335,7 @@ static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
+ 
+ static void pnfs_barrier_update(struct pnfs_layout_hdr *lo, u32 newseq)
+ {
+-	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier))
++	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier) || !lo->plh_barrier)
+ 		lo->plh_barrier = newseq;
+ }
+ 
+@@ -347,11 +347,15 @@ pnfs_set_plh_return_info(struct pnfs_layout_hdr *lo, enum pnfs_iomode iomode,
+ 		iomode = IOMODE_ANY;
+ 	lo->plh_return_iomode = iomode;
+ 	set_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
+-	if (seq != 0) {
+-		WARN_ON_ONCE(lo->plh_return_seq != 0 && lo->plh_return_seq != seq);
++	/*
++	 * We must set lo->plh_return_seq to avoid livelocks with
++	 * pnfs_layout_need_return()
++	 */
++	if (seq == 0)
++		seq = be32_to_cpu(lo->plh_stateid.seqid);
++	if (!lo->plh_return_seq || pnfs_seqid_is_newer(seq, lo->plh_return_seq))
+ 		lo->plh_return_seq = seq;
+-		pnfs_barrier_update(lo, seq);
+-	}
++	pnfs_barrier_update(lo, seq);
+ }
+ 
+ static void
+@@ -1000,7 +1004,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
+ {
+ 	u32 seqid = be32_to_cpu(stateid->seqid);
+ 
+-	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
++	return lo->plh_barrier && pnfs_seqid_is_newer(lo->plh_barrier, seqid);
+ }
+ 
+ /* lget is set to 1 if called from inside send_layoutget call chain */
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 2bedc7839ec56..3d805f5b1f5d2 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6835,6 +6835,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	struct nfsd4_blocked_lock *nbl = NULL;
+ 	struct file_lock *file_lock = NULL;
+ 	struct file_lock *conflock = NULL;
++	struct super_block *sb;
+ 	__be32 status = 0;
+ 	int lkflg;
+ 	int err;
+@@ -6856,6 +6857,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_lock: permission denied!\n");
+ 		return status;
+ 	}
++	sb = cstate->current_fh.fh_dentry->d_sb;
+ 
+ 	if (lock->lk_is_new) {
+ 		if (nfsd4_has_session(cstate))
+@@ -6904,7 +6906,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	fp = lock_stp->st_stid.sc_file;
+ 	switch (lock->lk_type) {
+ 		case NFS4_READW_LT:
+-			if (nfsd4_has_session(cstate))
++			if (nfsd4_has_session(cstate) &&
++			    !(sb->s_export_op->flags & EXPORT_OP_SYNC_LOCKS))
+ 				fl_flags |= FL_SLEEP;
+ 			fallthrough;
+ 		case NFS4_READ_LT:
+@@ -6916,7 +6919,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			fl_type = F_RDLCK;
+ 			break;
+ 		case NFS4_WRITEW_LT:
+-			if (nfsd4_has_session(cstate))
++			if (nfsd4_has_session(cstate) &&
++			    !(sb->s_export_op->flags & EXPORT_OP_SYNC_LOCKS))
+ 				fl_flags |= FL_SLEEP;
+ 			fallthrough;
+ 		case NFS4_WRITE_LT:
+@@ -7036,8 +7040,7 @@ out:
+ /*
+  * The NFSv4 spec allows a client to do a LOCKT without holding an OPEN,
+  * so we do a temporary open here just to get an open file to pass to
+- * vfs_test_lock.  (Arguably perhaps test_lock should be done with an
+- * inode operation.)
++ * vfs_test_lock.
+  */
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+@@ -7052,7 +7055,9 @@ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct
+ 							NFSD_MAY_READ));
+ 	if (err)
+ 		goto out;
++	lock->fl_file = nf->nf_file;
+ 	err = nfserrno(vfs_test_lock(nf->nf_file, lock));
++	lock->fl_file = NULL;
+ out:
+ 	fh_unlock(fhp);
+ 	nfsd_file_put(nf);
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 93efe7048a771..7c1850adec288 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -542,8 +542,10 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 			goto out_cleanup;
+ 	}
+ 	err = ovl_instantiate(dentry, inode, newdentry, hardlink);
+-	if (err)
+-		goto out_cleanup;
++	if (err) {
++		ovl_cleanup(udir, newdentry);
++		dput(newdentry);
++	}
+ out_dput:
+ 	dput(upper);
+ out_unlock:
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 5c2d806e6ae53..c830cc4ea60fb 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -33,11 +33,6 @@ int sysctl_unprivileged_userfaultfd __read_mostly;
+ 
+ static struct kmem_cache *userfaultfd_ctx_cachep __read_mostly;
+ 
+-enum userfaultfd_state {
+-	UFFD_STATE_WAIT_API,
+-	UFFD_STATE_RUNNING,
+-};
+-
+ /*
+  * Start with fault_pending_wqh and fault_wqh so they're more likely
+  * to be in the same cacheline.
+@@ -69,8 +64,6 @@ struct userfaultfd_ctx {
+ 	unsigned int flags;
+ 	/* features requested from the userspace */
+ 	unsigned int features;
+-	/* state machine */
+-	enum userfaultfd_state state;
+ 	/* released */
+ 	bool released;
+ 	/* memory mappings are changing because of non-cooperative event */
+@@ -104,6 +97,14 @@ struct userfaultfd_wake_range {
+ 	unsigned long len;
+ };
+ 
++/* internal indication that UFFD_API ioctl was successfully executed */
++#define UFFD_FEATURE_INITIALIZED		(1u << 31)
++
++static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx)
++{
++	return ctx->features & UFFD_FEATURE_INITIALIZED;
++}
++
+ static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
+ 				     int wake_flags, void *key)
+ {
+@@ -666,7 +667,6 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
+ 
+ 		refcount_set(&ctx->refcount, 1);
+ 		ctx->flags = octx->flags;
+-		ctx->state = UFFD_STATE_RUNNING;
+ 		ctx->features = octx->features;
+ 		ctx->released = false;
+ 		ctx->mmap_changing = false;
+@@ -943,38 +943,33 @@ static __poll_t userfaultfd_poll(struct file *file, poll_table *wait)
+ 
+ 	poll_wait(file, &ctx->fd_wqh, wait);
+ 
+-	switch (ctx->state) {
+-	case UFFD_STATE_WAIT_API:
++	if (!userfaultfd_is_initialized(ctx))
+ 		return EPOLLERR;
+-	case UFFD_STATE_RUNNING:
+-		/*
+-		 * poll() never guarantees that read won't block.
+-		 * userfaults can be waken before they're read().
+-		 */
+-		if (unlikely(!(file->f_flags & O_NONBLOCK)))
+-			return EPOLLERR;
+-		/*
+-		 * lockless access to see if there are pending faults
+-		 * __pollwait last action is the add_wait_queue but
+-		 * the spin_unlock would allow the waitqueue_active to
+-		 * pass above the actual list_add inside
+-		 * add_wait_queue critical section. So use a full
+-		 * memory barrier to serialize the list_add write of
+-		 * add_wait_queue() with the waitqueue_active read
+-		 * below.
+-		 */
+-		ret = 0;
+-		smp_mb();
+-		if (waitqueue_active(&ctx->fault_pending_wqh))
+-			ret = EPOLLIN;
+-		else if (waitqueue_active(&ctx->event_wqh))
+-			ret = EPOLLIN;
+ 
+-		return ret;
+-	default:
+-		WARN_ON_ONCE(1);
++	/*
++	 * poll() never guarantees that read won't block.
++	 * userfaults can be waken before they're read().
++	 */
++	if (unlikely(!(file->f_flags & O_NONBLOCK)))
+ 		return EPOLLERR;
+-	}
++	/*
++	 * lockless access to see if there are pending faults
++	 * __pollwait last action is the add_wait_queue but
++	 * the spin_unlock would allow the waitqueue_active to
++	 * pass above the actual list_add inside
++	 * add_wait_queue critical section. So use a full
++	 * memory barrier to serialize the list_add write of
++	 * add_wait_queue() with the waitqueue_active read
++	 * below.
++	 */
++	ret = 0;
++	smp_mb();
++	if (waitqueue_active(&ctx->fault_pending_wqh))
++		ret = EPOLLIN;
++	else if (waitqueue_active(&ctx->event_wqh))
++		ret = EPOLLIN;
++
++	return ret;
+ }
+ 
+ static const struct file_operations userfaultfd_fops;
+@@ -1169,7 +1164,7 @@ static ssize_t userfaultfd_read(struct file *file, char __user *buf,
+ 	int no_wait = file->f_flags & O_NONBLOCK;
+ 	struct inode *inode = file_inode(file);
+ 
+-	if (ctx->state == UFFD_STATE_WAIT_API)
++	if (!userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	for (;;) {
+@@ -1908,9 +1903,10 @@ out:
+ static inline unsigned int uffd_ctx_features(__u64 user_features)
+ {
+ 	/*
+-	 * For the current set of features the bits just coincide
++	 * For the current set of features the bits just coincide. Set
++	 * UFFD_FEATURE_INITIALIZED to mark the features as enabled.
+ 	 */
+-	return (unsigned int)user_features;
++	return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
+ }
+ 
+ /*
+@@ -1923,12 +1919,10 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ {
+ 	struct uffdio_api uffdio_api;
+ 	void __user *buf = (void __user *)arg;
++	unsigned int ctx_features;
+ 	int ret;
+ 	__u64 features;
+ 
+-	ret = -EINVAL;
+-	if (ctx->state != UFFD_STATE_WAIT_API)
+-		goto out;
+ 	ret = -EFAULT;
+ 	if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api)))
+ 		goto out;
+@@ -1952,9 +1946,13 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ 	ret = -EFAULT;
+ 	if (copy_to_user(buf, &uffdio_api, sizeof(uffdio_api)))
+ 		goto out;
+-	ctx->state = UFFD_STATE_RUNNING;
++
+ 	/* only enable the requested features for this uffd context */
+-	ctx->features = uffd_ctx_features(features);
++	ctx_features = uffd_ctx_features(features);
++	ret = -EINVAL;
++	if (cmpxchg(&ctx->features, 0, ctx_features) != 0)
++		goto err_out;
++
+ 	ret = 0;
+ out:
+ 	return ret;
+@@ -1971,7 +1969,7 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
+ 	int ret = -EINVAL;
+ 	struct userfaultfd_ctx *ctx = file->private_data;
+ 
+-	if (cmd != UFFDIO_API && ctx->state == UFFD_STATE_WAIT_API)
++	if (cmd != UFFDIO_API && !userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	switch(cmd) {
+@@ -2085,7 +2083,6 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
+ 	refcount_set(&ctx->refcount, 1);
+ 	ctx->flags = flags;
+ 	ctx->features = 0;
+-	ctx->state = UFFD_STATE_WAIT_API;
+ 	ctx->released = false;
+ 	ctx->mmap_changing = false;
+ 	ctx->mm = current->mm;
+diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
+index 47accec68cb0f..f603325c0c30d 100644
+--- a/include/crypto/public_key.h
++++ b/include/crypto/public_key.h
+@@ -38,9 +38,9 @@ extern void public_key_free(struct public_key *key);
+ struct public_key_signature {
+ 	struct asymmetric_key_id *auth_ids[2];
+ 	u8 *s;			/* Signature */
+-	u32 s_size;		/* Number of bytes in signature */
+ 	u8 *digest;
+-	u8 digest_size;		/* Number of bytes in digest */
++	u32 s_size;		/* Number of bytes in signature */
++	u32 digest_size;	/* Number of bytes in digest */
+ 	const char *pkey_algo;
+ 	const char *hash_algo;
+ 	const char *encoding;
+diff --git a/include/drm/drm_auth.h b/include/drm/drm_auth.h
+index 6bf8b2b789919..f99d3417f3042 100644
+--- a/include/drm/drm_auth.h
++++ b/include/drm/drm_auth.h
+@@ -107,6 +107,7 @@ struct drm_master {
+ };
+ 
+ struct drm_master *drm_master_get(struct drm_master *master);
++struct drm_master *drm_file_get_master(struct drm_file *file_priv);
+ void drm_master_put(struct drm_master **master);
+ bool drm_is_current_master(struct drm_file *fpriv);
+ 
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index b81b3bfb08c8d..726cfe0ff5f5c 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -226,15 +226,27 @@ struct drm_file {
+ 	/**
+ 	 * @master:
+ 	 *
+-	 * Master this node is currently associated with. Only relevant if
+-	 * drm_is_primary_client() returns true. Note that this only
+-	 * matches &drm_device.master if the master is the currently active one.
++	 * Master this node is currently associated with. Protected by struct
++	 * &drm_device.master_mutex, and serialized by @master_lookup_lock.
++	 *
++	 * Only relevant if drm_is_primary_client() returns true. Note that
++	 * this only matches &drm_device.master if the master is the currently
++	 * active one.
++	 *
++	 * When dereferencing this pointer, either hold struct
++	 * &drm_device.master_mutex for the duration of the pointer's use, or
++	 * use drm_file_get_master() if struct &drm_device.master_mutex is not
++	 * currently held and there is no other need to hold it. This prevents
++	 * @master from being freed during use.
+ 	 *
+ 	 * See also @authentication and @is_master and the :ref:`section on
+ 	 * primary nodes and authentication <drm_primary_node>`.
+ 	 */
+ 	struct drm_master *master;
+ 
++	/** @master_lookup_lock: Serializes @master. */
++	spinlock_t master_lookup_lock;
++
+ 	/** @pid: Process that opened this file. */
+ 	struct pid *pid;
+ 
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 232daaec56e44..4711b96dae0c7 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -17,8 +17,6 @@
+ #include <linux/compat.h>
+ #include <uapi/linux/ethtool.h>
+ 
+-#ifdef CONFIG_COMPAT
+-
+ struct compat_ethtool_rx_flow_spec {
+ 	u32		flow_type;
+ 	union ethtool_flow_union h_u;
+@@ -38,8 +36,6 @@ struct compat_ethtool_rxnfc {
+ 	u32				rule_locs[];
+ };
+ 
+-#endif /* CONFIG_COMPAT */
+-
+ #include <linux/rculist.h>
+ 
+ /**
+diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
+index fe848901fcc3a..3260fe7148462 100644
+--- a/include/linux/exportfs.h
++++ b/include/linux/exportfs.h
+@@ -221,6 +221,8 @@ struct export_operations {
+ #define EXPORT_OP_NOATOMIC_ATTR		(0x10) /* Filesystem cannot supply
+ 						  atomic attribute updates
+ 						*/
++#define EXPORT_OP_SYNC_LOCKS		(0x20) /* Filesystem can't do
++						  asynchronous blocking locks */
+ 	unsigned long	flags;
+ };
+ 
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index f7ca1a3870ea5..1faebe1cd0ed5 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -858,6 +858,11 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 
+ void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm);
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++	atomic_long_set(&mm->hugetlb_usage, 0);
++}
++
+ static inline void hugetlb_count_add(long l, struct mm_struct *mm)
+ {
+ 	atomic_long_add(l, &mm->hugetlb_usage);
+@@ -1042,6 +1047,10 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 	return &mm->page_table_lock;
+ }
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++}
++
+ static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m)
+ {
+ }
+diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
+index 0b8d1fdda3a11..c137396129db6 100644
+--- a/include/linux/hugetlb_cgroup.h
++++ b/include/linux/hugetlb_cgroup.h
+@@ -121,6 +121,13 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ 	css_put(&h_cg->css);
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++	if (resv_map->css)
++		css_get(resv_map->css);
++}
++
+ extern int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					struct hugetlb_cgroup **ptr);
+ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
+@@ -199,6 +206,11 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ {
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++}
++
+ static inline int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					       struct hugetlb_cgroup **ptr)
+ {
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index d0fa0b31994d0..05a65eb155f76 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -124,9 +124,9 @@
+ #define DMAR_MTRR_PHYSMASK8_REG 0x208
+ #define DMAR_MTRR_PHYSBASE9_REG 0x210
+ #define DMAR_MTRR_PHYSMASK9_REG 0x218
+-#define DMAR_VCCAP_REG		0xe00 /* Virtual command capability register */
+-#define DMAR_VCMD_REG		0xe10 /* Virtual command register */
+-#define DMAR_VCRSP_REG		0xe20 /* Virtual command response register */
++#define DMAR_VCCAP_REG		0xe30 /* Virtual command capability register */
++#define DMAR_VCMD_REG		0xe00 /* Virtual command register */
++#define DMAR_VCRSP_REG		0xe10 /* Virtual command response register */
+ 
+ #define DMAR_IQER_REG_IQEI(reg)		FIELD_GET(GENMASK_ULL(3, 0), reg)
+ #define DMAR_IQER_REG_ITESID(reg)	FIELD_GET(GENMASK_ULL(47, 32), reg)
+diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
+index a7fd2c3ccb777..d01b504ce06fe 100644
+--- a/include/linux/memory_hotplug.h
++++ b/include/linux/memory_hotplug.h
+@@ -339,8 +339,8 @@ extern void sparse_remove_section(struct mem_section *ms,
+ 		unsigned long map_offset, struct vmem_altmap *altmap);
+ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
+ 					  unsigned long pnum);
+-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages);
++extern struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages);
+ extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
+ 				      struct mhp_params *params);
+ void arch_remove_linear_mapping(u64 start, u64 size);
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index d9680b798b211..955c82b4737c5 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -167,7 +167,7 @@ void synchronize_rcu_tasks(void);
+ # define synchronize_rcu_tasks synchronize_rcu
+ # endif
+ 
+-# ifdef CONFIG_TASKS_RCU_TRACE
++# ifdef CONFIG_TASKS_TRACE_RCU
+ # define rcu_tasks_trace_qs(t)						\
+ 	do {								\
+ 		if (!likely(READ_ONCE((t)->trc_reader_checked)) &&	\
+diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
+index d1672de9ca89e..87b325aec5085 100644
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -52,17 +52,22 @@ do { \
+ } while (0)
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+-#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) \
+-	, .dep_map = { .name = #mutexname }
++#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)	\
++	.dep_map = {					\
++		.name = #mutexname,			\
++		.wait_type_inner = LD_WAIT_SLEEP,	\
++	}
+ #else
+ #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
+ #endif
+ 
+-#define __RT_MUTEX_INITIALIZER(mutexname) \
+-	{ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
+-	, .waiters = RB_ROOT_CACHED \
+-	, .owner = NULL \
+-	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)}
++#define __RT_MUTEX_INITIALIZER(mutexname)				\
++{									\
++	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock),	\
++	.waiters = RB_ROOT_CACHED,					\
++	.owner = NULL,							\
++	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)			\
++}
+ 
+ #define DEFINE_RT_MUTEX(mutexname) \
+ 	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index c8c39f22d3b16..59cd97da895b7 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -432,6 +432,7 @@ void			xprt_release_write(struct rpc_xprt *, struct rpc_task *);
+ #define XPRT_CONGESTED		(9)
+ #define XPRT_CWND_WAIT		(10)
+ #define XPRT_WRITE_SPACE	(11)
++#define XPRT_SND_IS_COOKIE	(12)
+ 
+ static inline void xprt_set_connected(struct rpc_xprt *xprt)
+ {
+diff --git a/include/linux/vt_kern.h b/include/linux/vt_kern.h
+index 0da94a6dee15e..b5ab452fca5b5 100644
+--- a/include/linux/vt_kern.h
++++ b/include/linux/vt_kern.h
+@@ -148,26 +148,26 @@ void hide_boot_cursor(bool hide);
+ 
+ /* keyboard  provided interfaces */
+ int vt_do_diacrit(unsigned int cmd, void __user *up, int eperm);
+-int vt_do_kdskbmode(int console, unsigned int arg);
+-int vt_do_kdskbmeta(int console, unsigned int arg);
++int vt_do_kdskbmode(unsigned int console, unsigned int arg);
++int vt_do_kdskbmeta(unsigned int console, unsigned int arg);
+ int vt_do_kbkeycode_ioctl(int cmd, struct kbkeycode __user *user_kbkc,
+ 			  int perm);
+ int vt_do_kdsk_ioctl(int cmd, struct kbentry __user *user_kbe, int perm,
+-		     int console);
++		     unsigned int console);
+ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm);
+-int vt_do_kdskled(int console, int cmd, unsigned long arg, int perm);
+-int vt_do_kdgkbmode(int console);
+-int vt_do_kdgkbmeta(int console);
+-void vt_reset_unicode(int console);
++int vt_do_kdskled(unsigned int console, int cmd, unsigned long arg, int perm);
++int vt_do_kdgkbmode(unsigned int console);
++int vt_do_kdgkbmeta(unsigned int console);
++void vt_reset_unicode(unsigned int console);
+ int vt_get_shift_state(void);
+-void vt_reset_keyboard(int console);
+-int vt_get_leds(int console, int flag);
+-int vt_get_kbd_mode_bit(int console, int bit);
+-void vt_set_kbd_mode_bit(int console, int bit);
+-void vt_clr_kbd_mode_bit(int console, int bit);
+-void vt_set_led_state(int console, int leds);
+-void vt_kbd_con_start(int console);
+-void vt_kbd_con_stop(int console);
++void vt_reset_keyboard(unsigned int console);
++int vt_get_leds(unsigned int console, int flag);
++int vt_get_kbd_mode_bit(unsigned int console, int bit);
++void vt_set_kbd_mode_bit(unsigned int console, int bit);
++void vt_clr_kbd_mode_bit(unsigned int console, int bit);
++void vt_set_led_state(unsigned int console, int leds);
++void vt_kbd_con_start(unsigned int console);
++void vt_kbd_con_stop(unsigned int console);
+ 
+ void vc_scrolldelta_helper(struct vc_data *c, int lines,
+ 		unsigned int rolled_over, void *_base, unsigned int size);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index db4312e44d470..c17e5557a0078 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1412,6 +1412,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 				!hci_dev_test_flag(dev, HCI_AUTO_OFF))
+ #define bredr_sc_enabled(dev)  (lmp_sc_capable(dev) && \
+ 				hci_dev_test_flag(dev, HCI_SC_ENABLED))
++#define rpa_valid(dev)         (bacmp(&dev->rpa, BDADDR_ANY) && \
++				!hci_dev_test_flag(dev, HCI_RPA_EXPIRED))
++#define adv_rpa_valid(adv)     (bacmp(&adv->random_addr, BDADDR_ANY) && \
++				!adv->rpa_expired)
+ 
+ #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \
+ 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M))
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 1b9d75aedb225..3961461d9c8bc 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -451,6 +451,7 @@ struct flow_block_offload {
+ 	struct list_head *driver_block_list;
+ 	struct netlink_ext_ack *extack;
+ 	struct Qdisc *sch;
++	struct list_head *cb_list_head;
+ };
+ 
+ enum tc_setup_type;
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index b671b1f2ce0fd..3b93509af246c 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -94,6 +94,7 @@ struct btrfs_space_info;
+ 	EM( FLUSH_DELAYED_ITEMS,	"FLUSH_DELAYED_ITEMS")		\
+ 	EM( FLUSH_DELALLOC,		"FLUSH_DELALLOC")		\
+ 	EM( FLUSH_DELALLOC_WAIT,	"FLUSH_DELALLOC_WAIT")		\
++	EM( FLUSH_DELALLOC_FULL,	"FLUSH_DELALLOC_FULL")		\
+ 	EM( FLUSH_DELAYED_REFS_NR,	"FLUSH_DELAYED_REFS_NR")	\
+ 	EM( FLUSH_DELAYED_REFS,		"FLUSH_DELAYED_REFS")	\
+ 	EM( ALLOC_CHUNK,		"ALLOC_CHUNK")			\
+diff --git a/include/uapi/linux/serial_reg.h b/include/uapi/linux/serial_reg.h
+index be07b5470f4bb..f51bc8f368134 100644
+--- a/include/uapi/linux/serial_reg.h
++++ b/include/uapi/linux/serial_reg.h
+@@ -62,6 +62,7 @@
+  * ST16C654:	 8  16  56  60		 8  16  32  56	PORT_16654
+  * TI16C750:	 1  16  32  56		xx  xx  xx  xx	PORT_16750
+  * TI16C752:	 8  16  56  60		 8  16  32  56
++ * OX16C950:	16  32 112 120		16  32  64 112	PORT_16C950
+  * Tegra:	 1   4   8  14		16   8   4   1	PORT_TEGRA
+  */
+ #define UART_FCR_R_TRIG_00	0x00
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index dadae6255d055..f2faa13534e57 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -792,7 +792,7 @@ static int dump_show(struct seq_file *seq, void *v)
+ }
+ DEFINE_SHOW_ATTRIBUTE(dump);
+ 
+-static void dma_debug_fs_init(void)
++static int __init dma_debug_fs_init(void)
+ {
+ 	struct dentry *dentry = debugfs_create_dir("dma-api", NULL);
+ 
+@@ -805,7 +805,10 @@ static void dma_debug_fs_init(void)
+ 	debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries);
+ 	debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops);
+ 	debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops);
++
++	return 0;
+ }
++core_initcall_sync(dma_debug_fs_init);
+ 
+ static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
+ {
+@@ -890,8 +893,6 @@ static int dma_debug_init(void)
+ 		spin_lock_init(&dma_entry_hash[i].lock);
+ 	}
+ 
+-	dma_debug_fs_init();
+-
+ 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
+ 	for (i = 0; i < nr_pages; ++i)
+ 		dma_debug_create_entries(GFP_KERNEL);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 44f4c2d83763f..cbba21e3a58df 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1050,6 +1050,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
+ 	mm->pmd_huge_pte = NULL;
+ #endif
+ 	mm_init_uprobes_state(mm);
++	hugetlb_count_init(mm);
+ 
+ 	if (current->mm) {
+ 		mm->flags = current->mm->flags & MMF_INIT_MASK;
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index ad0db322ed3b4..1a7e3f838077b 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1556,7 +1556,7 @@ void __sched __rt_mutex_init(struct rt_mutex *lock, const char *name,
+ 		     struct lock_class_key *key)
+ {
+ 	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+-	lockdep_init_map(&lock->dep_map, name, key, 0);
++	lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
+ 
+ 	__rt_mutex_basic_init(lock);
+ }
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index ca43239a255ad..cb5a25a8a0cc7 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -51,7 +51,8 @@ static struct kmem_cache *create_pid_cachep(unsigned int level)
+ 	mutex_lock(&pid_caches_mutex);
+ 	/* A name collision forces us to do the allocation under the mutex. */
+ 	if (!*pkc)
+-		*pkc = kmem_cache_create(name, len, 0, SLAB_HWCACHE_ALIGN, 0);
++		*pkc = kmem_cache_create(name, len, 0,
++					 SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, 0);
+ 	mutex_unlock(&pid_caches_mutex);
+ 	/* current can fail, but someone else can succeed. */
+ 	return READ_ONCE(*pkc);
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 142a58d124d95..6dad7da8f383e 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2545,6 +2545,7 @@ void console_unlock(void)
+ 	bool do_cond_resched, retry;
+ 	struct printk_info info;
+ 	struct printk_record r;
++	u64 __maybe_unused next_seq;
+ 
+ 	if (console_suspended) {
+ 		up_console_sem();
+@@ -2654,8 +2655,10 @@ skip:
+ 			cond_resched();
+ 	}
+ 
+-	console_locked = 0;
++	/* Get consistent value of the next-to-be-used sequence number. */
++	next_seq = console_seq;
+ 
++	console_locked = 0;
+ 	up_console_sem();
+ 
+ 	/*
+@@ -2664,7 +2667,7 @@ skip:
+ 	 * there's a new owner and the console_unlock() from them will do the
+ 	 * flush, no worries.
+ 	 */
+-	retry = prb_read_valid(prb, console_seq, NULL);
++	retry = prb_read_valid(prb, next_seq, NULL);
+ 	printk_safe_exit_irqrestore(flags);
+ 
+ 	if (retry && console_trylock())
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index de1dc3bb7f701..6ce104242b23d 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2982,17 +2982,17 @@ static void noinstr rcu_dynticks_task_exit(void)
+ /* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+ static void rcu_dynticks_task_trace_enter(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = true;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+ 
+ /* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+ static void rcu_dynticks_task_trace_exit(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = false;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a2403432f3abb..399c37c95392e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8536,7 +8536,6 @@ static void balance_push(struct rq *rq)
+ 	struct task_struct *push_task = rq->curr;
+ 
+ 	lockdep_assert_rq_held(rq);
+-	SCHED_WARN_ON(rq->cpu != smp_processor_id());
+ 
+ 	/*
+ 	 * Ensure the thing is persistent until balance_push_set(.on = false);
+@@ -8544,9 +8543,10 @@ static void balance_push(struct rq *rq)
+ 	rq->balance_callback = &balance_push_callback;
+ 
+ 	/*
+-	 * Only active while going offline.
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
+ 	 */
+-	if (!cpu_dying(rq->cpu))
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
+ 		return;
+ 
+ 	/*
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index b61eefe5ccf53..7b3c754821e55 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1548,7 +1548,7 @@ static int start_kthread(unsigned int cpu)
+ static int start_per_cpu_kthreads(struct trace_array *tr)
+ {
+ 	struct cpumask *current_mask = &save_cpumask;
+-	int retval;
++	int retval = 0;
+ 	int cpu;
+ 
+ 	get_online_cpus();
+@@ -1568,13 +1568,13 @@ static int start_per_cpu_kthreads(struct trace_array *tr)
+ 		retval = start_kthread(cpu);
+ 		if (retval) {
+ 			stop_per_cpu_kthreads();
+-			return retval;
++			break;
+ 		}
+ 	}
+ 
+ 	put_online_cpus();
+ 
+-	return 0;
++	return retval;
+ }
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index f148eacda55a9..542c2d03dab65 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5902,6 +5902,13 @@ static void __init wq_numa_init(void)
+ 		return;
+ 	}
+ 
++	for_each_possible_cpu(cpu) {
++		if (WARN_ON(cpu_to_node(cpu) == NUMA_NO_NODE)) {
++			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
++			return;
++		}
++	}
++
+ 	wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs();
+ 	BUG_ON(!wq_update_unbound_numa_attrs_buf);
+ 
+@@ -5919,11 +5926,6 @@ static void __init wq_numa_init(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		node = cpu_to_node(cpu);
+-		if (WARN_ON(node == NUMA_NO_NODE)) {
+-			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
+-			/* happens iff arch is bonkers, let's just proceed */
+-			return;
+-		}
+ 		cpumask_set_cpu(cpu, tbl[node]);
+ 	}
+ 
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index d500320778c76..f6d5d30d01bf2 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -4286,8 +4286,8 @@ static struct bpf_test tests[] = {
+ 		.u.insns_int = {
+ 			BPF_LD_IMM64(R0, 0),
+ 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
+-			BPF_STX_MEM(BPF_W, R10, R1, -40),
+-			BPF_LDX_MEM(BPF_W, R0, R10, -40),
++			BPF_STX_MEM(BPF_DW, R10, R1, -40),
++			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		INTERNAL,
+@@ -6659,7 +6659,14 @@ static int run_one(const struct bpf_prog *fp, struct bpf_test *test)
+ 		u64 duration;
+ 		u32 ret;
+ 
+-		if (test->test[i].data_size == 0 &&
++		/*
++		 * NOTE: Several sub-tests may be present, in which case
++		 * a zero {data_size, result} tuple indicates the end of
++		 * the sub-test array. The first test is always run,
++		 * even if both data_size and result happen to be zero.
++		 */
++		if (i > 0 &&
++		    test->test[i].data_size == 0 &&
+ 		    test->test[i].result == 0)
+ 			break;
+ 
+diff --git a/lib/test_stackinit.c b/lib/test_stackinit.c
+index f93b1e145ada7..16b1d3a3a4975 100644
+--- a/lib/test_stackinit.c
++++ b/lib/test_stackinit.c
+@@ -67,10 +67,10 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ #define INIT_STRUCT_none		/**/
+ #define INIT_STRUCT_zero		= { }
+ #define INIT_STRUCT_static_partial	= { .two = 0, }
+-#define INIT_STRUCT_static_all		= { .one = arg->one,		\
+-					    .two = arg->two,		\
+-					    .three = arg->three,	\
+-					    .four = arg->four,		\
++#define INIT_STRUCT_static_all		= { .one = 0,			\
++					    .two = 0,			\
++					    .three = 0,			\
++					    .four = 0,			\
+ 					}
+ #define INIT_STRUCT_dynamic_partial	= { .two = arg->two, }
+ #define INIT_STRUCT_dynamic_all		= { .one = arg->one,		\
+@@ -84,8 +84,7 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ 					var.one = 0;			\
+ 					var.two = 0;			\
+ 					var.three = 0;			\
+-					memset(&var.four, 0,		\
+-					       sizeof(var.four))
++					var.four = 0
+ 
+ /*
+  * @name: unique string name for the test
+@@ -210,18 +209,13 @@ struct test_small_hole {
+ 	unsigned long four;
+ };
+ 
+-/* Try to trigger unhandled padding in a structure. */
+-struct test_aligned {
+-	u32 internal1;
+-	u64 internal2;
+-} __aligned(64);
+-
++/* Trigger unhandled padding in a structure. */
+ struct test_big_hole {
+ 	u8 one;
+ 	u8 two;
+ 	u8 three;
+ 	/* 61 byte padding hole here. */
+-	struct test_aligned four;
++	u8 four __aligned(64);
+ } __aligned(64);
+ 
+ struct test_trailing_hole {
+diff --git a/mm/hmm.c b/mm/hmm.c
+index fad6be2bf0727..842e265992380 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
+ 		goto fault;
+ 
+ 	/*
++	 * Bypass a devmap pte such as a DAX page when all requested pfn
++	 * flags (pfn_req_flags) are fulfilled.
+ 	 * Since each architecture defines a struct page for the zero page, just
+ 	 * fall through and treat it like a normal page.
+ 	 */
+-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
++	if (pte_special(pte) && !pte_devmap(pte) &&
++	    !is_zero_pfn(pte_pfn(pte))) {
+ 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
+ 			pte_unmap(ptep);
+ 			return -EFAULT;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 8ea35ba6699f2..6c583ef079e3d 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4033,8 +4033,10 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
+ 	 * after this open call completes.  It is therefore safe to take a
+ 	 * new reference here without additional locking.
+ 	 */
+-	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
++	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
++		resv_map_dup_hugetlb_cgroup_uncharge_info(resv);
+ 		kref_get(&resv->refs);
++	}
+ }
+ 
+ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 470400cc75136..83811c976c0cb 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -68,7 +68,7 @@ atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
+ 
+ static bool __page_handle_poison(struct page *page)
+ {
+-	bool ret;
++	int ret;
+ 
+ 	zone_pcp_disable(page_zone(page));
+ 	ret = dissolve_free_huge_page(page);
+@@ -76,7 +76,7 @@ static bool __page_handle_poison(struct page *page)
+ 		ret = take_page_off_buddy(page);
+ 	zone_pcp_enable(page_zone(page));
+ 
+-	return ret;
++	return ret > 0;
+ }
+ 
+ static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, bool release)
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 86c3af79e874e..97698a761221e 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -708,8 +708,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn
+ 	return movable_node_enabled ? movable_zone : kernel_zone;
+ }
+ 
+-struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages)
++struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages)
+ {
+ 	if (online_type == MMOP_ONLINE_KERNEL)
+ 		return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index e32360e902744..54f6eaff18c52 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1965,17 +1965,26 @@ unsigned int mempolicy_slab_node(void)
+  */
+ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
+ {
+-	unsigned nnodes = nodes_weight(pol->nodes);
+-	unsigned target;
++	nodemask_t nodemask = pol->nodes;
++	unsigned int target, nnodes;
+ 	int i;
+ 	int nid;
++	/*
++	 * The barrier will stabilize the nodemask in a register or on
++	 * the stack so that it will stop changing under us.
++	 *
++	 * Between first_node() and next_node(), pol->nodes could be changed
++	 * by other threads. So we copy pol->nodes to a local variable.
++	 */
++	barrier();
+ 
++	nnodes = nodes_weight(nodemask);
+ 	if (!nnodes)
+ 		return numa_node_id();
+ 	target = (unsigned int)n % nnodes;
+-	nid = first_node(pol->nodes);
++	nid = first_node(nodemask);
+ 	for (i = 0; i < target; i++)
+-		nid = next_node(nid, pol->nodes);
++		nid = next_node(nid, nodemask);
+ 	return nid;
+ }
+ 
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index eeb3a9cb36bb4..7a28f7db7d286 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3445,8 +3445,10 @@ void free_unref_page_list(struct list_head *list)
+ 	/* Prepare pages for freeing */
+ 	list_for_each_entry_safe(page, next, list, lru) {
+ 		pfn = page_to_pfn(page);
+-		if (!free_unref_page_prepare(page, pfn, 0))
++		if (!free_unref_page_prepare(page, pfn, 0)) {
+ 			list_del(&page->lru);
++			continue;
++		}
+ 
+ 		/*
+ 		 * Free isolated pages directly to the allocator, see
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index eeae2f6bc5320..f1782b816c983 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2592,7 +2592,7 @@ out:
+ 			cgroup_size = max(cgroup_size, protection);
+ 
+ 			scan = lruvec_size - lruvec_size * protection /
+-				cgroup_size;
++				(cgroup_size + 1);
+ 
+ 			/*
+ 			 * Minimally target SWAP_CLUSTER_MAX pages to keep
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index f4fea28e05da6..3ec1a51a6944e 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -138,7 +138,7 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
+ 
+ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ {
+-	struct xen_9pfs_front_priv *priv = NULL;
++	struct xen_9pfs_front_priv *priv;
+ 	RING_IDX cons, prod, masked_cons, masked_prod;
+ 	unsigned long flags;
+ 	u32 size = p9_req->tc.size;
+@@ -151,7 +151,7 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ 			break;
+ 	}
+ 	read_unlock(&xen_9pfs_lock);
+-	if (!priv || priv->client != client)
++	if (list_entry_is_head(priv, &xen_9pfs_devs, list))
+ 		return -EINVAL;
+ 
+ 	num = p9_req->tc.tag % priv->num_rings;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 1c30182025645..0d0b958b7fe7e 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -40,6 +40,8 @@
+ #define ZERO_KEY "\x00\x00\x00\x00\x00\x00\x00\x00" \
+ 		 "\x00\x00\x00\x00\x00\x00\x00\x00"
+ 
++#define secs_to_jiffies(_secs) msecs_to_jiffies((_secs) * 1000)
++
+ /* Handle HCI Event packets */
+ 
+ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb,
+@@ -1171,6 +1173,12 @@ static void hci_cc_le_set_random_addr(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	bacpy(&hdev->random_addr, sent);
+ 
++	if (!bacmp(&hdev->rpa, sent)) {
++		hci_dev_clear_flag(hdev, HCI_RPA_EXPIRED);
++		queue_delayed_work(hdev->workqueue, &hdev->rpa_expired,
++				   secs_to_jiffies(hdev->rpa_timeout));
++	}
++
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -1201,24 +1209,30 @@ static void hci_cc_le_set_adv_set_random_addr(struct hci_dev *hdev,
+ {
+ 	__u8 status = *((__u8 *) skb->data);
+ 	struct hci_cp_le_set_adv_set_rand_addr *cp;
+-	struct adv_info *adv_instance;
++	struct adv_info *adv;
+ 
+ 	if (status)
+ 		return;
+ 
+ 	cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_ADV_SET_RAND_ADDR);
+-	if (!cp)
++	/* Update only in case of an adv instance, since handle 0x00 shall be
++	 * using HCI_OP_LE_SET_RANDOM_ADDR, which allows both extended and
++	 * non-extended advertising.
++	 */
++	if (!cp || !cp->handle)
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	if (!cp->handle) {
+-		/* Store in hdev for instance 0 (Set adv and Directed advs) */
+-		bacpy(&hdev->random_addr, &cp->bdaddr);
+-	} else {
+-		adv_instance = hci_find_adv_instance(hdev, cp->handle);
+-		if (adv_instance)
+-			bacpy(&adv_instance->random_addr, &cp->bdaddr);
++	adv = hci_find_adv_instance(hdev, cp->handle);
++	if (adv) {
++		bacpy(&adv->random_addr, &cp->bdaddr);
++		if (!bacmp(&hdev->rpa, &cp->bdaddr)) {
++			adv->rpa_expired = false;
++			queue_delayed_work(hdev->workqueue,
++					   &adv->rpa_expired_cb,
++					   secs_to_jiffies(hdev->rpa_timeout));
++		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+@@ -3268,11 +3282,9 @@ unlock:
+ 	hci_dev_unlock(hdev);
+ }
+ 
+-static inline void handle_cmd_cnt_and_timer(struct hci_dev *hdev,
+-					    u16 opcode, u8 ncmd)
++static inline void handle_cmd_cnt_and_timer(struct hci_dev *hdev, u8 ncmd)
+ {
+-	if (opcode != HCI_OP_NOP)
+-		cancel_delayed_work(&hdev->cmd_timer);
++	cancel_delayed_work(&hdev->cmd_timer);
+ 
+ 	if (!test_bit(HCI_RESET, &hdev->flags)) {
+ 		if (ncmd) {
+@@ -3647,7 +3659,7 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 		break;
+ 	}
+ 
+-	handle_cmd_cnt_and_timer(hdev, *opcode, ev->ncmd);
++	handle_cmd_cnt_and_timer(hdev, ev->ncmd);
+ 
+ 	hci_req_cmd_complete(hdev, *opcode, *status, req_complete,
+ 			     req_complete_skb);
+@@ -3748,7 +3760,7 @@ static void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 		break;
+ 	}
+ 
+-	handle_cmd_cnt_and_timer(hdev, *opcode, ev->ncmd);
++	handle_cmd_cnt_and_timer(hdev, ev->ncmd);
+ 
+ 	/* Indicate request completion if the command failed. Also, if
+ 	 * we're not waiting for a special event and we get a success
+@@ -4382,6 +4394,21 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 
+ 	switch (ev->status) {
+ 	case 0x00:
++		/* The synchronous connection complete event should only be
++		 * sent once per new connection. Receiving a successful
++		 * complete event when the connection status is already
++		 * BT_CONNECTED means that the device is misbehaving and sent
++		 * multiple complete event packets for the same new connection.
++		 *
++		 * Registering the device more than once can corrupt kernel
++		 * memory, hence upon detecting this invalid event, we report
++		 * an error and ignore the packet.
++		 */
++		if (conn->state == BT_CONNECTED) {
++			bt_dev_err(hdev, "Ignoring connect complete event for existing connection");
++			goto unlock;
++		}
++
+ 		conn->handle = __le16_to_cpu(ev->handle);
+ 		conn->state  = BT_CONNECTED;
+ 		conn->type   = ev->link_type;
+@@ -5104,9 +5131,64 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev,
+ }
+ #endif
+ 
++static void le_conn_update_addr(struct hci_conn *conn, bdaddr_t *bdaddr,
++				u8 bdaddr_type, bdaddr_t *local_rpa)
++{
++	if (conn->out) {
++		conn->dst_type = bdaddr_type;
++		conn->resp_addr_type = bdaddr_type;
++		bacpy(&conn->resp_addr, bdaddr);
++
++		/* If the controller has set a Local RPA, then it must be
++		 * used instead of hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, local_rpa);
++		} else if (hci_dev_test_flag(conn->hdev, HCI_PRIVACY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, &conn->hdev->rpa);
++		} else {
++			hci_copy_identity_address(conn->hdev, &conn->init_addr,
++						  &conn->init_addr_type);
++		}
++	} else {
++		conn->resp_addr_type = conn->hdev->adv_addr_type;
++		/* If the controller has set a Local RPA, then it must be
++		 * used instead of hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->resp_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->resp_addr, local_rpa);
++		} else if (conn->hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
++			/* In case of ext adv, resp_addr will be updated in
++			 * Adv Terminated event.
++			 */
++			if (!ext_adv_capable(conn->hdev))
++				bacpy(&conn->resp_addr,
++				      &conn->hdev->random_addr);
++		} else {
++			bacpy(&conn->resp_addr, &conn->hdev->bdaddr);
++		}
++
++		conn->init_addr_type = bdaddr_type;
++		bacpy(&conn->init_addr, bdaddr);
++
++		/* For incoming connections, set the default minimum
++		 * and maximum connection interval. They will be used
++		 * to check if the parameters are in range and if not
++		 * trigger the connection update procedure.
++		 */
++		conn->le_conn_min_interval = conn->hdev->le_conn_min_interval;
++		conn->le_conn_max_interval = conn->hdev->le_conn_max_interval;
++	}
++}
++
+ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+-			bdaddr_t *bdaddr, u8 bdaddr_type, u8 role, u16 handle,
+-			u16 interval, u16 latency, u16 supervision_timeout)
++				 bdaddr_t *bdaddr, u8 bdaddr_type,
++				 bdaddr_t *local_rpa, u8 role, u16 handle,
++				 u16 interval, u16 latency,
++				 u16 supervision_timeout)
+ {
+ 	struct hci_conn_params *params;
+ 	struct hci_conn *conn;
+@@ -5154,32 +5236,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		cancel_delayed_work(&conn->le_conn_timeout);
+ 	}
+ 
+-	if (!conn->out) {
+-		/* Set the responder (our side) address type based on
+-		 * the advertising address type.
+-		 */
+-		conn->resp_addr_type = hdev->adv_addr_type;
+-		if (hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
+-			/* In case of ext adv, resp_addr will be updated in
+-			 * Adv Terminated event.
+-			 */
+-			if (!ext_adv_capable(hdev))
+-				bacpy(&conn->resp_addr, &hdev->random_addr);
+-		} else {
+-			bacpy(&conn->resp_addr, &hdev->bdaddr);
+-		}
+-
+-		conn->init_addr_type = bdaddr_type;
+-		bacpy(&conn->init_addr, bdaddr);
+-
+-		/* For incoming connections, set the default minimum
+-		 * and maximum connection interval. They will be used
+-		 * to check if the parameters are in range and if not
+-		 * trigger the connection update procedure.
+-		 */
+-		conn->le_conn_min_interval = hdev->le_conn_min_interval;
+-		conn->le_conn_max_interval = hdev->le_conn_max_interval;
+-	}
++	le_conn_update_addr(conn, bdaddr, bdaddr_type, local_rpa);
+ 
+ 	/* Lookup the identity address from the stored connection
+ 	 * address and address type.
+@@ -5290,7 +5347,7 @@ static void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     NULL, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5304,7 +5361,7 @@ static void hci_le_enh_conn_complete_evt(struct hci_dev *hdev,
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     &ev->local_rpa, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5340,7 +5397,8 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (conn) {
+ 		struct adv_info *adv_instance;
+ 
+-		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM)
++		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM ||
++		    bacmp(&conn->resp_addr, BDADDR_ANY))
+ 			return;
+ 
+ 		if (!ev->handle) {
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 1d14adc023e96..f15626607b2d6 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -2072,8 +2072,6 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 	 * current RPA has expired then generate a new one.
+ 	 */
+ 	if (use_rpa) {
+-		int to;
+-
+ 		/* If Controller supports LL Privacy use own address type is
+ 		 * 0x03
+ 		 */
+@@ -2084,14 +2082,10 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 			*own_addr_type = ADDR_LE_DEV_RANDOM;
+ 
+ 		if (adv_instance) {
+-			if (!adv_instance->rpa_expired &&
+-			    !bacmp(&adv_instance->random_addr, &hdev->rpa))
++			if (adv_rpa_valid(adv_instance))
+ 				return 0;
+-
+-			adv_instance->rpa_expired = false;
+ 		} else {
+-			if (!hci_dev_test_and_clear_flag(hdev, HCI_RPA_EXPIRED) &&
+-			    !bacmp(&hdev->random_addr, &hdev->rpa))
++			if (rpa_valid(hdev))
+ 				return 0;
+ 		}
+ 
+@@ -2103,14 +2097,6 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 
+ 		bacpy(rand_addr, &hdev->rpa);
+ 
+-		to = msecs_to_jiffies(hdev->rpa_timeout * 1000);
+-		if (adv_instance)
+-			queue_delayed_work(hdev->workqueue,
+-					   &adv_instance->rpa_expired_cb, to);
+-		else
+-			queue_delayed_work(hdev->workqueue,
+-					   &hdev->rpa_expired, to);
+-
+ 		return 0;
+ 	}
+ 
+@@ -2153,6 +2139,30 @@ void __hci_req_clear_ext_adv_sets(struct hci_request *req)
+ 	hci_req_add(req, HCI_OP_LE_CLEAR_ADV_SETS, 0, NULL);
+ }
+ 
++static void set_random_addr(struct hci_request *req, bdaddr_t *rpa)
++{
++	struct hci_dev *hdev = req->hdev;
++
++	/* If we're advertising or initiating an LE connection we can't
++	 * go ahead and change the random address at this time. This is
++	 * because the eventual initiator address used for the
++	 * subsequently created connection will be undefined (some
++	 * controllers use the new address and others the one we had
++	 * when the operation started).
++	 *
++	 * In this kind of scenario skip the update and let the random
++	 * address be updated at the next cycle.
++	 */
++	if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
++	    hci_lookup_le_connect(hdev)) {
++		bt_dev_dbg(hdev, "Deferring random address update");
++		hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
++		return;
++	}
++
++	hci_req_add(req, HCI_OP_LE_SET_RANDOM_ADDR, 6, rpa);
++}
++
+ int __hci_req_setup_ext_adv_instance(struct hci_request *req, u8 instance)
+ {
+ 	struct hci_cp_le_set_ext_adv_params cp;
+@@ -2255,6 +2265,13 @@ int __hci_req_setup_ext_adv_instance(struct hci_request *req, u8 instance)
+ 		} else {
+ 			if (!bacmp(&random_addr, &hdev->random_addr))
+ 				return 0;
++			/* Instance 0x00 doesn't have an adv_info; instead it
++			 * uses hdev->random_addr to track its address, so
++			 * whenever it needs to be updated this also sets the
++			 * random address, since hdev->random_addr is shared
++			 * with the scan state machine.
++			 */
++			set_random_addr(req, &random_addr);
+ 		}
+ 
+ 		memset(&cp, 0, sizeof(cp));
+@@ -2512,30 +2529,6 @@ void hci_req_clear_adv_instance(struct hci_dev *hdev, struct sock *sk,
+ 						false);
+ }
+ 
+-static void set_random_addr(struct hci_request *req, bdaddr_t *rpa)
+-{
+-	struct hci_dev *hdev = req->hdev;
+-
+-	/* If we're advertising or initiating an LE connection we can't
+-	 * go ahead and change the random address at this time. This is
+-	 * because the eventual initiator address used for the
+-	 * subsequently created connection will be undefined (some
+-	 * controllers use the new address and others the one we had
+-	 * when the operation started).
+-	 *
+-	 * In this kind of scenario skip the update and let the random
+-	 * address be updated at the next cycle.
+-	 */
+-	if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
+-	    hci_lookup_le_connect(hdev)) {
+-		bt_dev_dbg(hdev, "Deferring random address update");
+-		hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
+-		return;
+-	}
+-
+-	hci_req_add(req, HCI_OP_LE_SET_RANDOM_ADDR, 6, rpa);
+-}
+-
+ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 			      bool use_rpa, u8 *own_addr_type)
+ {
+@@ -2547,8 +2540,6 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 	 * the current RPA in use, then generate a new one.
+ 	 */
+ 	if (use_rpa) {
+-		int to;
+-
+ 		/* If Controller supports LL Privacy use own address type is
+ 		 * 0x03
+ 		 */
+@@ -2558,8 +2549,7 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 		else
+ 			*own_addr_type = ADDR_LE_DEV_RANDOM;
+ 
+-		if (!hci_dev_test_and_clear_flag(hdev, HCI_RPA_EXPIRED) &&
+-		    !bacmp(&hdev->random_addr, &hdev->rpa))
++		if (rpa_valid(hdev))
+ 			return 0;
+ 
+ 		err = smp_generate_rpa(hdev, hdev->irk, &hdev->rpa);
+@@ -2570,9 +2560,6 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 
+ 		set_random_addr(req, &hdev->rpa);
+ 
+-		to = msecs_to_jiffies(hdev->rpa_timeout * 1000);
+-		queue_delayed_work(hdev->workqueue, &hdev->rpa_expired, to);
+-
+ 		return 0;
+ 	}
+ 
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index b5ab842c7c4a8..110cfd6aa2b77 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -48,6 +48,8 @@ struct sco_conn {
+ 	spinlock_t	lock;
+ 	struct sock	*sk;
+ 
++	struct delayed_work	timeout_work;
++
+ 	unsigned int    mtu;
+ };
+ 
+@@ -74,9 +76,20 @@ struct sco_pinfo {
+ #define SCO_CONN_TIMEOUT	(HZ * 40)
+ #define SCO_DISCONN_TIMEOUT	(HZ * 2)
+ 
+-static void sco_sock_timeout(struct timer_list *t)
++static void sco_sock_timeout(struct work_struct *work)
+ {
+-	struct sock *sk = from_timer(sk, t, sk_timer);
++	struct sco_conn *conn = container_of(work, struct sco_conn,
++					     timeout_work.work);
++	struct sock *sk;
++
++	sco_conn_lock(conn);
++	sk = conn->sk;
++	if (sk)
++		sock_hold(sk);
++	sco_conn_unlock(conn);
++
++	if (!sk)
++		return;
+ 
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+ 
+@@ -90,14 +103,21 @@ static void sco_sock_timeout(struct timer_list *t)
+ 
+ static void sco_sock_set_timer(struct sock *sk, long timeout)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d timeout %ld", sk, sk->sk_state, timeout);
+-	sk_reset_timer(sk, &sk->sk_timer, jiffies + timeout);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
++	schedule_delayed_work(&sco_pi(sk)->conn->timeout_work, timeout);
+ }
+ 
+ static void sco_sock_clear_timer(struct sock *sk)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+-	sk_stop_timer(sk, &sk->sk_timer);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
+ }
+ 
+ /* ---- SCO connections ---- */
+@@ -177,6 +197,9 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+ 		sock_put(sk);
++
++		/* Ensure no more work items will run before freeing conn. */
++		cancel_delayed_work_sync(&conn->timeout_work);
+ 	}
+ 
+ 	hcon->sco_data = NULL;
+@@ -191,6 +214,8 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	sco_pi(sk)->conn = conn;
+ 	conn->sk = sk;
+ 
++	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
++
+ 	if (parent)
+ 		bt_accept_enqueue(parent, sk, true);
+ }
+@@ -210,44 +235,32 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	return err;
+ }
+ 
+-static int sco_connect(struct sock *sk)
++static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ {
+ 	struct sco_conn *conn;
+ 	struct hci_conn *hcon;
+-	struct hci_dev  *hdev;
+ 	int err, type;
+ 
+ 	BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
+ 
+-	hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
+-	if (!hdev)
+-		return -EHOSTUNREACH;
+-
+-	hci_dev_lock(hdev);
+-
+ 	if (lmp_esco_capable(hdev) && !disable_esco)
+ 		type = ESCO_LINK;
+ 	else
+ 		type = SCO_LINK;
+ 
+ 	if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
+-	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+-		err = -EOPNOTSUPP;
+-		goto done;
+-	}
++	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
++		return -EOPNOTSUPP;
+ 
+ 	hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
+ 			       sco_pi(sk)->setting);
+-	if (IS_ERR(hcon)) {
+-		err = PTR_ERR(hcon);
+-		goto done;
+-	}
++	if (IS_ERR(hcon))
++		return PTR_ERR(hcon);
+ 
+ 	conn = sco_conn_add(hcon);
+ 	if (!conn) {
+ 		hci_conn_drop(hcon);
+-		err = -ENOMEM;
+-		goto done;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Update source addr of the socket */
+@@ -255,7 +268,7 @@ static int sco_connect(struct sock *sk)
+ 
+ 	err = sco_chan_add(conn, sk, NULL);
+ 	if (err)
+-		goto done;
++		return err;
+ 
+ 	if (hcon->state == BT_CONNECTED) {
+ 		sco_sock_clear_timer(sk);
+@@ -265,9 +278,6 @@ static int sco_connect(struct sock *sk)
+ 		sco_sock_set_timer(sk, sk->sk_sndtimeo);
+ 	}
+ 
+-done:
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
+ 	return err;
+ }
+ 
+@@ -496,8 +506,6 @@ static struct sock *sco_sock_alloc(struct net *net, struct socket *sock,
+ 
+ 	sco_pi(sk)->setting = BT_VOICE_CVSD_16BIT;
+ 
+-	timer_setup(&sk->sk_timer, sco_sock_timeout, 0);
+-
+ 	bt_sock_link(&sco_sk_list, sk);
+ 	return sk;
+ }
+@@ -562,6 +570,7 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ {
+ 	struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
+ 	struct sock *sk = sock->sk;
++	struct hci_dev  *hdev;
+ 	int err;
+ 
+ 	BT_DBG("sk %p", sk);
+@@ -576,12 +585,19 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ 	if (sk->sk_type != SOCK_SEQPACKET)
+ 		return -EINVAL;
+ 
++	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
++	if (!hdev)
++		return -EHOSTUNREACH;
++	hci_dev_lock(hdev);
++
+ 	lock_sock(sk);
+ 
+ 	/* Set destination address and psm */
+ 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+ 
+-	err = sco_connect(sk);
++	err = sco_connect(hdev, sk);
++	hci_dev_unlock(hdev);
++	hci_dev_put(hdev);
+ 	if (err)
+ 		goto done;
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 4b2415d34873a..bac0184cf3de7 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1056,8 +1056,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v4addrs, &iph->saddr,
+-			       sizeof(key_addrs->v4addrs));
++			memcpy(&key_addrs->v4addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v4addrs.src));
++			memcpy(&key_addrs->v4addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v4addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ 		}
+ 
+@@ -1101,8 +1103,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v6addrs, &iph->saddr,
+-			       sizeof(key_addrs->v6addrs));
++			memcpy(&key_addrs->v6addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v6addrs.src));
++			memcpy(&key_addrs->v6addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v6addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ 		}
+ 
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index 715b67f6c62f3..e3f0d59068117 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -321,6 +321,7 @@ EXPORT_SYMBOL(flow_block_cb_setup_simple);
+ static DEFINE_MUTEX(flow_indr_block_lock);
+ static LIST_HEAD(flow_block_indr_list);
+ static LIST_HEAD(flow_block_indr_dev_list);
++static LIST_HEAD(flow_indir_dev_list);
+ 
+ struct flow_indr_dev {
+ 	struct list_head		list;
+@@ -346,6 +347,33 @@ static struct flow_indr_dev *flow_indr_dev_alloc(flow_indr_block_bind_cb_t *cb,
+ 	return indr_dev;
+ }
+ 
++struct flow_indir_dev_info {
++	void *data;
++	struct net_device *dev;
++	struct Qdisc *sch;
++	enum tc_setup_type type;
++	void (*cleanup)(struct flow_block_cb *block_cb);
++	struct list_head list;
++	enum flow_block_command command;
++	enum flow_block_binder_type binder_type;
++	struct list_head *cb_list;
++};
++
++static void existing_qdiscs_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
++{
++	struct flow_block_offload bo;
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		memset(&bo, 0, sizeof(bo));
++		bo.command = cur->command;
++		bo.binder_type = cur->binder_type;
++		INIT_LIST_HEAD(&bo.cb_list);
++		cb(cur->dev, cur->sch, cb_priv, cur->type, &bo, cur->data, cur->cleanup);
++		list_splice(&bo.cb_list, cur->cb_list);
++	}
++}
++
+ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ {
+ 	struct flow_indr_dev *indr_dev;
+@@ -367,6 +395,7 @@ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ 	}
+ 
+ 	list_add(&indr_dev->list, &flow_block_indr_dev_list);
++	existing_qdiscs_register(cb, cb_priv);
+ 	mutex_unlock(&flow_indr_block_lock);
+ 
+ 	return 0;
+@@ -463,7 +492,59 @@ out:
+ }
+ EXPORT_SYMBOL(flow_indr_block_cb_alloc);
+ 
+-int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
++static struct flow_indir_dev_info *find_indir_dev(void *data)
++{
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		if (cur->data == data)
++			return cur;
++	}
++	return NULL;
++}
++
++static int indir_dev_add(void *data, struct net_device *dev, struct Qdisc *sch,
++			 enum tc_setup_type type, void (*cleanup)(struct flow_block_cb *block_cb),
++			 struct flow_block_offload *bo)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (info)
++		return -EEXIST;
++
++	info = kzalloc(sizeof(*info), GFP_KERNEL);
++	if (!info)
++		return -ENOMEM;
++
++	info->data = data;
++	info->dev = dev;
++	info->sch = sch;
++	info->type = type;
++	info->cleanup = cleanup;
++	info->command = bo->command;
++	info->binder_type = bo->binder_type;
++	info->cb_list = bo->cb_list_head;
++
++	list_add(&info->list, &flow_indir_dev_list);
++	return 0;
++}
++
++static int indir_dev_remove(void *data)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (!info)
++		return -ENOENT;
++
++	list_del(&info->list);
++
++	kfree(info);
++	return 0;
++}
++
++int flow_indr_dev_setup_offload(struct net_device *dev,	struct Qdisc *sch,
+ 				enum tc_setup_type type, void *data,
+ 				struct flow_block_offload *bo,
+ 				void (*cleanup)(struct flow_block_cb *block_cb))
+@@ -471,6 +552,12 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ 	struct flow_indr_dev *this;
+ 
+ 	mutex_lock(&flow_indr_block_lock);
++
++	if (bo->command == FLOW_BLOCK_BIND)
++		indir_dev_add(data, dev, sch, type, cleanup, bo);
++	else if (bo->command == FLOW_BLOCK_UNBIND)
++		indir_dev_remove(data);
++
+ 	list_for_each_entry(this, &flow_block_indr_dev_list, list)
+ 		this->cb(dev, sch, this->cb_priv, type, bo, data, cleanup);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index baa5d10043cb0..6134b180f59f8 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -7,6 +7,7 @@
+  * the information ethtool needs.
+  */
+ 
++#include <linux/compat.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/capability.h>
+@@ -807,6 +808,120 @@ out:
+ 	return ret;
+ }
+ 
++static noinline_for_stack int
++ethtool_rxnfc_copy_from_compat(struct ethtool_rxnfc *rxnfc,
++			       const struct compat_ethtool_rxnfc __user *useraddr,
++			       size_t size)
++{
++	struct compat_ethtool_rxnfc crxnfc = {};
++
++	/* We expect there to be holes between fs.m_ext and
++	 * fs.ring_cookie and at the end of fs, but nowhere else.
++	 * On non-x86, no conversion should be needed.
++	 */
++	BUILD_BUG_ON(!IS_ENABLED(CONFIG_X86_64) &&
++		     sizeof(struct compat_ethtool_rxnfc) !=
++		     sizeof(struct ethtool_rxnfc));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
++		     sizeof(useraddr->fs.m_ext) !=
++		     offsetof(struct ethtool_rxnfc, fs.m_ext) +
++		     sizeof(rxnfc->fs.m_ext));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.location) -
++		     offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
++		     offsetof(struct ethtool_rxnfc, fs.location) -
++		     offsetof(struct ethtool_rxnfc, fs.ring_cookie));
++
++	if (copy_from_user(&crxnfc, useraddr, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	*rxnfc = (struct ethtool_rxnfc) {
++		.cmd		= crxnfc.cmd,
++		.flow_type	= crxnfc.flow_type,
++		.data		= crxnfc.data,
++		.fs		= {
++			.flow_type	= crxnfc.fs.flow_type,
++			.h_u		= crxnfc.fs.h_u,
++			.h_ext		= crxnfc.fs.h_ext,
++			.m_u		= crxnfc.fs.m_u,
++			.m_ext		= crxnfc.fs.m_ext,
++			.ring_cookie	= crxnfc.fs.ring_cookie,
++			.location	= crxnfc.fs.location,
++		},
++		.rule_cnt	= crxnfc.rule_cnt,
++	};
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_from_user(struct ethtool_rxnfc *rxnfc,
++					const void __user *useraddr,
++					size_t size)
++{
++	if (compat_need_64bit_alignment_fixup())
++		return ethtool_rxnfc_copy_from_compat(rxnfc, useraddr, size);
++
++	if (copy_from_user(rxnfc, useraddr, size))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_compat(void __user *useraddr,
++					const struct ethtool_rxnfc *rxnfc,
++					size_t size, const u32 *rule_buf)
++{
++	struct compat_ethtool_rxnfc crxnfc;
++
++	memset(&crxnfc, 0, sizeof(crxnfc));
++	crxnfc = (struct compat_ethtool_rxnfc) {
++		.cmd		= rxnfc->cmd,
++		.flow_type	= rxnfc->flow_type,
++		.data		= rxnfc->data,
++		.fs		= {
++			.flow_type	= rxnfc->fs.flow_type,
++			.h_u		= rxnfc->fs.h_u,
++			.h_ext		= rxnfc->fs.h_ext,
++			.m_u		= rxnfc->fs.m_u,
++			.m_ext		= rxnfc->fs.m_ext,
++			.ring_cookie	= rxnfc->fs.ring_cookie,
++			.location	= rxnfc->fs.location,
++		},
++		.rule_cnt	= rxnfc->rule_cnt,
++	};
++
++	if (copy_to_user(useraddr, &crxnfc, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_user(void __user *useraddr,
++				      const struct ethtool_rxnfc *rxnfc,
++				      size_t size, const u32 *rule_buf)
++{
++	int ret;
++
++	if (compat_need_64bit_alignment_fixup()) {
++		ret = ethtool_rxnfc_copy_to_compat(useraddr, rxnfc, size,
++						   rule_buf);
++		useraddr += offsetof(struct compat_ethtool_rxnfc, rule_locs);
++	} else {
++		ret = copy_to_user(useraddr, rxnfc, size);
++		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
++	}
++
++	if (ret)
++		return -EFAULT;
++
++	if (rule_buf) {
++		if (copy_to_user(useraddr, rule_buf,
++				 rxnfc->rule_cnt * sizeof(u32)))
++			return -EFAULT;
++	}
++
++	return 0;
++}
++
+ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 						u32 cmd, void __user *useraddr)
+ {
+@@ -825,7 +940,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	rc = dev->ethtool_ops->set_rxnfc(dev, &info);
+@@ -833,7 +948,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		return rc;
+ 
+ 	if (cmd == ETHTOOL_SRXCLSRLINS &&
+-	    copy_to_user(useraddr, &info, info_size))
++	    ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, NULL))
+ 		return -EFAULT;
+ 
+ 	return 0;
+@@ -859,7 +974,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	/* If FLOW_RSS was requested then user-space must be using the
+@@ -867,7 +982,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	 */
+ 	if (cmd == ETHTOOL_GRXFH && info.flow_type & FLOW_RSS) {
+ 		info_size = sizeof(info);
+-		if (copy_from_user(&info, useraddr, info_size))
++		if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 			return -EFAULT;
+ 		/* Since malicious users may modify the original data,
+ 		 * we need to check whether FLOW_RSS is still requested.
+@@ -893,18 +1008,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_out;
+ 
+-	ret = -EFAULT;
+-	if (copy_to_user(useraddr, &info, info_size))
+-		goto err_out;
+-
+-	if (rule_buf) {
+-		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
+-		if (copy_to_user(useraddr, rule_buf,
+-				 info.rule_cnt * sizeof(u32)))
+-			goto err_out;
+-	}
+-	ret = 0;
+-
++	ret = ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, rule_buf);
+ err_out:
+ 	kfree(rule_buf);
+ 
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 8d8a8da3ae7e0..a202dcec0dc27 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -446,8 +446,9 @@ static void ip_copy_addrs(struct iphdr *iph, const struct flowi4 *fl4)
+ {
+ 	BUILD_BUG_ON(offsetof(typeof(*fl4), daddr) !=
+ 		     offsetof(typeof(*fl4), saddr) + sizeof(fl4->saddr));
+-	memcpy(&iph->saddr, &fl4->saddr,
+-	       sizeof(fl4->saddr) + sizeof(fl4->daddr));
++
++	iph->saddr = fl4->saddr;
++	iph->daddr = fl4->daddr;
+ }
+ 
+ /* Note: skb->sk can be different from sk, in case of tunnels */
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 25fa4c01a17f6..f1e90fc1cd187 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -379,8 +379,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ 		return NULL;
+ 	}
+ 
+-	if (syn_data &&
+-	    tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
++	if (tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
+ 		goto fastopen;
+ 
+ 	if (foc->len == 0) {
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 1e5e9fc455230..cd96cd337aa89 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2001,9 +2001,16 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		netdev_set_default_ethtool_ops(ndev, &ieee80211_ethtool_ops);
+ 
+-		/* MTU range: 256 - 2304 */
++		/* MTU range is normally 256 - 2304, where the upper limit is
++		 * the maximum MSDU size. Monitor interfaces send and receive
++		 * MPDU and A-MSDU frames which may be much larger so we do
++		 * not impose an upper limit in that case.
++		 */
+ 		ndev->min_mtu = 256;
+-		ndev->max_mtu = local->hw.max_mtu;
++		if (type == NL80211_IFTYPE_MONITOR)
++			ndev->max_mtu = 0;
++		else
++			ndev->max_mtu = local->hw.max_mtu;
+ 
+ 		ret = cfg80211_register_netdevice(ndev);
+ 		if (ret) {
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index f92006cec94c4..cbd9f59098b74 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -1097,6 +1097,7 @@ static void nf_flow_table_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &flowtable->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index b58d73a965232..9656c16462222 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -353,6 +353,7 @@ static void nft_flow_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &basechain->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 639c337c885b1..272bcdb1392df 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -683,14 +683,12 @@ static int nfnl_compat_get_rcu(struct sk_buff *skb,
+ 		goto out_put;
+ 	}
+ 
+-	ret = netlink_unicast(info->sk, skb2, NETLINK_CB(skb).portid,
+-			      MSG_DONTWAIT);
+-	if (ret > 0)
+-		ret = 0;
++	ret = nfnetlink_unicast(skb2, info->net, NETLINK_CB(skb).portid);
+ out_put:
+ 	rcu_read_lock();
+ 	module_put(THIS_MODULE);
+-	return ret == -EAGAIN ? -ENOBUFS : ret;
++
++	return ret;
+ }
+ 
+ static const struct nla_policy nfnl_compat_policy_get[NFTA_COMPAT_MAX+1] = {
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 000bb3da4f77f..894e6b8f1a868 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -144,8 +144,8 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		return -ENOMEM;
+ 	doi_def->map.std = kzalloc(sizeof(*doi_def->map.std), GFP_KERNEL);
+ 	if (doi_def->map.std == NULL) {
+-		ret_val = -ENOMEM;
+-		goto add_std_failure;
++		kfree(doi_def);
++		return -ENOMEM;
+ 	}
+ 	doi_def->type = CIPSO_V4_MAP_TRANS;
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 380f95aacdec9..24b7cf447bc55 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2545,13 +2545,15 @@ int nlmsg_notify(struct sock *sk, struct sk_buff *skb, u32 portid,
+ 		/* errors reported via destination sk->sk_err, but propagate
+ 		 * delivery errors if NETLINK_BROADCAST_ERROR flag is set */
+ 		err = nlmsg_multicast(sk, skb, exclude_portid, group, flags);
++		if (err == -ESRCH)
++			err = 0;
+ 	}
+ 
+ 	if (report) {
+ 		int err2;
+ 
+ 		err2 = nlmsg_unicast(sk, skb, portid);
+-		if (!err || err == -ESRCH)
++		if (!err)
+ 			err = err2;
+ 	}
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index e3e79e9bd7067..9b276d14be4c4 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -634,6 +634,7 @@ static void tcf_block_offload_init(struct flow_block_offload *bo,
+ 	bo->block_shared = shared;
+ 	bo->extack = extack;
+ 	bo->sch = sch;
++	bo->cb_list_head = &flow_block->cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 9c79374457a00..1ab2fc933a214 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1513,7 +1513,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 	taprio_set_picos_per_byte(dev, q);
+ 
+ 	if (mqprio) {
+-		netdev_set_num_tc(dev, mqprio->num_tc);
++		err = netdev_set_num_tc(dev, mqprio->num_tc);
++		if (err)
++			goto free_sched;
+ 		for (i = 0; i < mqprio->num_tc; i++)
+ 			netdev_set_tc_queue(dev, i,
+ 					    mqprio->count[i],
+diff --git a/net/socket.c b/net/socket.c
+index 8808b3617dac9..c5b6f5c5cad98 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3154,128 +3154,6 @@ static int compat_dev_ifconf(struct net *net, struct compat_ifconf __user *uifc3
+ 	return 0;
+ }
+ 
+-static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+-{
+-	struct compat_ethtool_rxnfc __user *compat_rxnfc;
+-	bool convert_in = false, convert_out = false;
+-	size_t buf_size = 0;
+-	struct ethtool_rxnfc __user *rxnfc = NULL;
+-	struct ifreq ifr;
+-	u32 rule_cnt = 0, actual_rule_cnt;
+-	u32 ethcmd;
+-	u32 data;
+-	int ret;
+-
+-	if (get_user(data, &ifr32->ifr_ifru.ifru_data))
+-		return -EFAULT;
+-
+-	compat_rxnfc = compat_ptr(data);
+-
+-	if (get_user(ethcmd, &compat_rxnfc->cmd))
+-		return -EFAULT;
+-
+-	/* Most ethtool structures are defined without padding.
+-	 * Unfortunately struct ethtool_rxnfc is an exception.
+-	 */
+-	switch (ethcmd) {
+-	default:
+-		break;
+-	case ETHTOOL_GRXCLSRLALL:
+-		/* Buffer size is variable */
+-		if (get_user(rule_cnt, &compat_rxnfc->rule_cnt))
+-			return -EFAULT;
+-		if (rule_cnt > KMALLOC_MAX_SIZE / sizeof(u32))
+-			return -ENOMEM;
+-		buf_size += rule_cnt * sizeof(u32);
+-		fallthrough;
+-	case ETHTOOL_GRXRINGS:
+-	case ETHTOOL_GRXCLSRLCNT:
+-	case ETHTOOL_GRXCLSRULE:
+-	case ETHTOOL_SRXCLSRLINS:
+-		convert_out = true;
+-		fallthrough;
+-	case ETHTOOL_SRXCLSRLDEL:
+-		buf_size += sizeof(struct ethtool_rxnfc);
+-		convert_in = true;
+-		rxnfc = compat_alloc_user_space(buf_size);
+-		break;
+-	}
+-
+-	if (copy_from_user(&ifr.ifr_name, &ifr32->ifr_name, IFNAMSIZ))
+-		return -EFAULT;
+-
+-	ifr.ifr_data = convert_in ? rxnfc : (void __user *)compat_rxnfc;
+-
+-	if (convert_in) {
+-		/* We expect there to be holes between fs.m_ext and
+-		 * fs.ring_cookie and at the end of fs, but nowhere else.
+-		 */
+-		BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(compat_rxnfc->fs.m_ext) !=
+-			     offsetof(struct ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(rxnfc->fs.m_ext));
+-		BUILD_BUG_ON(
+-			offsetof(struct compat_ethtool_rxnfc, fs.location) -
+-			offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
+-			offsetof(struct ethtool_rxnfc, fs.location) -
+-			offsetof(struct ethtool_rxnfc, fs.ring_cookie));
+-
+-		if (copy_in_user(rxnfc, compat_rxnfc,
+-				 (void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (void __user *)rxnfc) ||
+-		    copy_in_user(&rxnfc->fs.ring_cookie,
+-				 &compat_rxnfc->fs.ring_cookie,
+-				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie))
+-			return -EFAULT;
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			if (put_user(rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-		} else if (copy_in_user(&rxnfc->rule_cnt,
+-					&compat_rxnfc->rule_cnt,
+-					sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-	}
+-
+-	ret = dev_ioctl(net, SIOCETHTOOL, &ifr, NULL);
+-	if (ret)
+-		return ret;
+-
+-	if (convert_out) {
+-		if (copy_in_user(compat_rxnfc, rxnfc,
+-				 (const void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (const void __user *)rxnfc) ||
+-		    copy_in_user(&compat_rxnfc->fs.ring_cookie,
+-				 &rxnfc->fs.ring_cookie,
+-				 (const void __user *)(&rxnfc->fs.location + 1) -
+-				 (const void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&compat_rxnfc->rule_cnt, &rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			/* As an optimisation, we only copy the actual
+-			 * number of rules that the underlying
+-			 * function returned.  Since Mallory might
+-			 * change the rule count in user memory, we
+-			 * check that it is less than the rule count
+-			 * originally given (as the user buffer size),
+-			 * which has been range-checked.
+-			 */
+-			if (get_user(actual_rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-			if (actual_rule_cnt < rule_cnt)
+-				rule_cnt = actual_rule_cnt;
+-			if (copy_in_user(&compat_rxnfc->rule_locs[0],
+-					 &rxnfc->rule_locs[0],
+-					 rule_cnt * sizeof(u32)))
+-				return -EFAULT;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static int compat_siocwandev(struct net *net, struct compat_ifreq __user *uifr32)
+ {
+ 	compat_uptr_t uptr32;
+@@ -3432,8 +3310,6 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return old_bridge_ioctl(argp);
+ 	case SIOCGIFCONF:
+ 		return compat_dev_ifconf(net, argp);
+-	case SIOCETHTOOL:
+-		return ethtool_ioctl(net, argp);
+ 	case SIOCWANDEV:
+ 		return compat_siocwandev(net, argp);
+ 	case SIOCGIFMAP:
+@@ -3446,6 +3322,7 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return sock->ops->gettstamp(sock, argp, cmd == SIOCGSTAMP_OLD,
+ 					    !COMPAT_USE_64BIT_TIME);
+ 
++	case SIOCETHTOOL:
+ 	case SIOCBONDSLAVEINFOQUERY:
+ 	case SIOCBONDINFOQUERY:
+ 	case SIOCSHWTSTAMP:
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index a81be45f40d9f..3d685fe328fad 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1980,7 +1980,7 @@ gss_svc_init_net(struct net *net)
+ 		goto out2;
+ 	return 0;
+ out2:
+-	destroy_use_gss_proxy_proc_entry(net);
++	rsi_cache_destroy_net(net);
+ out1:
+ 	rsc_cache_destroy_net(net);
+ 	return rv;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index fb6db09725c76..d55e980521da8 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -775,9 +775,9 @@ void xprt_force_disconnect(struct rpc_xprt *xprt)
+ 	/* Try to schedule an autoclose RPC call */
+ 	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+ 		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
+-	else if (xprt->snd_task)
++	else if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
+ 		rpc_wake_up_queued_task_set_status(&xprt->pending,
+-				xprt->snd_task, -ENOTCONN);
++						   xprt->snd_task, -ENOTCONN);
+ 	spin_unlock(&xprt->transport_lock);
+ }
+ EXPORT_SYMBOL_GPL(xprt_force_disconnect);
+@@ -866,12 +866,14 @@ bool xprt_lock_connect(struct rpc_xprt *xprt,
+ 		goto out;
+ 	if (xprt->snd_task != task)
+ 		goto out;
++	set_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->snd_task = cookie;
+ 	ret = true;
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(xprt_lock_connect);
+ 
+ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ {
+@@ -881,12 +883,14 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ 	if (!test_bit(XPRT_LOCKED, &xprt->state))
+ 		goto out;
+ 	xprt->snd_task =NULL;
++	clear_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->ops->release_xprt(xprt, NULL);
+ 	xprt_schedule_autodisconnect(xprt);
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	wake_up_bit(&xprt->state, XPRT_LOCKED);
+ }
++EXPORT_SYMBOL_GPL(xprt_unlock_connect);
+ 
+ /**
+  * xprt_connect - schedule a transport connect operation
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 9c2ffc67c0fde..975aef16ad345 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -250,12 +250,9 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ 					   xprt->stat.connect_start;
+ 		xprt_set_connected(xprt);
+ 		rc = -EAGAIN;
+-	} else {
+-		/* Force a call to xprt_rdma_close to clean up */
+-		spin_lock(&xprt->transport_lock);
+-		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+-		spin_unlock(&xprt->transport_lock);
+-	}
++	} else
++		rpcrdma_xprt_disconnect(r_xprt);
++	xprt_unlock_connect(xprt, r_xprt);
+ 	xprt_wake_pending_tasks(xprt, rc);
+ }
+ 
+@@ -489,6 +486,8 @@ xprt_rdma_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	struct rpcrdma_ep *ep = r_xprt->rx_ep;
+ 	unsigned long delay;
+ 
++	WARN_ON_ONCE(!xprt_lock_connect(xprt, task, r_xprt));
++
+ 	delay = 0;
+ 	if (ep && ep->re_connect_status != 0) {
+ 		delay = xprt_reconnect_delay(xprt);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 649c23518ec04..5a11e318a0d99 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1416,11 +1416,6 @@ void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
+ 
+ 	rc = ib_post_recv(ep->re_id->qp, wr,
+ 			  (const struct ib_recv_wr **)&bad_wr);
+-	if (atomic_dec_return(&ep->re_receiving) > 0)
+-		complete(&ep->re_done);
+-
+-out:
+-	trace_xprtrdma_post_recvs(r_xprt, count, rc);
+ 	if (rc) {
+ 		for (wr = bad_wr; wr;) {
+ 			struct rpcrdma_rep *rep;
+@@ -1431,6 +1426,11 @@ out:
+ 			--count;
+ 		}
+ 	}
++	if (atomic_dec_return(&ep->re_receiving) > 0)
++		complete(&ep->re_done);
++
++out:
++	trace_xprtrdma_post_recvs(r_xprt, count, rc);
+ 	ep->re_receive_count += count;
+ 	return;
+ }
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index e573dcecdd66f..02b071dbdd225 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1656,7 +1656,7 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ unsigned short get_srcport(struct rpc_xprt *xprt)
+ {
+ 	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
+-	return sock->srcport;
++	return xs_sock_getport(sock->sock);
+ }
+ EXPORT_SYMBOL(get_srcport);
+ 
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 8754bd885169d..a155cfaf01f2e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1886,6 +1886,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 	bool connected = !tipc_sk_type_connectionless(sk);
+ 	struct tipc_sock *tsk = tipc_sk(sk);
+ 	int rc, err, hlen, dlen, copy;
++	struct tipc_skb_cb *skb_cb;
+ 	struct sk_buff_head xmitq;
+ 	struct tipc_msg *hdr;
+ 	struct sk_buff *skb;
+@@ -1909,6 +1910,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		if (unlikely(rc))
+ 			goto exit;
+ 		skb = skb_peek(&sk->sk_receive_queue);
++		skb_cb = TIPC_SKB_CB(skb);
+ 		hdr = buf_msg(skb);
+ 		dlen = msg_data_sz(hdr);
+ 		hlen = msg_hdr_sz(hdr);
+@@ -1928,18 +1930,33 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 
+ 	/* Capture data if non-error msg, otherwise just set return value */
+ 	if (likely(!err)) {
+-		copy = min_t(int, dlen, buflen);
+-		if (unlikely(copy != dlen))
+-			m->msg_flags |= MSG_TRUNC;
+-		rc = skb_copy_datagram_msg(skb, hlen, m, copy);
++		int offset = skb_cb->bytes_read;
++
++		copy = min_t(int, dlen - offset, buflen);
++		rc = skb_copy_datagram_msg(skb, hlen + offset, m, copy);
++		if (unlikely(rc))
++			goto exit;
++		if (unlikely(offset + copy < dlen)) {
++			if (flags & MSG_EOR) {
++				if (!(flags & MSG_PEEK))
++					skb_cb->bytes_read = offset + copy;
++			} else {
++				m->msg_flags |= MSG_TRUNC;
++				skb_cb->bytes_read = 0;
++			}
++		} else {
++			if (flags & MSG_EOR)
++				m->msg_flags |= MSG_EOR;
++			skb_cb->bytes_read = 0;
++		}
+ 	} else {
+ 		copy = 0;
+ 		rc = 0;
+-		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control)
++		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) {
+ 			rc = -ECONNRESET;
++			goto exit;
++		}
+ 	}
+-	if (unlikely(rc))
+-		goto exit;
+ 
+ 	/* Mark message as group event if applicable */
+ 	if (unlikely(grp_evt)) {
+@@ -1962,9 +1979,10 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
+ 	}
+ 
+-	tsk_advance_rx_queue(sk);
++	if (!skb_cb->bytes_read)
++		tsk_advance_rx_queue(sk);
+ 
+-	if (likely(!connected))
++	if (likely(!connected) || skb_cb->bytes_read)
+ 		goto exit;
+ 
+ 	/* Send connection flow control advertisement when applicable */
+diff --git a/samples/bpf/test_override_return.sh b/samples/bpf/test_override_return.sh
+index e68b9ee6814b8..35db26f736b9d 100755
+--- a/samples/bpf/test_override_return.sh
++++ b/samples/bpf/test_override_return.sh
+@@ -1,5 +1,6 @@
+ #!/bin/bash
+ 
++rm -r tmpmnt
+ rm -f testfile.img
+ dd if=/dev/zero of=testfile.img bs=1M seek=1000 count=1
+ DEVICE=$(losetup --show -f testfile.img)
+diff --git a/samples/bpf/tracex7_user.c b/samples/bpf/tracex7_user.c
+index fdcd6580dd736..8be7ce18d3ba0 100644
+--- a/samples/bpf/tracex7_user.c
++++ b/samples/bpf/tracex7_user.c
+@@ -14,6 +14,11 @@ int main(int argc, char **argv)
+ 	int ret = 0;
+ 	FILE *f;
+ 
++	if (!argv[1]) {
++		fprintf(stderr, "ERROR: Run with the btrfs device argument!\n");
++		return 0;
++	}
++
+ 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+ 	obj = bpf_object__open_file(filename, NULL);
+ 	if (libbpf_get_error(obj)) {
+diff --git a/samples/pktgen/pktgen_sample03_burst_single_flow.sh b/samples/pktgen/pktgen_sample03_burst_single_flow.sh
+index ab87de4402772..8bf2fdffba16e 100755
+--- a/samples/pktgen/pktgen_sample03_burst_single_flow.sh
++++ b/samples/pktgen/pktgen_sample03_burst_single_flow.sh
+@@ -85,7 +85,7 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ done
+ 
+ # Run if user hits control-c
+-function control_c() {
++function print_result() {
+     # Print results
+     for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 	dev=${DEV}@${thread}
+@@ -94,11 +94,13 @@ function control_c() {
+     done
+ }
+ # trap keyboard interrupt (Ctrl-C)
+-trap control_c SIGINT
++trap true SIGINT
+ 
+ if [ -z "$APPEND" ]; then
+     echo "Running... ctrl^C to stop" >&2
+     pg_ctrl "start"
++
++    print_result
+ else
+     echo "Append mode: config done. Do more or use 'pg_ctrl start' to run"
+ fi
+diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
+index 6c0f229db36a1..b4aaeab377545 100644
+--- a/samples/vfio-mdev/mbochs.c
++++ b/samples/vfio-mdev/mbochs.c
+@@ -129,7 +129,7 @@ static dev_t		mbochs_devt;
+ static struct class	*mbochs_class;
+ static struct cdev	mbochs_cdev;
+ static struct device	mbochs_dev;
+-static int		mbochs_used_mbytes;
++static atomic_t mbochs_avail_mbytes;
+ static const struct vfio_device_ops mbochs_dev_ops;
+ 
+ struct vfio_region_info_ext {
+@@ -507,18 +507,22 @@ static int mbochs_reset(struct mdev_state *mdev_state)
+ 
+ static int mbochs_probe(struct mdev_device *mdev)
+ {
++	int avail_mbytes = atomic_read(&mbochs_avail_mbytes);
+ 	const struct mbochs_type *type =
+ 		&mbochs_types[mdev_get_type_group_id(mdev)];
+ 	struct device *dev = mdev_dev(mdev);
+ 	struct mdev_state *mdev_state;
+ 	int ret = -ENOMEM;
+ 
+-	if (type->mbytes + mbochs_used_mbytes > max_mbytes)
+-		return -ENOMEM;
++	do {
++		if (avail_mbytes < type->mbytes)
++			return -ENOSPC;
++	} while (!atomic_try_cmpxchg(&mbochs_avail_mbytes, &avail_mbytes,
++				     avail_mbytes - type->mbytes));
+ 
+ 	mdev_state = kzalloc(sizeof(struct mdev_state), GFP_KERNEL);
+ 	if (mdev_state == NULL)
+-		return -ENOMEM;
++		goto err_avail;
+ 	vfio_init_group_dev(&mdev_state->vdev, &mdev->dev, &mbochs_dev_ops);
+ 
+ 	mdev_state->vconfig = kzalloc(MBOCHS_CONFIG_SPACE_SIZE, GFP_KERNEL);
+@@ -549,17 +553,17 @@ static int mbochs_probe(struct mdev_device *mdev)
+ 	mbochs_create_config_space(mdev_state);
+ 	mbochs_reset(mdev_state);
+ 
+-	mbochs_used_mbytes += type->mbytes;
+-
+ 	ret = vfio_register_group_dev(&mdev_state->vdev);
+ 	if (ret)
+ 		goto err_mem;
+ 	dev_set_drvdata(&mdev->dev, mdev_state);
+ 	return 0;
+-
+ err_mem:
++	kfree(mdev_state->pages);
+ 	kfree(mdev_state->vconfig);
+ 	kfree(mdev_state);
++err_avail:
++	atomic_add(type->mbytes, &mbochs_avail_mbytes);
+ 	return ret;
+ }
+ 
+@@ -567,8 +571,8 @@ static void mbochs_remove(struct mdev_device *mdev)
+ {
+ 	struct mdev_state *mdev_state = dev_get_drvdata(&mdev->dev);
+ 
+-	mbochs_used_mbytes -= mdev_state->type->mbytes;
+ 	vfio_unregister_group_dev(&mdev_state->vdev);
++	atomic_add(mdev_state->type->mbytes, &mbochs_avail_mbytes);
+ 	kfree(mdev_state->pages);
+ 	kfree(mdev_state->vconfig);
+ 	kfree(mdev_state);
+@@ -1355,7 +1359,7 @@ static ssize_t available_instances_show(struct mdev_type *mtype,
+ {
+ 	const struct mbochs_type *type =
+ 		&mbochs_types[mtype_get_type_group_id(mtype)];
+-	int count = (max_mbytes - mbochs_used_mbytes) / type->mbytes;
++	int count = atomic_read(&mbochs_avail_mbytes) / type->mbytes;
+ 
+ 	return sprintf(buf, "%d\n", count);
+ }
+@@ -1437,6 +1441,8 @@ static int __init mbochs_dev_init(void)
+ {
+ 	int ret = 0;
+ 
++	atomic_set(&mbochs_avail_mbytes, max_mbytes);
++
+ 	ret = alloc_chrdev_region(&mbochs_devt, 0, MINORMASK + 1, MBOCHS_NAME);
+ 	if (ret < 0) {
+ 		pr_err("Error: failed to register mbochs_dev, err: %d\n", ret);
+diff --git a/scripts/gen_ksymdeps.sh b/scripts/gen_ksymdeps.sh
+index 1324986e1362c..725e8c9c1b53f 100755
+--- a/scripts/gen_ksymdeps.sh
++++ b/scripts/gen_ksymdeps.sh
+@@ -4,7 +4,13 @@
+ set -e
+ 
+ # List of exported symbols
+-ksyms=$($NM $1 | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
++#
++# If the object has no symbol, $NM warns 'no symbols'.
++# Suppress the stderr.
++# TODO:
++#   Use -q instead of 2>/dev/null when we upgrade the minimum version of
++#   binutils to 2.37, llvm to 13.0.0.
++ksyms=$($NM $1 2>/dev/null | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
+ 
+ if [ -z "$ksyms" ]; then
+ 	exit 0
+diff --git a/scripts/subarch.include b/scripts/subarch.include
+index 650682821126c..776849a3c500f 100644
+--- a/scripts/subarch.include
++++ b/scripts/subarch.include
+@@ -7,7 +7,7 @@
+ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+ 				  -e s/sun4u/sparc64/ \
+ 				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
++				  -e s/s390x/s390/ \
+ 				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+ 				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+ 				  -e s/riscv.*/riscv/)
+diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
+index 1f391f6a3d470..d2186e2757be8 100644
+--- a/security/smack/smack_access.c
++++ b/security/smack/smack_access.c
+@@ -81,23 +81,22 @@ int log_policy = SMACK_AUDIT_DENIED;
+ int smk_access_entry(char *subject_label, char *object_label,
+ 			struct list_head *rule_list)
+ {
+-	int may = -ENOENT;
+ 	struct smack_rule *srp;
+ 
+ 	list_for_each_entry_rcu(srp, rule_list, list) {
+ 		if (srp->smk_object->smk_known == object_label &&
+ 		    srp->smk_subject->smk_known == subject_label) {
+-			may = srp->smk_access;
+-			break;
++			int may = srp->smk_access;
++			/*
++			 * MAY_WRITE implies MAY_LOCK.
++			 */
++			if ((may & MAY_WRITE) == MAY_WRITE)
++				may |= MAY_LOCK;
++			return may;
+ 		}
+ 	}
+ 
+-	/*
+-	 * MAY_WRITE implies MAY_LOCK.
+-	 */
+-	if ((may & MAY_WRITE) == MAY_WRITE)
+-		may |= MAY_LOCK;
+-	return may;
++	return -ENOENT;
+ }
+ 
+ /**
+diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig
+index ec04e3386bc0e..8617793ed9557 100644
+--- a/sound/soc/atmel/Kconfig
++++ b/sound/soc/atmel/Kconfig
+@@ -11,7 +11,6 @@ if SND_ATMEL_SOC
+ 
+ config SND_ATMEL_SOC_PDC
+ 	bool
+-	depends on HAS_DMA
+ 
+ config SND_ATMEL_SOC_DMA
+ 	bool
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 91a6d712eb585..c403fb6725944 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -290,9 +290,6 @@ static const struct snd_soc_dapm_widget byt_rt5640_widgets[] = {
+ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ 	{"Headphone", NULL, "Platform Clock"},
+ 	{"Headset Mic", NULL, "Platform Clock"},
+-	{"Internal Mic", NULL, "Platform Clock"},
+-	{"Speaker", NULL, "Platform Clock"},
+-
+ 	{"Headset Mic", NULL, "MICBIAS1"},
+ 	{"IN2P", NULL, "Headset Mic"},
+ 	{"Headphone", NULL, "HPOL"},
+@@ -300,19 +297,23 @@ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC1", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic2_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC2", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN1P", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in3_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN3P", NULL, "Internal Mic"},
+ };
+@@ -354,6 +355,7 @@ static const struct snd_soc_dapm_route byt_rt5640_ssp0_aif2_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ 	{"Speaker", NULL, "SPORP"},
+@@ -361,6 +363,7 @@ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_mono_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ };
+diff --git a/sound/soc/intel/boards/sof_pcm512x.c b/sound/soc/intel/boards/sof_pcm512x.c
+index 2ec9c62366e2e..6815204e58d58 100644
+--- a/sound/soc/intel/boards/sof_pcm512x.c
++++ b/sound/soc/intel/boards/sof_pcm512x.c
+@@ -26,11 +26,16 @@
+ 
+ #define SOF_PCM512X_SSP_CODEC(quirk)		((quirk) & GENMASK(3, 0))
+ #define SOF_PCM512X_SSP_CODEC_MASK			(GENMASK(3, 0))
++#define SOF_PCM512X_ENABLE_SSP_CAPTURE		BIT(4)
++#define SOF_PCM512X_ENABLE_DMIC			BIT(5)
+ 
+ #define IDISP_CODEC_MASK	0x4
+ 
+ /* Default: SSP5 */
+-static unsigned long sof_pcm512x_quirk = SOF_PCM512X_SSP_CODEC(5);
++static unsigned long sof_pcm512x_quirk =
++	SOF_PCM512X_SSP_CODEC(5) |
++	SOF_PCM512X_ENABLE_SSP_CAPTURE |
++	SOF_PCM512X_ENABLE_DMIC;
+ 
+ static bool is_legacy_cpu;
+ 
+@@ -244,8 +249,9 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
+ 	links[id].dpcm_playback = 1;
+ 	/*
+ 	 * capture only supported with specific versions of the Hifiberry DAC+
+-	 * links[id].dpcm_capture = 1;
+ 	 */
++	if (sof_pcm512x_quirk & SOF_PCM512X_ENABLE_SSP_CAPTURE)
++		links[id].dpcm_capture = 1;
+ 	links[id].no_pcm = 1;
+ 	links[id].cpus = &cpus[id];
+ 	links[id].num_cpus = 1;
+@@ -380,6 +386,9 @@ static int sof_audio_probe(struct platform_device *pdev)
+ 
+ 	ssp_codec = sof_pcm512x_quirk & SOF_PCM512X_SSP_CODEC_MASK;
+ 
++	if (!(sof_pcm512x_quirk & SOF_PCM512X_ENABLE_DMIC))
++		dmic_be_num = 0;
++
+ 	/* compute number of dai links */
+ 	sof_audio_card_pcm512x.num_links = 1 + dmic_be_num + hdmi_num;
+ 
+diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c
+index 476ef1897961d..79c6cf2c14bfb 100644
+--- a/sound/soc/intel/skylake/skl-messages.c
++++ b/sound/soc/intel/skylake/skl-messages.c
+@@ -802,9 +802,12 @@ static u16 skl_get_module_param_size(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		return sizeof(struct skl_base_outfmt_cfg);
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		return sizeof(struct skl_base_cfg);
++
+ 	default:
+ 		/*
+ 		 * return only base cfg when no specific module type is
+@@ -857,10 +860,14 @@ static int skl_set_module_format(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		skl_set_base_outfmt_format(skl, module_config, *param_data);
+ 		break;
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		skl_set_base_module_format(skl, module_config, *param_data);
++		break;
++
+ 	default:
+ 		skl_set_base_module_format(skl, module_config, *param_data);
+ 		break;
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index b1ca64d2f7ea6..031d5dc7e6601 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -1317,21 +1317,6 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		return -EIO;
+ 	}
+ 
+-	list_for_each_entry(module, &skl->uuid_list, list) {
+-		if (guid_equal(uuid_mod, &module->uuid)) {
+-			mconfig->id.module_id = module->id;
+-			if (mconfig->module)
+-				mconfig->module->loadable = module->is_loadable;
+-			ret = 0;
+-			break;
+-		}
+-	}
+-
+-	if (ret)
+-		return ret;
+-
+-	uuid_mod = &module->uuid;
+-	ret = -EIO;
+ 	for (i = 0; i < skl->nr_modules; i++) {
+ 		skl_module = skl->modules[i];
+ 		uuid_tplg = &skl_module->uuid;
+@@ -1341,10 +1326,18 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 			break;
+ 		}
+ 	}
++
+ 	if (skl->nr_modules && ret)
+ 		return ret;
+ 
++	ret = -EIO;
+ 	list_for_each_entry(module, &skl->uuid_list, list) {
++		if (guid_equal(uuid_mod, &module->uuid)) {
++			mconfig->id.module_id = module->id;
++			mconfig->module->loadable = module->is_loadable;
++			ret = 0;
++		}
++
+ 		for (i = 0; i < MAX_IN_QUEUE; i++) {
+ 			pin_id = &mconfig->m_in_pin[i].id;
+ 			if (guid_equal(&pin_id->mod_uuid, &module->uuid))
+@@ -1358,7 +1351,7 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int skl_populate_modules(struct skl_dev *skl)
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index c7dc3509bceb6..b65dfbc3545b9 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -186,7 +186,9 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ {
+ 	struct rk_i2s_dev *i2s = to_info(cpu_dai);
+ 	unsigned int mask = 0, val = 0;
++	int ret = 0;
+ 
++	pm_runtime_get_sync(cpu_dai->dev);
+ 	mask = I2S_CKR_MSS_MASK;
+ 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+ 	case SND_SOC_DAIFMT_CBS_CFS:
+@@ -199,7 +201,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		i2s->is_master_mode = false;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -213,7 +216,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		val = I2S_CKR_CKP_POS;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -229,14 +233,15 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_TXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_TXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_TXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_TXCR, mask, val);
+@@ -252,19 +257,23 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_RXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_RXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_RXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_RXCR, mask, val);
+ 
+-	return 0;
++err_pm_put:
++	pm_runtime_put(cpu_dai->dev);
++
++	return ret;
+ }
+ 
+ static int rockchip_i2s_hw_params(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index 0ebee1ed06a90..5f1e72edfee04 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -391,9 +391,9 @@ static struct clk *rsnd_adg_create_null_clk(struct rsnd_priv *priv,
+ 	struct clk *clk;
+ 
+ 	clk = clk_register_fixed_rate(dev, name, parent, 0, 0);
+-	if (IS_ERR(clk)) {
++	if (IS_ERR_OR_NULL(clk)) {
+ 		dev_err(dev, "create null clk error\n");
+-		return NULL;
++		return ERR_CAST(clk);
+ 	}
+ 
+ 	return clk;
+@@ -430,9 +430,9 @@ static int rsnd_adg_get_clkin(struct rsnd_priv *priv)
+ 	for (i = 0; i < CLKMAX; i++) {
+ 		clk = devm_clk_get(dev, clk_name[i]);
+ 
+-		if (IS_ERR(clk))
++		if (IS_ERR_OR_NULL(clk))
+ 			clk = rsnd_adg_null_clk_get(priv);
+-		if (IS_ERR(clk))
++		if (IS_ERR_OR_NULL(clk))
+ 			goto err;
+ 
+ 		adg->clk[i] = clk;
+@@ -582,7 +582,7 @@ static int rsnd_adg_get_clkout(struct rsnd_priv *priv)
+ 	if (!count) {
+ 		clk = clk_register_fixed_rate(dev, clkout_name[CLKOUT],
+ 					      parent_clk_name, 0, req_rate[0]);
+-		if (IS_ERR(clk))
++		if (IS_ERR_OR_NULL(clk))
+ 			goto err;
+ 
+ 		adg->clkout[CLKOUT] = clk;
+@@ -596,7 +596,7 @@ static int rsnd_adg_get_clkout(struct rsnd_priv *priv)
+ 			clk = clk_register_fixed_rate(dev, clkout_name[i],
+ 						      parent_clk_name, 0,
+ 						      req_rate[0]);
+-			if (IS_ERR(clk))
++			if (IS_ERR_OR_NULL(clk))
+ 				goto err;
+ 
+ 			adg->clkout[i] = clk;
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index d1c570ca21ea7..b944f56a469a6 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2001,6 +2001,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 	struct snd_soc_pcm_runtime *be;
+ 	struct snd_soc_dpcm *dpcm;
+ 	int ret = 0;
++	unsigned long flags;
++	enum snd_soc_dpcm_state state;
+ 
+ 	for_each_dpcm_be(fe, stream, dpcm) {
+ 		struct snd_pcm_substream *be_substream;
+@@ -2017,76 +2019,141 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 
+ 		switch (cmd) {
+ 		case SNDRV_PCM_TRIGGER_START:
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
+ 			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+-			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_RESUME:
+-			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+-			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_STOP:
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) &&
+-			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_SUSPEND:
+-			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_SUSPEND;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
+ 			break;
+ 		}
+ 	}
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index 017a5a5e56cd1..64ec6d4858348 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -83,6 +83,8 @@ struct davinci_mcasp {
+ 	struct snd_pcm_substream *substreams[2];
+ 	unsigned int dai_fmt;
+ 
++	u32 iec958_status;
++
+ 	/* Audio can not be enabled due to missing parameter(s) */
+ 	bool	missing_audio_param;
+ 
+@@ -757,6 +759,9 @@ static int davinci_mcasp_set_tdm_slot(struct snd_soc_dai *dai,
+ {
+ 	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(dai);
+ 
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE)
++		return 0;
++
+ 	dev_dbg(mcasp->dev,
+ 		 "%s() tx_mask 0x%08x rx_mask 0x%08x slots %d width %d\n",
+ 		 __func__, tx_mask, rx_mask, slots, slot_width);
+@@ -827,6 +832,20 @@ static int davinci_config_channel_size(struct davinci_mcasp *mcasp,
+ 		mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXROT(rx_rotate),
+ 			       RXROT(7));
+ 		mcasp_set_reg(mcasp, DAVINCI_MCASP_RXMASK_REG, mask);
++	} else {
++		/*
++		 * according to the TRM it should be TXROT=0, this one works:
++		 * 16 bit to 23-8 (TXROT=6, rotate 24 bits)
++		 * 24 bit to 23-0 (TXROT=0, rotate 0 bits)
++		 *
++		 * TXROT = 0 only works with 24bit samples
++		 */
++		tx_rotate = (sample_width / 4 + 2) & 0x7;
++
++		mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(tx_rotate),
++			       TXROT(7));
++		mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSSZ(15),
++			       TXSSZ(0x0F));
+ 	}
+ 
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, mask);
+@@ -842,10 +861,16 @@ static int mcasp_common_hw_param(struct davinci_mcasp *mcasp, int stream,
+ 	u8 tx_ser = 0;
+ 	u8 rx_ser = 0;
+ 	u8 slots = mcasp->tdm_slots;
+-	u8 max_active_serializers = (channels + slots - 1) / slots;
+-	u8 max_rx_serializers, max_tx_serializers;
++	u8 max_active_serializers, max_rx_serializers, max_tx_serializers;
+ 	int active_serializers, numevt;
+ 	u32 reg;
++
++	/* In DIT mode we only allow maximum of one serializers for now */
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE)
++		max_active_serializers = 1;
++	else
++		max_active_serializers = (channels + slots - 1) / slots;
++
+ 	/* Default configuration */
+ 	if (mcasp->version < MCASP_VERSION_3)
+ 		mcasp_set_bits(mcasp, DAVINCI_MCASP_PWREMUMGT_REG, MCASP_SOFT);
+@@ -1031,16 +1056,18 @@ static int mcasp_i2s_hw_param(struct davinci_mcasp *mcasp, int stream,
+ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 			      unsigned int rate)
+ {
+-	u32 cs_value = 0;
+-	u8 *cs_bytes = (u8*) &cs_value;
++	u8 *cs_bytes = (u8 *)&mcasp->iec958_status;
+ 
+-	/* Set the TX format : 24 bit right rotation, 32 bit slot, Pad 0
+-	   and LSB first */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(6) | TXSSZ(15));
++	if (!mcasp->dat_port)
++		mcasp_set_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSEL);
++	else
++		mcasp_clr_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSEL);
+ 
+ 	/* Set TX frame synch : DIT Mode, 1 bit width, internal, rising edge */
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXFMCTL_REG, AFSXE | FSXMOD(0x180));
+ 
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, 0xFFFF);
++
+ 	/* Set the TX tdm : for all the slots */
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXTDM_REG, 0xFFFFFFFF);
+ 
+@@ -1049,16 +1076,8 @@ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 
+ 	mcasp_clr_bits(mcasp, DAVINCI_MCASP_XEVTCTL_REG, TXDATADMADIS);
+ 
+-	/* Only 44100 and 48000 are valid, both have the same setting */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG, AHCLKXDIV(3));
+-
+-	/* Enable the DIT */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXDITCTL_REG, DITEN);
+-
+ 	/* Set S/PDIF channel status bits */
+-	cs_bytes[0] = IEC958_AES0_CON_NOT_COPYRIGHT;
+-	cs_bytes[1] = IEC958_AES1_CON_PCM_CODER;
+-
++	cs_bytes[3] &= ~IEC958_AES3_CON_FS;
+ 	switch (rate) {
+ 	case 22050:
+ 		cs_bytes[3] |= IEC958_AES3_CON_FS_22050;
+@@ -1088,12 +1107,15 @@ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 		cs_bytes[3] |= IEC958_AES3_CON_FS_192000;
+ 		break;
+ 	default:
+-		printk(KERN_WARNING "unsupported sampling rate: %d\n", rate);
++		dev_err(mcasp->dev, "unsupported sampling rate: %d\n", rate);
+ 		return -EINVAL;
+ 	}
+ 
+-	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRA_REG, cs_value);
+-	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRB_REG, cs_value);
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRA_REG, mcasp->iec958_status);
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRB_REG, mcasp->iec958_status);
++
++	/* Enable the DIT */
++	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXDITCTL_REG, DITEN);
+ 
+ 	return 0;
+ }
+@@ -1237,12 +1259,18 @@ static int davinci_mcasp_hw_params(struct snd_pcm_substream *substream,
+ 		int slots = mcasp->tdm_slots;
+ 		int rate = params_rate(params);
+ 		int sbits = params_width(params);
++		unsigned int bclk_target;
+ 
+ 		if (mcasp->slot_width)
+ 			sbits = mcasp->slot_width;
+ 
++		if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE)
++			bclk_target = rate * sbits * slots;
++		else
++			bclk_target = rate * 128;
++
+ 		davinci_mcasp_calc_clk_div(mcasp, mcasp->sysclk_freq,
+-					   rate * sbits * slots, true);
++					   bclk_target, true);
+ 	}
+ 
+ 	ret = mcasp_common_hw_param(mcasp, substream->stream,
+@@ -1598,6 +1626,77 @@ static const struct snd_soc_dai_ops davinci_mcasp_dai_ops = {
+ 	.set_tdm_slot	= davinci_mcasp_set_tdm_slot,
+ };
+ 
++static int davinci_mcasp_iec958_info(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
++	uinfo->count = 1;
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_get(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *uctl)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memcpy(uctl->value.iec958.status, &mcasp->iec958_status,
++	       sizeof(mcasp->iec958_status));
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_put(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *uctl)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memcpy(&mcasp->iec958_status, uctl->value.iec958.status,
++	       sizeof(mcasp->iec958_status));
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_con_mask_get(struct snd_kcontrol *kcontrol,
++					     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memset(ucontrol->value.iec958.status, 0xff, sizeof(mcasp->iec958_status));
++	return 0;
++}
++
++static const struct snd_kcontrol_new davinci_mcasp_iec958_ctls[] = {
++	{
++		.access = (SNDRV_CTL_ELEM_ACCESS_READWRITE |
++			   SNDRV_CTL_ELEM_ACCESS_VOLATILE),
++		.iface = SNDRV_CTL_ELEM_IFACE_PCM,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, DEFAULT),
++		.info = davinci_mcasp_iec958_info,
++		.get = davinci_mcasp_iec958_get,
++		.put = davinci_mcasp_iec958_put,
++	}, {
++		.access = SNDRV_CTL_ELEM_ACCESS_READ,
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, CON_MASK),
++		.info = davinci_mcasp_iec958_info,
++		.get = davinci_mcasp_iec958_con_mask_get,
++	},
++};
++
++static void davinci_mcasp_init_iec958_status(struct davinci_mcasp *mcasp)
++{
++	unsigned char *cs = (u8 *)&mcasp->iec958_status;
++
++	cs[0] = IEC958_AES0_CON_NOT_COPYRIGHT | IEC958_AES0_CON_EMPHASIS_NONE;
++	cs[1] = IEC958_AES1_CON_PCM_CODER;
++	cs[2] = IEC958_AES2_CON_SOURCE_UNSPEC | IEC958_AES2_CON_CHANNEL_UNSPEC;
++	cs[3] = IEC958_AES3_CON_CLOCK_1000PPM;
++}
++
+ static int davinci_mcasp_dai_probe(struct snd_soc_dai *dai)
+ {
+ 	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(dai);
+@@ -1605,6 +1704,12 @@ static int davinci_mcasp_dai_probe(struct snd_soc_dai *dai)
+ 	dai->playback_dma_data = &mcasp->dma_data[SNDRV_PCM_STREAM_PLAYBACK];
+ 	dai->capture_dma_data = &mcasp->dma_data[SNDRV_PCM_STREAM_CAPTURE];
+ 
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE) {
++		davinci_mcasp_init_iec958_status(mcasp);
++		snd_soc_add_dai_controls(dai, davinci_mcasp_iec958_ctls,
++					 ARRAY_SIZE(davinci_mcasp_iec958_ctls));
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1651,7 +1756,8 @@ static struct snd_soc_dai_driver davinci_mcasp_dai[] = {
+ 			.channels_min	= 1,
+ 			.channels_max	= 384,
+ 			.rates		= DAVINCI_MCASP_RATES,
+-			.formats	= DAVINCI_MCASP_PCM_FMTS,
++			.formats	= SNDRV_PCM_FMTBIT_S16_LE |
++					  SNDRV_PCM_FMTBIT_S24_LE,
+ 		},
+ 		.ops 		= &davinci_mcasp_dai_ops,
+ 	},
+@@ -1871,6 +1977,8 @@ out:
+ 		} else {
+ 			mcasp->tdm_slots = pdata->tdm_slots;
+ 		}
++	} else {
++		mcasp->tdm_slots = 32;
+ 	}
+ 
+ 	mcasp->num_serializer = pdata->num_serializer;
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 2234d5c33177a..d27e017ebfbea 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3894,6 +3894,42 @@ static int bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map)
+ 	return 0;
+ }
+ 
++static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
++{
++	char file[PATH_MAX], buff[4096];
++	FILE *fp;
++	__u32 val;
++	int err;
++
++	snprintf(file, sizeof(file), "/proc/%d/fdinfo/%d", getpid(), fd);
++	memset(info, 0, sizeof(*info));
++
++	fp = fopen(file, "r");
++	if (!fp) {
++		err = -errno;
++		pr_warn("failed to open %s: %d. No procfs support?\n", file,
++			err);
++		return err;
++	}
++
++	while (fgets(buff, sizeof(buff), fp)) {
++		if (sscanf(buff, "map_type:\t%u", &val) == 1)
++			info->type = val;
++		else if (sscanf(buff, "key_size:\t%u", &val) == 1)
++			info->key_size = val;
++		else if (sscanf(buff, "value_size:\t%u", &val) == 1)
++			info->value_size = val;
++		else if (sscanf(buff, "max_entries:\t%u", &val) == 1)
++			info->max_entries = val;
++		else if (sscanf(buff, "map_flags:\t%i", &val) == 1)
++			info->map_flags = val;
++	}
++
++	fclose(fp);
++
++	return 0;
++}
++
+ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ {
+ 	struct bpf_map_info info = {};
+@@ -3902,6 +3938,8 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ 	char *new_name;
+ 
+ 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(fd, &info);
+ 	if (err)
+ 		return libbpf_err(err);
+ 
+@@ -4381,12 +4419,16 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
+ 	struct bpf_map_info map_info = {};
+ 	char msg[STRERR_BUFSIZE];
+ 	__u32 map_info_len;
++	int err;
+ 
+ 	map_info_len = sizeof(map_info);
+ 
+-	if (bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len)) {
+-		pr_warn("failed to get map info for map FD %d: %s\n",
+-			map_fd, libbpf_strerror_r(errno, msg, sizeof(msg)));
++	err = bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
++	if (err) {
++		pr_warn("failed to get map info for map FD %d: %s\n", map_fd,
++			libbpf_strerror_r(errno, msg, sizeof(msg)));
+ 		return false;
+ 	}
+ 
+@@ -4614,10 +4656,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 	char *cp, errmsg[STRERR_BUFSIZE];
+ 	unsigned int i, j;
+ 	int err;
++	bool retried;
+ 
+ 	for (i = 0; i < obj->nr_maps; i++) {
+ 		map = &obj->maps[i];
+ 
++		retried = false;
++retry:
+ 		if (map->pin_path) {
+ 			err = bpf_object__reuse_map(map);
+ 			if (err) {
+@@ -4625,6 +4670,12 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 					map->name);
+ 				goto err_out;
+ 			}
++			if (retried && map->fd < 0) {
++				pr_warn("map '%s': cannot find pinned map\n",
++					map->name);
++				err = -ENOENT;
++				goto err_out;
++			}
+ 		}
+ 
+ 		if (map->fd >= 0) {
+@@ -4658,9 +4709,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 		if (map->pin_path && !map->pinned) {
+ 			err = bpf_map__pin(map, NULL);
+ 			if (err) {
++				zclose(map->fd);
++				if (!retried && err == -EEXIST) {
++					retried = true;
++					goto retry;
++				}
+ 				pr_warn("map '%s': failed to auto-pin at '%s': %d\n",
+ 					map->name, map->pin_path, err);
+-				zclose(map->fd);
+ 				goto err_out;
+ 			}
+ 		}
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index f50ac31920d13..0328a1e08f659 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -298,7 +298,7 @@ int mte_default_setup(void)
+ 	int ret;
+ 
+ 	if (!(hwcaps2 & HWCAP2_MTE)) {
+-		ksft_print_msg("FAIL: MTE features unavailable\n");
++		ksft_print_msg("SKIP: MTE features unavailable\n");
+ 		return KSFT_SKIP;
+ 	}
+ 	/* Get current mte mode */
+diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
+index 592fe538506e3..b743daa772f55 100644
+--- a/tools/testing/selftests/arm64/pauth/pac.c
++++ b/tools/testing/selftests/arm64/pauth/pac.c
+@@ -25,13 +25,15 @@
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* data key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACA))					\
++		SKIP(return, "PAUTH not enabled"); \
+ } while (0)
+ #define ASSERT_GENERIC_PAUTH_ENABLED() \
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACG)) \
++		SKIP(return, "Generic PAUTH not enabled");	\
+ } while (0)
+ 
+ void sign_specific(struct signatures *sign, size_t val)
+@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index 023cc532992d3..839f7ddaec16c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <test_progs.h>
++#include <sys/time.h>
++#include <sys/resource.h>
+ #include "test_send_signal_kern.skel.h"
+ 
+ int sigusr1_received = 0;
+@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 	}
+ 
+ 	if (pid == 0) {
++		int old_prio;
++
+ 		/* install signal handler and notify parent */
+ 		signal(SIGUSR1, sigusr1_handler);
+ 
+ 		close(pipe_c2p[0]); /* close read */
+ 		close(pipe_p2c[1]); /* close write */
+ 
++		/* boost with a high priority so we got a higher chance
++		 * that if an interrupt happens, the underlying task
++		 * is this process.
++		 */
++		errno = 0;
++		old_prio = getpriority(PRIO_PROCESS, 0);
++		ASSERT_OK(errno, "getpriority");
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
++
+ 		/* notify parent signal handler is installed */
+ 		CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
+ 
+@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 		/* wait for parent notification and exit */
+ 		CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ 
++		/* restore the old priority */
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
++
+ 		close(pipe_c2p[1]);
+ 		close(pipe_p2c[0]);
+ 		exit(0);
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+index ec281b0363b82..86f97681ad898 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+@@ -195,8 +195,10 @@ static void run_test(int cgroup_fd)
+ 
+ 	pthread_mutex_lock(&server_started_mtx);
+ 	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
+-				      (void *)&server_fd)))
++				      (void *)&server_fd))) {
++		pthread_mutex_unlock(&server_started_mtx);
+ 		goto close_server_fd;
++	}
+ 	pthread_cond_wait(&server_started, &server_started_mtx);
+ 	pthread_mutex_unlock(&server_started_mtx);
+ 
+diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
+index 94e6c2b281cb6..5f725c720e008 100644
+--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
++++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
+@@ -3,7 +3,7 @@
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ 
+-SEC("tx")
++SEC("xdp")
+ int xdp_tx(struct xdp_md *xdp)
+ {
+ 	return XDP_TX;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index abdfc41f7685a..4fd01450a4089 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ 
+ 		FD_ZERO(&w);
+ 		FD_SET(sfd[3], &w);
+-		to.tv_sec = 1;
++		to.tv_sec = 30;
+ 		to.tv_usec = 0;
+ 		s = select(sfd[3] + 1, &w, NULL, NULL, &to);
+ 		if (s == -1) {
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 6f103106a39bb..bfbf2277b61a6 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -148,18 +148,18 @@ void test__end_subtest()
+ 	struct prog_test_def *test = env.test;
+ 	int sub_error_cnt = test->error_cnt - test->old_error_cnt;
+ 
+-	if (sub_error_cnt)
+-		env.fail_cnt++;
+-	else if (test->skip_cnt == 0)
+-		env.sub_succ_cnt++;
+-	skip_account();
+-
+ 	dump_test_log(test, sub_error_cnt);
+ 
+ 	fprintf(env.stdout, "#%d/%d %s:%s\n",
+ 	       test->test_num, test->subtest_num, test->subtest_name,
+ 	       sub_error_cnt ? "FAIL" : (test->skip_cnt ? "SKIP" : "OK"));
+ 
++	if (sub_error_cnt)
++		env.fail_cnt++;
++	else if (test->skip_cnt == 0)
++		env.sub_succ_cnt++;
++	skip_account();
++
+ 	free(test->subtest_name);
+ 	test->subtest_name = NULL;
+ }
+@@ -786,17 +786,18 @@ int main(int argc, char **argv)
+ 			test__end_subtest();
+ 
+ 		test->tested = true;
+-		if (test->error_cnt)
+-			env.fail_cnt++;
+-		else
+-			env.succ_cnt++;
+-		skip_account();
+ 
+ 		dump_test_log(test, test->error_cnt);
+ 
+ 		fprintf(env.stdout, "#%d %s:%s\n",
+ 			test->test_num, test->test_name,
+-			test->error_cnt ? "FAIL" : "OK");
++			test->error_cnt ? "FAIL" : (test->skip_cnt ? "SKIP" : "OK"));
++
++		if (test->error_cnt)
++			env.fail_cnt++;
++		else
++			env.succ_cnt++;
++		skip_account();
+ 
+ 		reset_affinity();
+ 		restore_netns();
+diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
+index ba8ffcdaac302..995278e684b6e 100755
+--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
++++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
+@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
+ ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
+ 
+ ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
+-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
++ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
+ ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
+ 
+ trap cleanup EXIT
+diff --git a/tools/testing/selftests/firmware/fw_namespace.c b/tools/testing/selftests/firmware/fw_namespace.c
+index 0e393cb5f42de..4c6f0cd83c5b0 100644
+--- a/tools/testing/selftests/firmware/fw_namespace.c
++++ b/tools/testing/selftests/firmware/fw_namespace.c
+@@ -129,7 +129,8 @@ int main(int argc, char **argv)
+ 		die("mounting tmpfs to /lib/firmware failed\n");
+ 
+ 	sys_path = argv[1];
+-	asprintf(&fw_path, "/lib/firmware/%s", fw_name);
++	if (asprintf(&fw_path, "/lib/firmware/%s", fw_name) < 0)
++		die("error: failed to build full fw_path\n");
+ 
+ 	setup_fw(fw_path);
+ 
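
asprintf() returns -1 on allocation failure and leaves the output pointer unspecified, so using fw_path unchecked could dereference garbage; the hunk above adds the missing check. A standalone sketch of the checked call (die() stands in for the selftest's helper):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

static void die(const char *msg)
{
	fputs(msg, stderr);
	exit(1);
}

int main(void)
{
	char *fw_path;

	/* On failure asprintf() returns -1 and fw_path is undefined. */
	if (asprintf(&fw_path, "/lib/firmware/%s", "test-fw.bin") < 0)
		die("error: failed to build full fw_path\n");

	puts(fw_path);
	free(fw_path);
	return 0;
}
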
+diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
+index a6fac927ee82f..0cee6b067a374 100644
+--- a/tools/testing/selftests/ftrace/test.d/functions
++++ b/tools/testing/selftests/ftrace/test.d/functions
+@@ -115,7 +115,7 @@ check_requires() { # Check required files and tracers
+                 echo "Required tracer $t is not configured."
+                 exit_unsupported
+             fi
+-        elif [ $r != $i ]; then
++        elif [ "$r" != "$i" ]; then
+             if ! grep -Fq "$r" README ; then
+                 echo "Required feature pattern \"$r\" is not in README."
+                 exit_unsupported
+diff --git a/tools/testing/selftests/nci/nci_dev.c b/tools/testing/selftests/nci/nci_dev.c
+index 57b505cb15618..acd4125ff39fe 100644
+--- a/tools/testing/selftests/nci/nci_dev.c
++++ b/tools/testing/selftests/nci/nci_dev.c
+@@ -110,11 +110,11 @@ static int send_cmd_mt_nla(int sd, __u16 nlmsg_type, __u32 nlmsg_pid,
+ 		na->nla_type = nla_type[cnt];
+ 		na->nla_len = nla_len[cnt] + NLA_HDRLEN;
+ 
+-		if (nla_len > 0)
++		if (nla_len[cnt] > 0)
+ 			memcpy(NLA_DATA(na), nla_data[cnt], nla_len[cnt]);
+ 
+-		msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);
+-		prv_len = na->nla_len;
++		prv_len = NLA_ALIGN(nla_len[cnt]) + NLA_HDRLEN;
++		msg.n.nlmsg_len += prv_len;
+ 	}
+ 
+ 	buf = (char *)&msg;
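
Two bugs are fixed in the hunk above: the length test compared the nla_len array pointer rather than the current element, and nlmsg_len was advanced by the unaligned attribute length. Netlink attributes occupy NLA_ALIGN(payload) + NLA_HDRLEN bytes on the wire, which is what the new prv_len computes. A small sketch of that accounting, using only the uapi header:

#include <linux/netlink.h>
#include <stdio.h>

/* Bytes one attribute occupies in the stream: aligned payload + header. */
static int nla_total(int payload_len)
{
	return NLA_ALIGN(payload_len) + NLA_HDRLEN;
}

int main(void)
{
	int payloads[] = { 1, 3, 4, 7 };
	int msg_len = NLMSG_HDRLEN;

	for (unsigned int i = 0; i < sizeof(payloads) / sizeof(payloads[0]); i++) {
		int step = nla_total(payloads[i]);

		/* e.g. a 3-byte payload still consumes 4 + NLA_HDRLEN bytes */
		printf("payload %d -> %d bytes\n", payloads[i], step);
		msg_len += step;
	}
	printf("total message length: %d\n", msg_len);
	return 0;
}
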
+diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile
+index 9db867df76794..610334f86f631 100644
+--- a/tools/thermal/tmon/Makefile
++++ b/tools/thermal/tmon/Makefile
+@@ -10,7 +10,7 @@ override CFLAGS+= $(call cc-option,-O3,-O1) ${WARNFLAGS}
+ # Add "-fstack-protector" only if toolchain supports it.
+ override CFLAGS+= $(call cc-option,-fstack-protector-strong)
+ CC?= $(CROSS_COMPILE)gcc
+-PKG_CONFIG?= pkg-config
++PKG_CONFIG?= $(CROSS_COMPILE)pkg-config
+ 
+ override CFLAGS+=-D VERSION=\"$(VERSION)\"
+ LDFLAGS+=


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-20 22:01 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-20 22:01 UTC (permalink / raw
  To: gentoo-commits

commit:     1096610a7b719fbfec3193f7f3c59a1610e18711
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 20 21:57:57 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 20 22:00:49 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1096610a

Move USER_NS to GENTOO_LINUX_PORTAGE

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index d2175f0..74e80d3 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -65,6 +65,7 @@
 +	select NET_NS
 +	select PID_NS
 +	select SYSVIPC
++	select USER_NS
 +	select UTS_NS
 +
 +	help
@@ -145,7 +146,6 @@
 +	select TIMERFD
 +	select TMPFS_POSIX_ACL
 +	select TMPFS_XATTR
-+	select USER_NS
 +
 +	select ANON_INODES
 +	select BLOCK


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-22 11:37 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-22 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     00a2b84fdf9371e8fc3cfa89c197db0aa7f58939
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 22 11:37:23 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 22 11:37:23 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=00a2b84f

Linux patch 5.14.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-5.14.7.patch | 6334 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6338 insertions(+)

diff --git a/0000_README b/0000_README
index df8a957..0c8fa67 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1005_linux-5.14.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.6
 
+Patch:  1006_linux-5.14.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.14.7.patch b/1006_linux-5.14.7.patch
new file mode 100644
index 0000000..a7e8c31
--- /dev/null
+++ b/1006_linux-5.14.7.patch
@@ -0,0 +1,6334 @@
+diff --git a/Documentation/devicetree/bindings/arm/tegra.yaml b/Documentation/devicetree/bindings/arm/tegra.yaml
+index b9f75e20fef5c..b2a645740ffe6 100644
+--- a/Documentation/devicetree/bindings/arm/tegra.yaml
++++ b/Documentation/devicetree/bindings/arm/tegra.yaml
+@@ -54,7 +54,7 @@ properties:
+           - const: toradex,apalis_t30
+           - const: nvidia,tegra30
+       - items:
+-          - const: toradex,apalis_t30-eval-v1.1
++          - const: toradex,apalis_t30-v1.1-eval
+           - const: toradex,apalis_t30-eval
+           - const: toradex,apalis_t30-v1.1
+           - const: toradex,apalis_t30
+diff --git a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
+index 44919d48d2415..c459f169a9044 100644
+--- a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
++++ b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
+@@ -122,7 +122,7 @@ on various other factors also like;
+ 	so the device should have enough free bytes available its OOB/Spare
+ 	area to accommodate ECC for entire page. In general following expression
+ 	helps in determining if given device can accommodate ECC syndrome:
+-	"2 + (PAGESIZE / 512) * ECC_BYTES" >= OOBSIZE"
++	"2 + (PAGESIZE / 512) * ECC_BYTES" <= OOBSIZE"
+ 	where
+ 		OOBSIZE		number of bytes in OOB/spare area
+ 		PAGESIZE	number of bytes in main-area of device page
+diff --git a/Makefile b/Makefile
+index f9c8bbf8cf71e..efb603f06e711 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
+index a2fbea3ee07c7..102418ac5ff4a 100644
+--- a/arch/arc/mm/cache.c
++++ b/arch/arc/mm/cache.c
+@@ -1123,7 +1123,7 @@ void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
+ 	clear_page(to);
+ 	clear_bit(PG_dc_clean, &page->flags);
+ }
+-
++EXPORT_SYMBOL(clear_user_page);
+ 
+ /**********************************************************************
+  * Explicit Cache flush request from user space via syscall
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index e57b23f952846..3599b9a2f1dff 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -511,7 +511,7 @@ size_t sve_state_size(struct task_struct const *task)
+ void sve_alloc(struct task_struct *task)
+ {
+ 	if (task->thread.sve_state) {
+-		memset(task->thread.sve_state, 0, sve_state_size(current));
++		memset(task->thread.sve_state, 0, sve_state_size(task));
+ 		return;
+ 	}
+ 
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 5d1fc9c4bca5e..45ee8abcf2025 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1220,6 +1220,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 		if (copy_from_user(&reg, argp, sizeof(reg)))
+ 			break;
+ 
++		/*
++		 * We could owe a reset due to PSCI. Handle the pending reset
++		 * here to ensure userspace register accesses are ordered after
++		 * the reset.
++		 */
++		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
++			kvm_reset_vcpu(vcpu);
++
+ 		if (ioctl == KVM_SET_ONE_REG)
+ 			r = kvm_arm_set_reg(vcpu, &reg);
+ 		else
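
The arm.c hunk above orders userspace register accesses after any reset that PSCI has queued: the pending request is checked and consumed before KVM_GET/SET_ONE_REG touches the vCPU. Stripped of the KVM machinery, the shape is test-and-clear a pending flag, apply its effect, then service the request; a hedged userspace analogue (field names invented for illustration):

#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool reset_pending;	/* set by a PSCI-like event elsewhere */
	unsigned long reg;
};

static void apply_reset(struct vcpu *v)
{
	v->reg = 0;
}

/* Consume any pending reset *before* reading state, as the fix does. */
static unsigned long get_reg(struct vcpu *v)
{
	if (v->reset_pending) {
		v->reset_pending = false;
		apply_reset(v);
	}
	return v->reg;
}

int main(void)
{
	struct vcpu v = { .reset_pending = true, .reg = 0xdead };

	printf("reg after ordered read: %#lx\n", get_reg(&v));
	return 0;
}
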
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index 6f48336b1d86a..04ebab299aa4e 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -292,11 +292,12 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
+ 		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
+ }
+ 
+-void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
++void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
++					      u64 elr_virt, u64 elr_phys,
+ 					      u64 par, uintptr_t vcpu,
+ 					      u64 far, u64 hpfar) {
+-	u64 elr_in_kimg = __phys_to_kimg(__hyp_pa(elr));
+-	u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr;
++	u64 elr_in_kimg = __phys_to_kimg(elr_phys);
++	u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr_virt;
+ 	u64 mode = spsr & PSR_MODE_MASK;
+ 
+ 	/*
+@@ -309,20 +310,24 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
+ 		kvm_err("Invalid host exception to nVHE hyp!\n");
+ 	} else if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
+ 		   (esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == BUG_BRK_IMM) {
+-		struct bug_entry *bug = find_bug(elr_in_kimg);
+ 		const char *file = NULL;
+ 		unsigned int line = 0;
+ 
+ 		/* All hyp bugs, including warnings, are treated as fatal. */
+-		if (bug)
+-			bug_get_file_line(bug, &file, &line);
++		if (!is_protected_kvm_enabled() ||
++		    IS_ENABLED(CONFIG_NVHE_EL2_DEBUG)) {
++			struct bug_entry *bug = find_bug(elr_in_kimg);
++
++			if (bug)
++				bug_get_file_line(bug, &file, &line);
++		}
+ 
+ 		if (file)
+ 			kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
+ 		else
+-			kvm_err("nVHE hyp BUG at: %016llx!\n", elr + hyp_offset);
++			kvm_err("nVHE hyp BUG at: %016llx!\n", elr_virt + hyp_offset);
+ 	} else {
+-		kvm_err("nVHE hyp panic at: %016llx!\n", elr + hyp_offset);
++		kvm_err("nVHE hyp panic at: %016llx!\n", elr_virt + hyp_offset);
+ 	}
+ 
+ 	/*
+@@ -334,5 +339,5 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr,
+ 	kvm_err("Hyp Offset: 0x%llx\n", hyp_offset);
+ 
+ 	panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n",
+-	      spsr, elr, esr, far, hpfar, par, vcpu);
++	      spsr, elr_virt, esr, far, hpfar, par, vcpu);
+ }
+diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
+index 2b23400e0fb30..4b652ffb591d4 100644
+--- a/arch/arm64/kvm/hyp/nvhe/host.S
++++ b/arch/arm64/kvm/hyp/nvhe/host.S
+@@ -7,6 +7,7 @@
+ #include <linux/linkage.h>
+ 
+ #include <asm/assembler.h>
++#include <asm/kvm_arm.h>
+ #include <asm/kvm_asm.h>
+ #include <asm/kvm_mmu.h>
+ 
+@@ -85,12 +86,24 @@ SYM_FUNC_START(__hyp_do_panic)
+ 
+ 	mov	x29, x0
+ 
++#ifdef CONFIG_NVHE_EL2_DEBUG
++	/* Ensure host stage-2 is disabled */
++	mrs	x0, hcr_el2
++	bic	x0, x0, #HCR_VM
++	msr	hcr_el2, x0
++	isb
++	tlbi	vmalls12e1
++	dsb	nsh
++#endif
++
+ 	/* Load the panic arguments into x0-7 */
+ 	mrs	x0, esr_el2
+-	get_vcpu_ptr x4, x5
+-	mrs	x5, far_el2
+-	mrs	x6, hpfar_el2
+-	mov	x7, xzr			// Unused argument
++	mov	x4, x3
++	mov	x3, x2
++	hyp_pa	x3, x6
++	get_vcpu_ptr x5, x6
++	mrs	x6, far_el2
++	mrs	x7, hpfar_el2
+ 
+ 	/* Enter the host, conditionally restoring the host context. */
+ 	cbz	x29, __host_enter_without_restoring
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index cba7872d69a85..d010778b93ffe 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -210,10 +210,16 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
+  */
+ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ {
++	struct vcpu_reset_state reset_state;
+ 	int ret;
+ 	bool loaded;
+ 	u32 pstate;
+ 
++	mutex_lock(&vcpu->kvm->lock);
++	reset_state = vcpu->arch.reset_state;
++	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
++	mutex_unlock(&vcpu->kvm->lock);
++
+ 	/* Reset PMU outside of the non-preemptible section */
+ 	kvm_pmu_vcpu_reset(vcpu);
+ 
+@@ -276,8 +282,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 	 * Additional reset state handling that PSCI may have imposed on us.
+ 	 * Must be done after all the sys_reg reset.
+ 	 */
+-	if (vcpu->arch.reset_state.reset) {
+-		unsigned long target_pc = vcpu->arch.reset_state.pc;
++	if (reset_state.reset) {
++		unsigned long target_pc = reset_state.pc;
+ 
+ 		/* Gracefully handle Thumb2 entry point */
+ 		if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
+@@ -286,13 +292,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 		}
+ 
+ 		/* Propagate caller endianness */
+-		if (vcpu->arch.reset_state.be)
++		if (reset_state.be)
+ 			kvm_vcpu_set_be(vcpu);
+ 
+ 		*vcpu_pc(vcpu) = target_pc;
+-		vcpu_set_reg(vcpu, 0, vcpu->arch.reset_state.r0);
+-
+-		vcpu->arch.reset_state.reset = false;
++		vcpu_set_reg(vcpu, 0, reset_state.r0);
+ 	}
+ 
+ 	/* Reset timer */
+@@ -317,6 +321,14 @@ int kvm_set_ipa_limit(void)
+ 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ 	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+ 				ID_AA64MMFR0_PARANGE_SHIFT);
++	/*
++	 * IPA size beyond 48 bits could not be supported
++	 * on either 4K or 16K page size. Hence let's cap
++	 * it to 48 bits, in case it's reported as larger
++	 * on the system.
++	 */
++	if (PAGE_SIZE != SZ_64K)
++		parange = min(parange, (unsigned int)ID_AA64MMFR0_PARANGE_48);
+ 
+ 	/*
+ 	 * Check with ARMv8.5-GTG that our PAGE_SIZE is supported at
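
kvm_reset_vcpu() now copies arch.reset_state to a local under the kvm lock and clears the pending flag there, so a racing PSCI call cannot rewrite the state halfway through the reset. The snapshot-under-lock idiom, sketched with pthreads (field names are illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct reset_state {
	bool reset;
	unsigned long pc;
};

static pthread_mutex_t kvm_lock = PTHREAD_MUTEX_INITIALIZER;
static struct reset_state shared = { true, 0x80000000UL };

static void do_reset(void)
{
	struct reset_state snap;

	/* Copy out and clear the shared state in one critical section. */
	pthread_mutex_lock(&kvm_lock);
	snap = shared;
	shared.reset = false;
	pthread_mutex_unlock(&kvm_lock);

	/* From here on, only the private snapshot is consulted. */
	if (snap.reset)
		printf("resetting, target pc = %#lx\n", snap.pc);
}

int main(void)
{
	do_reset();
	return 0;
}
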
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index 21bbd615ca410..ec4e2d3635077 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -19,6 +19,7 @@
+ #include <asm/switch_to.h>
+ #include <asm/syscall.h>
+ #include <asm/time.h>
++#include <asm/tm.h>
+ #include <asm/unistd.h>
+ 
+ #if defined(CONFIG_PPC_ADV_DEBUG_REGS) && defined(CONFIG_PPC32)
+@@ -138,6 +139,48 @@ notrace long system_call_exception(long r3, long r4, long r5,
+ 	 */
+ 	irq_soft_mask_regs_set_state(regs, IRQS_ENABLED);
+ 
++	/*
++	 * If system call is called with TM active, set _TIF_RESTOREALL to
++	 * prevent RFSCV being used to return to userspace, because POWER9
++	 * TM implementation has problems with this instruction returning to
++	 * transactional state. Final register values are not relevant because
++	 * the transaction will be aborted upon return anyway. Or in the case
++	 * of unsupported_scv SIGILL fault, the return state does not much
++	 * matter because it's an edge case.
++	 */
++	if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
++			unlikely(MSR_TM_TRANSACTIONAL(regs->msr)))
++		current_thread_info()->flags |= _TIF_RESTOREALL;
++
++	/*
++	 * If the system call was made with a transaction active, doom it and
++	 * return without performing the system call. Unless it was an
++	 * unsupported scv vector, in which case it's treated like an illegal
++	 * instruction.
++	 */
++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
++	if (unlikely(MSR_TM_TRANSACTIONAL(regs->msr)) &&
++	    !trap_is_unsupported_scv(regs)) {
++		/* Enable TM in the kernel, and disable EE (for scv) */
++		hard_irq_disable();
++		mtmsr(mfmsr() | MSR_TM);
++
++		/* tabort, this dooms the transaction, nothing else */
++		asm volatile(".long 0x7c00071d | ((%0) << 16)"
++				:: "r"(TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT));
++
++		/*
++		 * Userspace will never see the return value. Execution will
++		 * resume after the tbegin. of the aborted transaction with the
++		 * checkpointed register state. A context switch could occur
++		 * or signal delivered to the process before resuming the
++		 * doomed transaction context, but that should all be handled
++		 * as expected.
++		 */
++		return -ENOSYS;
++	}
++#endif // CONFIG_PPC_TRANSACTIONAL_MEM
++
+ 	local_irq_enable();
+ 
+ 	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
+diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
+index d4212d2ff0b54..ec950b08a8dcc 100644
+--- a/arch/powerpc/kernel/interrupt_64.S
++++ b/arch/powerpc/kernel/interrupt_64.S
+@@ -12,7 +12,6 @@
+ #include <asm/mmu.h>
+ #include <asm/ppc_asm.h>
+ #include <asm/ptrace.h>
+-#include <asm/tm.h>
+ 
+ 	.section	".toc","aw"
+ SYS_CALL_TABLE:
+@@ -55,12 +54,6 @@ COMPAT_SYS_CALL_TABLE:
+ 	.globl system_call_vectored_\name
+ system_call_vectored_\name:
+ _ASM_NOKPROBE_SYMBOL(system_call_vectored_\name)
+-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+-BEGIN_FTR_SECTION
+-	extrdi.	r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
+-	bne	tabort_syscall
+-END_FTR_SECTION_IFSET(CPU_FTR_TM)
+-#endif
+ 	SCV_INTERRUPT_TO_KERNEL
+ 	mr	r10,r1
+ 	ld	r1,PACAKSAVE(r13)
+@@ -247,12 +240,6 @@ _ASM_NOKPROBE_SYMBOL(system_call_common_real)
+ 	.globl system_call_common
+ system_call_common:
+ _ASM_NOKPROBE_SYMBOL(system_call_common)
+-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+-BEGIN_FTR_SECTION
+-	extrdi.	r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
+-	bne	tabort_syscall
+-END_FTR_SECTION_IFSET(CPU_FTR_TM)
+-#endif
+ 	mr	r10,r1
+ 	ld	r1,PACAKSAVE(r13)
+ 	std	r10,0(r1)
+@@ -425,34 +412,6 @@ SOFT_MASK_TABLE(.Lsyscall_rst_start, 1b)
+ RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
+ #endif
+ 
+-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+-tabort_syscall:
+-_ASM_NOKPROBE_SYMBOL(tabort_syscall)
+-	/* Firstly we need to enable TM in the kernel */
+-	mfmsr	r10
+-	li	r9, 1
+-	rldimi	r10, r9, MSR_TM_LG, 63-MSR_TM_LG
+-	mtmsrd	r10, 0
+-
+-	/* tabort, this dooms the transaction, nothing else */
+-	li	r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
+-	TABORT(R9)
+-
+-	/*
+-	 * Return directly to userspace. We have corrupted user register state,
+-	 * but userspace will never see that register state. Execution will
+-	 * resume after the tbegin of the aborted transaction with the
+-	 * checkpointed register state.
+-	 */
+-	li	r9, MSR_RI
+-	andc	r10, r10, r9
+-	mtmsrd	r10, 1
+-	mtspr	SPRN_SRR0, r11
+-	mtspr	SPRN_SRR1, r12
+-	RFI_TO_USER
+-	b	.	/* prevent speculative execution */
+-#endif
+-
+ 	/*
+ 	 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
+ 	 * touched, no exit work created, then this can be used.
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index 47a683cd00d24..fd829f7f25a47 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -249,6 +249,7 @@ void machine_check_queue_event(void)
+ {
+ 	int index;
+ 	struct machine_check_event evt;
++	unsigned long msr;
+ 
+ 	if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
+ 		return;
+@@ -262,8 +263,20 @@ void machine_check_queue_event(void)
+ 	memcpy(&local_paca->mce_info->mce_event_queue[index],
+ 	       &evt, sizeof(evt));
+ 
+-	/* Queue irq work to process this event later. */
+-	irq_work_queue(&mce_event_process_work);
++	/*
++	 * Queue irq work to process this event later. Before
++	 * queuing the work enable translation for non radix LPAR,
++	 * as irq_work_queue may try to access memory outside RMO
++	 * region.
++	 */
++	if (!radix_enabled() && firmware_has_feature(FW_FEATURE_LPAR)) {
++		msr = mfmsr();
++		mtmsr(msr | MSR_IR | MSR_DR);
++		irq_work_queue(&mce_event_process_work);
++		mtmsr(msr);
++	} else {
++		irq_work_queue(&mce_event_process_work);
++	}
+ }
+ 
+ void mce_common_process_ue(struct pt_regs *regs,
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 8dd437d7a2c63..dd18e1c447512 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -2578,7 +2578,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST)
+ 	/* The following code handles the fake_suspend = 1 case */
+ 	mflr	r0
+ 	std	r0, PPC_LR_STKOFF(r1)
+-	stdu	r1, -PPC_MIN_STKFRM(r1)
++	stdu	r1, -TM_FRAME_SIZE(r1)
+ 
+ 	/* Turn on TM. */
+ 	mfmsr	r8
+@@ -2593,10 +2593,42 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
+ 	nop
+ 
++	/*
++	 * It's possible that treclaim. may modify registers, if we have lost
++	 * track of fake-suspend state in the guest due to it using rfscv.
++	 * Save and restore registers in case this occurs.
++	 */
++	mfspr	r3, SPRN_DSCR
++	mfspr	r4, SPRN_XER
++	mfspr	r5, SPRN_AMR
++	/* SPRN_TAR would need to be saved here if the kernel ever used it */
++	mfcr	r12
++	SAVE_NVGPRS(r1)
++	SAVE_GPR(2, r1)
++	SAVE_GPR(3, r1)
++	SAVE_GPR(4, r1)
++	SAVE_GPR(5, r1)
++	stw	r12, 8(r1)
++	std	r1, HSTATE_HOST_R1(r13)
++
+ 	/* We have to treclaim here because that's the only way to do S->N */
+ 	li	r3, TM_CAUSE_KVM_RESCHED
+ 	TRECLAIM(R3)
+ 
++	GET_PACA(r13)
++	ld	r1, HSTATE_HOST_R1(r13)
++	REST_GPR(2, r1)
++	REST_GPR(3, r1)
++	REST_GPR(4, r1)
++	REST_GPR(5, r1)
++	lwz	r12, 8(r1)
++	REST_NVGPRS(r1)
++	mtspr	SPRN_DSCR, r3
++	mtspr	SPRN_XER, r4
++	mtspr	SPRN_AMR, r5
++	mtcr	r12
++	HMT_MEDIUM
++
+ 	/*
+ 	 * We were in fake suspend, so we are not going to save the
+ 	 * register state as the guest checkpointed state (since
+@@ -2624,7 +2656,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
+ 	std	r5, VCPU_TFHAR(r9)
+ 	std	r6, VCPU_TFIAR(r9)
+ 
+-	addi	r1, r1, PPC_MIN_STKFRM
++	addi	r1, r1, TM_FRAME_SIZE
+ 	ld	r0, PPC_LR_STKOFF(r1)
+ 	mtlr	r0
+ 	blr
+diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
+index b0ca5058e7ae6..767852ae5e84f 100644
+--- a/arch/riscv/include/asm/page.h
++++ b/arch/riscv/include/asm/page.h
+@@ -79,8 +79,8 @@ typedef struct page *pgtable_t;
+ #endif
+ 
+ #ifdef CONFIG_MMU
+-extern unsigned long pfn_base;
+-#define ARCH_PFN_OFFSET		(pfn_base)
++extern unsigned long riscv_pfn_base;
++#define ARCH_PFN_OFFSET		(riscv_pfn_base)
+ #else
+ #define ARCH_PFN_OFFSET		(PAGE_OFFSET >> PAGE_SHIFT)
+ #endif /* CONFIG_MMU */
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 7cb4f391d106f..9786100f3a140 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -234,8 +234,8 @@ static struct pt_alloc_ops _pt_ops __initdata;
+ #define pt_ops _pt_ops
+ #endif
+ 
+-unsigned long pfn_base __ro_after_init;
+-EXPORT_SYMBOL(pfn_base);
++unsigned long riscv_pfn_base __ro_after_init;
++EXPORT_SYMBOL(riscv_pfn_base);
+ 
+ pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+ pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+@@ -579,7 +579,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 	kernel_map.va_kernel_pa_offset = kernel_map.virt_addr - kernel_map.phys_addr;
+ #endif
+ 
+-	pfn_base = PFN_DOWN(kernel_map.phys_addr);
++	riscv_pfn_base = PFN_DOWN(kernel_map.phys_addr);
+ 
+ 	/*
+ 	 * Enforce boot alignment requirements of RV32 and
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 88419263a89a9..840d8594437d5 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -248,8 +248,7 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
+ 
+ #define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask)		\
+ ({								\
+-	/* Branch instruction needs 6 bytes */			\
+-	int rel = (addrs[(i) + (off) + 1] - (addrs[(i) + 1] - 6)) / 2;\
++	int rel = (addrs[(i) + (off) + 1] - jit->prg) / 2;	\
+ 	_EMIT6((op1) | reg(b1, b2) << 16 | (rel & 0xffff), (op2) | (mask));\
+ 	REG_SET_SEEN(b1);					\
+ 	REG_SET_SEEN(b2);					\
+@@ -761,10 +760,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9080000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
+-		if (!imm)
+-			break;
+-		/* alfi %dst,imm */
+-		EMIT6_IMM(0xc20b0000, dst_reg, imm);
++		if (imm != 0) {
++			/* alfi %dst,imm */
++			EMIT6_IMM(0xc20b0000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */
+@@ -786,17 +785,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9090000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
+-		if (!imm)
+-			break;
+-		/* alfi %dst,-imm */
+-		EMIT6_IMM(0xc20b0000, dst_reg, -imm);
++		if (imm != 0) {
++			/* alfi %dst,-imm */
++			EMIT6_IMM(0xc20b0000, dst_reg, -imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */
+ 		if (!imm)
+ 			break;
+-		/* agfi %dst,-imm */
+-		EMIT6_IMM(0xc2080000, dst_reg, -imm);
++		if (imm == -0x80000000) {
++			/* algfi %dst,0x80000000 */
++			EMIT6_IMM(0xc20a0000, dst_reg, 0x80000000);
++		} else {
++			/* agfi %dst,-imm */
++			EMIT6_IMM(0xc2080000, dst_reg, -imm);
++		}
+ 		break;
+ 	/*
+ 	 * BPF_MUL
+@@ -811,10 +815,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb90c0000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
+-		if (imm == 1)
+-			break;
+-		/* msfi %r5,imm */
+-		EMIT6_IMM(0xc2010000, dst_reg, imm);
++		if (imm != 1) {
++			/* msfi %r5,imm */
++			EMIT6_IMM(0xc2010000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */
+@@ -867,6 +871,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 			if (BPF_OP(insn->code) == BPF_MOD)
+ 				/* lhgi %dst,0 */
+ 				EMIT4_IMM(0xa7090000, dst_reg, 0);
++			else
++				EMIT_ZERO(dst_reg);
+ 			break;
+ 		}
+ 		/* lhi %w0,0 */
+@@ -999,10 +1005,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT4(0xb9820000, dst_reg, src_reg);
+ 		break;
+ 	case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
+-		if (!imm)
+-			break;
+-		/* xilf %dst,imm */
+-		EMIT6_IMM(0xc0070000, dst_reg, imm);
++		if (imm != 0) {
++			/* xilf %dst,imm */
++			EMIT6_IMM(0xc0070000, dst_reg, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */
+@@ -1033,10 +1039,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
+-		if (imm == 0)
+-			break;
+-		/* sll %dst,imm(%r0) */
+-		EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* sll %dst,imm(%r0) */
++			EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */
+@@ -1058,10 +1064,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
+-		if (imm == 0)
+-			break;
+-		/* srl %dst,imm(%r0) */
+-		EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* srl %dst,imm(%r0) */
++			EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */
+@@ -1083,10 +1089,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 		EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
+ 		break;
+ 	case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */
+-		if (imm == 0)
+-			break;
+-		/* sra %dst,imm(%r0) */
+-		EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
++		if (imm != 0) {
++			/* sra %dst,imm(%r0) */
++			EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
++		}
+ 		EMIT_ZERO(dst_reg);
+ 		break;
+ 	case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */
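
The EMIT6_PCREL change above computes the displacement from jit->prg, the offset at which the branch is being emitted, instead of reconstructing the branch address as addrs[i + 1] - 6, an assumption that can break once a BPF instruction expands to more than one native instruction; s390 relative branches encode the distance in halfwords. A toy version of that arithmetic:

#include <stdio.h>

/*
 * s390 branch-relative displacements are measured in halfwords from the
 * branch instruction itself. Computing them from the current emit offset
 * (as the fix does) stays correct no matter how many bytes the earlier
 * instructions occupied.
 */
static int rel_halfwords(unsigned int target_off, unsigned int branch_off)
{
	return (int)(target_off - branch_off) / 2;
}

int main(void)
{
	/* branch at byte offset 100 jumping to byte offset 160 */
	printf("disp = %d halfwords\n", rel_halfwords(160, 100));
	return 0;
}
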
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index ae683aa623ace..c5b35ea129cfa 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -159,7 +159,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
+ 
+ 	mmap_read_lock(current->mm);
+ 	ret = -EINVAL;
+-	vma = find_vma(current->mm, mmio_addr);
++	vma = vma_lookup(current->mm, mmio_addr);
+ 	if (!vma)
+ 		goto out_unlock_mmap;
+ 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+@@ -298,7 +298,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
+ 
+ 	mmap_read_lock(current->mm);
+ 	ret = -EINVAL;
+-	vma = find_vma(current->mm, mmio_addr);
++	vma = vma_lookup(current->mm, mmio_addr);
+ 	if (!vma)
+ 		goto out_unlock_mmap;
+ 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
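
find_vma() returns the first VMA whose end lies above the address, which may be a mapping that starts above it; vma_lookup() succeeds only when the address actually falls inside a mapping, which is what the mmio read/write paths require. A toy model of the difference (the structures here are stand-ins, not the kernel's):

#include <stddef.h>
#include <stdio.h>

struct vma { unsigned long start, end; };	/* [start, end) */

/* find_vma(): first region with end > addr -- addr may be below start */
static struct vma *find_vma(struct vma *v, size_t n, unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (v[i].end > addr)
			return &v[i];
	return NULL;
}

/* vma_lookup(): only a region that actually contains addr */
static struct vma *vma_lookup(struct vma *v, size_t n, unsigned long addr)
{
	struct vma *r = find_vma(v, n, addr);

	return (r && r->start <= addr) ? r : NULL;
}

int main(void)
{
	struct vma map[] = { { 0x2000, 0x3000 } };
	unsigned long addr = 0x1000;	/* in a hole below the mapping */

	printf("find_vma: %p, vma_lookup: %p\n",
	       (void *)find_vma(map, 1, addr),
	       (void *)vma_lookup(map, 1, addr));
	return 0;
}
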
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index c9fa7be3df82d..5c95d242f38d7 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -301,8 +301,8 @@ do {									\
+ 	unsigned int __gu_low, __gu_high;				\
+ 	const unsigned int __user *__gu_ptr;				\
+ 	__gu_ptr = (const void __user *)(ptr);				\
+-	__get_user_asm(__gu_low, ptr, "l", "=r", label);		\
+-	__get_user_asm(__gu_high, ptr+1, "l", "=r", label);		\
++	__get_user_asm(__gu_low, __gu_ptr, "l", "=r", label);		\
++	__get_user_asm(__gu_high, __gu_ptr+1, "l", "=r", label);	\
+ 	(x) = ((unsigned long long)__gu_high << 32) | __gu_low;		\
+ } while (0)
+ #else
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 8cb7816d03b4c..193204aee8801 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1253,6 +1253,9 @@ static void __mc_scan_banks(struct mce *m, struct pt_regs *regs, struct mce *fin
+ 
+ static void kill_me_now(struct callback_head *ch)
+ {
++	struct task_struct *p = container_of(ch, struct task_struct, mce_kill_me);
++
++	p->mce_count = 0;
+ 	force_sig(SIGBUS);
+ }
+ 
+@@ -1262,6 +1265,7 @@ static void kill_me_maybe(struct callback_head *cb)
+ 	int flags = MF_ACTION_REQUIRED;
+ 	int ret;
+ 
++	p->mce_count = 0;
+ 	pr_err("Uncorrected hardware memory error in user-access at %llx", p->mce_addr);
+ 
+ 	if (!p->mce_ripv)
+@@ -1290,17 +1294,34 @@ static void kill_me_maybe(struct callback_head *cb)
+ 	}
+ }
+ 
+-static void queue_task_work(struct mce *m, int kill_current_task)
++static void queue_task_work(struct mce *m, char *msg, int kill_current_task)
+ {
+-	current->mce_addr = m->addr;
+-	current->mce_kflags = m->kflags;
+-	current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
+-	current->mce_whole_page = whole_page(m);
++	int count = ++current->mce_count;
+ 
+-	if (kill_current_task)
+-		current->mce_kill_me.func = kill_me_now;
+-	else
+-		current->mce_kill_me.func = kill_me_maybe;
++	/* First call, save all the details */
++	if (count == 1) {
++		current->mce_addr = m->addr;
++		current->mce_kflags = m->kflags;
++		current->mce_ripv = !!(m->mcgstatus & MCG_STATUS_RIPV);
++		current->mce_whole_page = whole_page(m);
++
++		if (kill_current_task)
++			current->mce_kill_me.func = kill_me_now;
++		else
++			current->mce_kill_me.func = kill_me_maybe;
++	}
++
++	/* Ten is likely overkill. Don't expect more than two faults before task_work() */
++	if (count > 10)
++		mce_panic("Too many consecutive machine checks while accessing user data", m, msg);
++
++	/* Second or later call, make sure page address matches the one from first call */
++	if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
++		mce_panic("Consecutive machine checks to different user pages", m, msg);
++
++	/* Do not call task_work_add() more than once */
++	if (count > 1)
++		return;
+ 
+ 	task_work_add(current, &current->mce_kill_me, TWA_RESUME);
+ }
+@@ -1438,7 +1459,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		/* If this triggers there is no way to recover. Die hard. */
+ 		BUG_ON(!on_thread_stack() || !user_mode(regs));
+ 
+-		queue_task_work(&m, kill_current_task);
++		queue_task_work(&m, msg, kill_current_task);
+ 
+ 	} else {
+ 		/*
+@@ -1456,7 +1477,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ 		}
+ 
+ 		if (m.kflags & MCE_IN_KERNEL_COPYIN)
+-			queue_task_work(&m, kill_current_task);
++			queue_task_work(&m, msg, kill_current_task);
+ 	}
+ out:
+ 	mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
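
queue_task_work() now copes with several #MC hits landing before the task can return to user mode: details are recorded on the first hit only, repeated faults to a different user page (or more than ten in a row) escalate to mce_panic(), and task_work_add() runs exactly once. A skeleton of that once-only queueing logic, with the kernel calls replaced by prints:

#include <stdio.h>
#include <stdlib.h>

static int mce_count;		/* a per-task counter in the real code */
static unsigned long saved_addr;

static void queue_task_work(unsigned long addr)
{
	int count = ++mce_count;

	if (count == 1)
		saved_addr = addr;	/* first call: record the details */

	if (count > 10) {		/* something is badly wrong */
		fputs("too many consecutive machine checks\n", stderr);
		exit(1);
	}
	if (count > 1 && saved_addr != addr) {
		fputs("consecutive machine checks to different pages\n", stderr);
		exit(1);
	}
	if (count > 1)			/* work is already queued */
		return;

	puts("task_work_add() called once");
}

int main(void)
{
	queue_task_work(0x1000);
	queue_task_work(0x1000);	/* same page: silently coalesced */
	return 0;
}
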
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index ddeaba947eb3d..879886c6cc537 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1433,18 +1433,18 @@ int kern_addr_valid(unsigned long addr)
+ 		return 0;
+ 
+ 	p4d = p4d_offset(pgd, addr);
+-	if (p4d_none(*p4d))
++	if (!p4d_present(*p4d))
+ 		return 0;
+ 
+ 	pud = pud_offset(p4d, addr);
+-	if (pud_none(*pud))
++	if (!pud_present(*pud))
+ 		return 0;
+ 
+ 	if (pud_large(*pud))
+ 		return pfn_valid(pud_pfn(*pud));
+ 
+ 	pmd = pmd_offset(pud, addr);
+-	if (pmd_none(*pmd))
++	if (!pmd_present(*pmd))
+ 		return 0;
+ 
+ 	if (pmd_large(*pmd))
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index 3112ca7786ed1..4ba2a3ee4bce1 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -583,7 +583,12 @@ int memtype_reserve(u64 start, u64 end, enum page_cache_mode req_type,
+ 	int err = 0;
+ 
+ 	start = sanitize_phys(start);
+-	end = sanitize_phys(end);
++
++	/*
++	 * The end address passed into this function is exclusive, but
++	 * sanitize_phys() expects an inclusive address.
++	 */
++	end = sanitize_phys(end - 1) + 1;
+ 	if (start >= end) {
+ 		WARN(1, "%s failed: [mem %#010Lx-%#010Lx], req %s\n", __func__,
+ 				start, end - 1, cattr_name(req_type));
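
memtype_reserve() receives an exclusive end address while sanitize_phys() clamps an inclusive one, so the fix sanitizes end - 1 and converts back. The two conventions in a compilable nutshell (the clamp limit below is a made-up example, not the real boot_cpu_data value):

#include <stdio.h>

/* Pretend clamp to the last valid physical byte (inclusive). */
static unsigned long long sanitize_phys(unsigned long long addr)
{
	const unsigned long long max_phys = 0xffffffffULL;	/* example limit */

	return addr > max_phys ? max_phys : addr;
}

int main(void)
{
	unsigned long long start = 0xfffff000ULL;
	unsigned long long end = 0x100000000ULL;	/* exclusive */

	/* sanitize the inclusive last byte, then convert back to exclusive */
	unsigned long long fixed = sanitize_phys(end - 1) + 1;

	printf("range [%#llx, %#llx) -> end %#llx\n", start, end, fixed);
	return 0;
}
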
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 03149422dce2b..475d9c71b1713 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1215,6 +1215,11 @@ static void __init xen_dom0_set_legacy_features(void)
+ 	x86_platform.legacy.rtc = 1;
+ }
+ 
++static void __init xen_domu_set_legacy_features(void)
++{
++	x86_platform.legacy.rtc = 0;
++}
++
+ /* First C function to be called on Xen boot */
+ asmlinkage __visible void __init xen_start_kernel(void)
+ {
+@@ -1367,6 +1372,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 		add_preferred_console("xenboot", 0, NULL);
+ 		if (pci_xen)
+ 			x86_init.pci.arch_init = pci_xen_init;
++		x86_platform.set_legacy_features =
++				xen_domu_set_legacy_features;
+ 	} else {
+ 		const struct dom0_vga_console_info *info =
+ 			(void *)((char *)xen_start_info +
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index ade789e73ee42..167c4958cdf40 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1518,14 +1518,17 @@ static inline void xen_alloc_ptpage(struct mm_struct *mm, unsigned long pfn,
+ 	if (pinned) {
+ 		struct page *page = pfn_to_page(pfn);
+ 
+-		if (static_branch_likely(&xen_struct_pages_ready))
++		pinned = false;
++		if (static_branch_likely(&xen_struct_pages_ready)) {
++			pinned = PagePinned(page);
+ 			SetPagePinned(page);
++		}
+ 
+ 		xen_mc_batch();
+ 
+ 		__set_pfn_prot(pfn, PAGE_KERNEL_RO);
+ 
+-		if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
++		if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS && !pinned)
+ 			__pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
+ 
+ 		xen_mc_issue(PARAVIRT_LAZY_MMU);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 9360c65169ff4..3a1038b6eeb30 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2662,6 +2662,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
++	/*
++	 * The above assignment schedules the following redirections:
++	 * each time some I/O for bfqq arrives, the process that
++	 * generated that I/O is disassociated from bfqq and
++	 * associated with new_bfqq. Here we increases new_bfqq->ref
++	 * in advance, adding the number of processes that are
++	 * expected to be associated with new_bfqq as they happen to
++	 * issue I/O.
++	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2724,6 +2733,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
++	/* if a merge has already been setup, then proceed with that first */
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	/*
+ 	 * Check delayed stable merge for rotational or non-queueing
+ 	 * devs. For this branch to be executed, bfqq must not be
+@@ -2825,9 +2838,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 31fe9be179d99..26446f97deee4 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1201,10 +1201,6 @@ int blkcg_init_queue(struct request_queue *q)
+ 	if (preloaded)
+ 		radix_tree_preload_end();
+ 
+-	ret = blk_iolatency_init(q);
+-	if (ret)
+-		goto err_destroy_all;
+-
+ 	ret = blk_ioprio_init(q);
+ 	if (ret)
+ 		goto err_destroy_all;
+@@ -1213,6 +1209,12 @@ int blkcg_init_queue(struct request_queue *q)
+ 	if (ret)
+ 		goto err_destroy_all;
+ 
++	ret = blk_iolatency_init(q);
++	if (ret) {
++		blk_throtl_exit(q);
++		goto err_destroy_all;
++	}
++
+ 	return 0;
+ 
+ err_destroy_all:
+diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c
+index a97f33d0c59f9..94665037f4a35 100644
+--- a/drivers/base/power/trace.c
++++ b/drivers/base/power/trace.c
+@@ -13,6 +13,7 @@
+ #include <linux/export.h>
+ #include <linux/rtc.h>
+ #include <linux/suspend.h>
++#include <linux/init.h>
+ 
+ #include <linux/mc146818rtc.h>
+ 
+@@ -165,6 +166,9 @@ void generate_pm_trace(const void *tracedata, unsigned int user)
+ 	const char *file = *(const char **)(tracedata + 2);
+ 	unsigned int user_hash_value, file_hash_value;
+ 
++	if (!x86_platform.legacy.rtc)
++		return;
++
+ 	user_hash_value = user % USERHASH;
+ 	file_hash_value = hash_string(lineno, file, FILEHASH);
+ 	set_magic_time(user_hash_value, file_hash_value, dev_hash_value);
+@@ -267,6 +271,9 @@ static struct notifier_block pm_trace_nb = {
+ 
+ static int __init early_resume_init(void)
+ {
++	if (!x86_platform.legacy.rtc)
++		return 0;
++
+ 	hash_value_early_read = read_magic_time();
+ 	register_pm_notifier(&pm_trace_nb);
+ 	return 0;
+@@ -277,6 +284,9 @@ static int __init late_resume_init(void)
+ 	unsigned int val = hash_value_early_read;
+ 	unsigned int user, file, dev;
+ 
++	if (!x86_platform.legacy.rtc)
++		return 0;
++
+ 	user = val % USERHASH;
+ 	val = val / USERHASH;
+ 	file = val % FILEHASH;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index f0cdff0c5fbf4..1f91bd41a29b2 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -2113,18 +2113,6 @@ int loop_register_transfer(struct loop_func_table *funcs)
+ 	return 0;
+ }
+ 
+-static int unregister_transfer_cb(int id, void *ptr, void *data)
+-{
+-	struct loop_device *lo = ptr;
+-	struct loop_func_table *xfer = data;
+-
+-	mutex_lock(&lo->lo_mutex);
+-	if (lo->lo_encryption == xfer)
+-		loop_release_xfer(lo);
+-	mutex_unlock(&lo->lo_mutex);
+-	return 0;
+-}
+-
+ int loop_unregister_transfer(int number)
+ {
+ 	unsigned int n = number;
+@@ -2132,9 +2120,20 @@ int loop_unregister_transfer(int number)
+ 
+ 	if (n == 0 || n >= MAX_LO_CRYPT || (xfer = xfer_funcs[n]) == NULL)
+ 		return -EINVAL;
++	/*
++	 * This function is called from only cleanup_cryptoloop().
++	 * Given that each loop device that has a transfer enabled holds a
++	 * reference to the module implementing it we should never get here
++	 * with a transfer that is set (unless forced module unloading is
++	 * requested). Thus, check module's refcount and warn if this is
++	 * not a clean unloading.
++	 */
++#ifdef CONFIG_MODULE_UNLOAD
++	if (xfer->owner && module_refcount(xfer->owner) != -1)
++		pr_err("Danger! Unregistering an in use transfer function.\n");
++#endif
+ 
+ 	xfer_funcs[n] = NULL;
+-	idr_for_each(&loop_index_idr, &unregister_transfer_cb, xfer);
+ 	return 0;
+ }
+ 
+@@ -2325,8 +2324,9 @@ static int loop_add(int i)
+ 	} else {
+ 		err = idr_alloc(&loop_index_idr, lo, 0, 0, GFP_KERNEL);
+ 	}
++	mutex_unlock(&loop_ctl_mutex);
+ 	if (err < 0)
+-		goto out_unlock;
++		goto out_free_dev;
+ 	i = err;
+ 
+ 	err = -ENOMEM;
+@@ -2392,15 +2392,19 @@ static int loop_add(int i)
+ 	disk->private_data	= lo;
+ 	disk->queue		= lo->lo_queue;
+ 	sprintf(disk->disk_name, "loop%d", i);
++	/* Make this loop device reachable from pathname. */
+ 	add_disk(disk);
++	/* Show this loop device. */
++	mutex_lock(&loop_ctl_mutex);
++	lo->idr_visible = true;
+ 	mutex_unlock(&loop_ctl_mutex);
+ 	return i;
+ 
+ out_cleanup_tags:
+ 	blk_mq_free_tag_set(&lo->tag_set);
+ out_free_idr:
++	mutex_lock(&loop_ctl_mutex);
+ 	idr_remove(&loop_index_idr, i);
+-out_unlock:
+ 	mutex_unlock(&loop_ctl_mutex);
+ out_free_dev:
+ 	kfree(lo);
+@@ -2410,9 +2414,14 @@ out:
+ 
+ static void loop_remove(struct loop_device *lo)
+ {
++	/* Make this loop device unreachable from pathname. */
+ 	del_gendisk(lo->lo_disk);
+ 	blk_cleanup_disk(lo->lo_disk);
+ 	blk_mq_free_tag_set(&lo->tag_set);
++	mutex_lock(&loop_ctl_mutex);
++	idr_remove(&loop_index_idr, lo->lo_number);
++	mutex_unlock(&loop_ctl_mutex);
++	/* There is no route which can find this loop device. */
+ 	mutex_destroy(&lo->lo_mutex);
+ 	kfree(lo);
+ }
+@@ -2436,31 +2445,40 @@ static int loop_control_remove(int idx)
+ 		return -EINVAL;
+ 	}
+ 		
++	/* Hide this loop device for serialization. */
+ 	ret = mutex_lock_killable(&loop_ctl_mutex);
+ 	if (ret)
+ 		return ret;
+-
+ 	lo = idr_find(&loop_index_idr, idx);
+-	if (!lo) {
++	if (!lo || !lo->idr_visible)
+ 		ret = -ENODEV;
+-		goto out_unlock_ctrl;
+-	}
++	else
++		lo->idr_visible = false;
++	mutex_unlock(&loop_ctl_mutex);
++	if (ret)
++		return ret;
+ 
++	/* Check whether this loop device can be removed. */
+ 	ret = mutex_lock_killable(&lo->lo_mutex);
+ 	if (ret)
+-		goto out_unlock_ctrl;
++		goto mark_visible;
+ 	if (lo->lo_state != Lo_unbound ||
+ 	    atomic_read(&lo->lo_refcnt) > 0) {
+ 		mutex_unlock(&lo->lo_mutex);
+ 		ret = -EBUSY;
+-		goto out_unlock_ctrl;
++		goto mark_visible;
+ 	}
++	/* Mark this loop device no longer open()-able. */
+ 	lo->lo_state = Lo_deleting;
+ 	mutex_unlock(&lo->lo_mutex);
+ 
+-	idr_remove(&loop_index_idr, lo->lo_number);
+ 	loop_remove(lo);
+-out_unlock_ctrl:
++	return 0;
++
++mark_visible:
++	/* Show this loop device again. */
++	mutex_lock(&loop_ctl_mutex);
++	lo->idr_visible = true;
+ 	mutex_unlock(&loop_ctl_mutex);
+ 	return ret;
+ }
+@@ -2474,7 +2492,8 @@ static int loop_control_get_free(int idx)
+ 	if (ret)
+ 		return ret;
+ 	idr_for_each_entry(&loop_index_idr, lo, id) {
+-		if (lo->lo_state == Lo_unbound)
++		/* Hitting a race results in creating a new loop device which is harmless. */
++		if (lo->idr_visible && data_race(lo->lo_state) == Lo_unbound)
+ 			goto found;
+ 	}
+ 	mutex_unlock(&loop_ctl_mutex);
+@@ -2590,10 +2609,14 @@ static void __exit loop_exit(void)
+ 	unregister_blkdev(LOOP_MAJOR, "loop");
+ 	misc_deregister(&loop_misc);
+ 
+-	mutex_lock(&loop_ctl_mutex);
++	/*
++	 * There is no need to use loop_ctl_mutex here, for nobody else can
++	 * access loop_index_idr when this module is unloading (unless forced
++	 * module unloading is requested). If this is not a clean unloading,
++	 * we have no means to avoid kernel crash.
++	 */
+ 	idr_for_each_entry(&loop_index_idr, lo, id)
+ 		loop_remove(lo);
+-	mutex_unlock(&loop_ctl_mutex);
+ 
+ 	idr_destroy(&loop_index_idr);
+ }
+diff --git a/drivers/block/loop.h b/drivers/block/loop.h
+index 1988899db63ac..04c88dd6eabd6 100644
+--- a/drivers/block/loop.h
++++ b/drivers/block/loop.h
+@@ -68,6 +68,7 @@ struct loop_device {
+ 	struct blk_mq_tag_set	tag_set;
+ 	struct gendisk		*lo_disk;
+ 	struct mutex		lo_mutex;
++	bool			idr_visible;
+ };
+ 
+ struct loop_cmd {
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 50b321a1ab1b6..d574e8cb6d7cd 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -332,7 +332,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 				 mpc8xxx_gc->regs + GPIO_DIR, NULL,
+ 				 BGPIOF_BIG_ENDIAN);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 		dev_dbg(&pdev->dev, "GPIO registers are LITTLE endian\n");
+ 	} else {
+ 		ret = bgpio_init(gc, &pdev->dev, 4,
+@@ -342,7 +342,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 				 BGPIOF_BIG_ENDIAN
+ 				 | BGPIOF_BIG_ENDIAN_BYTE_ORDER);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 		dev_dbg(&pdev->dev, "GPIO registers are BIG endian\n");
+ 	}
+ 
+@@ -380,11 +380,11 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 	    is_acpi_node(fwnode))
+ 		gc->write_reg(mpc8xxx_gc->regs + GPIO_IBE, 0xffffffff);
+ 
+-	ret = gpiochip_add_data(gc, mpc8xxx_gc);
++	ret = devm_gpiochip_add_data(&pdev->dev, gc, mpc8xxx_gc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev,
+ 			"GPIO chip registration failed with status %d\n", ret);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	mpc8xxx_gc->irqn = platform_get_irq(pdev, 0);
+@@ -416,7 +416,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ err:
+-	iounmap(mpc8xxx_gc->regs);
++	irq_domain_remove(mpc8xxx_gc->irq);
+ 	return ret;
+ }
+ 
+@@ -429,9 +429,6 @@ static int mpc8xxx_remove(struct platform_device *pdev)
+ 		irq_domain_remove(mpc8xxx_gc->irq);
+ 	}
+ 
+-	gpiochip_remove(&mpc8xxx_gc->gc);
+-	iounmap(mpc8xxx_gc->regs);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 8ac6eb9f1fdb8..177a663a6a691 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -757,7 +757,7 @@ enum amd_hw_ip_block_type {
+ 	MAX_HWIP
+ };
+ 
+-#define HWIP_MAX_INSTANCE	8
++#define HWIP_MAX_INSTANCE	10
+ 
+ struct amd_powerplay {
+ 	void *pp_handle;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index f9c01bdc3d4c7..ec472c244835c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -191,6 +191,16 @@ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm)
+ 		kgd2kfd_suspend(adev->kfd.dev, run_pm);
+ }
+ 
++int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev)
++{
++	int r = 0;
++
++	if (adev->kfd.dev)
++		r = kgd2kfd_resume_iommu(adev->kfd.dev);
++
++	return r;
++}
++
+ int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm)
+ {
+ 	int r = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index cf62f43a03da1..293dd0d595c7a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -137,6 +137,7 @@ int amdgpu_amdkfd_init(void);
+ void amdgpu_amdkfd_fini(void);
+ 
+ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm);
++int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev);
+ int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm);
+ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
+ 			const void *ih_ring_entry);
+@@ -325,6 +326,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 			 const struct kgd2kfd_shared_resources *gpu_resources);
+ void kgd2kfd_device_exit(struct kfd_dev *kfd);
+ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm);
++int kgd2kfd_resume_iommu(struct kfd_dev *kfd);
+ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm);
+ int kgd2kfd_pre_reset(struct kfd_dev *kfd);
+ int kgd2kfd_post_reset(struct kfd_dev *kfd);
+@@ -363,6 +365,11 @@ static inline void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
+ {
+ }
+ 
++static int __maybe_unused kgd2kfd_resume_iommu(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
+ static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
+ {
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 536005bff24ad..83db7d8fa1508 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -1544,20 +1544,18 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
+ 	struct dentry *ent;
+ 	int r, i;
+ 
+-
+-
+ 	ent = debugfs_create_file("amdgpu_preempt_ib", 0600, root, adev,
+ 				  &fops_ib_preempt);
+-	if (!ent) {
++	if (IS_ERR(ent)) {
+ 		DRM_ERROR("unable to create amdgpu_preempt_ib debugsfs file\n");
+-		return -EIO;
++		return PTR_ERR(ent);
+ 	}
+ 
+ 	ent = debugfs_create_file("amdgpu_force_sclk", 0200, root, adev,
+ 				  &fops_sclk_set);
+-	if (!ent) {
++	if (IS_ERR(ent)) {
+ 		DRM_ERROR("unable to create amdgpu_set_sclk debugsfs file\n");
+-		return -EIO;
++		return PTR_ERR(ent);
+ 	}
+ 
+ 	/* Register debugfs entries for amdgpu_ttm */
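
debugfs_create_file() signals failure with an ERR_PTR-encoded pointer rather than NULL, so the old if (!ent) tests above could never trip; the fix checks IS_ERR() and propagates PTR_ERR(). A userspace model of the ERR_PTR convention (the real macros live in include/linux/err.h; the stand-ins below just mimic them):

#include <stdio.h>

/* Userspace stand-ins for the kernel's ERR_PTR/IS_ERR/PTR_ERR macros. */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
#define PTR_ERR(ptr)	((long)(ptr))

static void *create_file(int fail)
{
	static int dummy;

	return fail ? ERR_PTR(-12) : (void *)&dummy;	/* -12 == -ENOMEM */
}

int main(void)
{
	void *ent = create_file(1);

	if (IS_ERR(ent))		/* a NULL check would miss this */
		printf("create failed: %ld\n", PTR_ERR(ent));

	ent = create_file(0);
	if (!IS_ERR(ent))
		puts("create succeeded");
	return 0;
}
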
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f944ed858f3e7..7b42636fc7dc6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2342,6 +2342,10 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 	if (r)
+ 		goto init_failed;
+ 
++	r = amdgpu_amdkfd_resume_iommu(adev);
++	if (r)
++		goto init_failed;
++
+ 	r = amdgpu_device_ip_hw_init_phase1(adev);
+ 	if (r)
+ 		goto init_failed;
+@@ -3096,6 +3100,10 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
+ {
+ 	int r;
+ 
++	r = amdgpu_amdkfd_resume_iommu(adev);
++	if (r)
++		return r;
++
+ 	r = amdgpu_device_ip_resume_phase1(adev);
+ 	if (r)
+ 		return r;
+@@ -4534,6 +4542,10 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ 				dev_warn(tmp_adev->dev, "asic atom init failed!");
+ 			} else {
+ 				dev_info(tmp_adev->dev, "GPU reset succeeded, trying to resume\n");
++				r = amdgpu_amdkfd_resume_iommu(tmp_adev);
++				if (r)
++					goto out;
++
+ 				r = amdgpu_device_ip_resume_phase1(tmp_adev);
+ 				if (r)
+ 					goto out;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 7b634a1517f9c..0554576d36955 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -428,8 +428,8 @@ int amdgpu_debugfs_ring_init(struct amdgpu_device *adev,
+ 	ent = debugfs_create_file(name,
+ 				  S_IFREG | S_IRUGO, root,
+ 				  ring, &amdgpu_debugfs_ring_fops);
+-	if (!ent)
+-		return -ENOMEM;
++	if (IS_ERR(ent))
++		return PTR_ERR(ent);
+ 
+ 	i_size_write(ent->d_inode, ring->ring_size + 12);
+ 	ring->ent = ent;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 3a55f08e00e1d..2335b596d892f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -513,6 +513,15 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
+ 		goto out;
+ 	}
+ 
++	if (bo->type == ttm_bo_type_device &&
++	    new_mem->mem_type == TTM_PL_VRAM &&
++	    old_mem->mem_type != TTM_PL_VRAM) {
++		/* amdgpu_bo_fault_reserve_notify will re-set this if the CPU
++		 * accesses the BO after it's moved.
++		 */
++		abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
++	}
++
+ 	if (adev->mman.buffer_funcs_enabled) {
+ 		if (((old_mem->mem_type == TTM_PL_SYSTEM &&
+ 		      new_mem->mem_type == TTM_PL_VRAM) ||
+@@ -543,15 +552,6 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
+ 			return r;
+ 	}
+ 
+-	if (bo->type == ttm_bo_type_device &&
+-	    new_mem->mem_type == TTM_PL_VRAM &&
+-	    old_mem->mem_type != TTM_PL_VRAM) {
+-		/* amdgpu_bo_fault_reserve_notify will re-set this if the CPU
+-		 * accesses the BO after it's moved.
+-		 */
+-		abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+-	}
+-
+ out:
+ 	/* update statistics */
+ 	atomic64_add(bo->base.size, &adev->num_bytes_moved);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 6b57dfd2cd2ac..9e52948d49920 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1008,17 +1008,21 @@ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
+ 	return ret;
+ }
+ 
+-static int kfd_resume(struct kfd_dev *kfd)
++int kgd2kfd_resume_iommu(struct kfd_dev *kfd)
+ {
+ 	int err = 0;
+ 
+ 	err = kfd_iommu_resume(kfd);
+-	if (err) {
++	if (err)
+ 		dev_err(kfd_device,
+ 			"Failed to resume IOMMU for device %x:%x\n",
+ 			kfd->pdev->vendor, kfd->pdev->device);
+-		return err;
+-	}
++	return err;
++}
++
++static int kfd_resume(struct kfd_dev *kfd)
++{
++	int err = 0;
+ 
+ 	err = kfd->dqm->ops.start(kfd->dqm);
+ 	if (err) {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3f913e4abd49e..6a4c6c47dcfaf 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -998,6 +998,8 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
+ 	uint32_t agp_base, agp_bot, agp_top;
+ 	PHYSICAL_ADDRESS_LOC page_table_start, page_table_end, page_table_base;
+ 
++	memset(pa_config, 0, sizeof(*pa_config));
++
+ 	logical_addr_low  = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
+ 	pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
+ 
+@@ -6778,14 +6780,15 @@ const struct drm_encoder_helper_funcs amdgpu_dm_encoder_helper_funcs = {
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
+-					    struct dc_state *dc_state)
++					    struct dc_state *dc_state,
++					    struct dsc_mst_fairness_vars *vars)
+ {
+ 	struct dc_stream_state *stream = NULL;
+ 	struct drm_connector *connector;
+ 	struct drm_connector_state *new_con_state;
+ 	struct amdgpu_dm_connector *aconnector;
+ 	struct dm_connector_state *dm_conn_state;
+-	int i, j, clock, bpp;
++	int i, j, clock;
+ 	int vcpi, pbn_div, pbn = 0;
+ 
+ 	for_each_new_connector_in_state(state, connector, new_con_state, i) {
+@@ -6824,9 +6827,15 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
+ 		}
+ 
+ 		pbn_div = dm_mst_get_pbn_divider(stream->link);
+-		bpp = stream->timing.dsc_cfg.bits_per_pixel;
+ 		clock = stream->timing.pix_clk_100hz / 10;
+-		pbn = drm_dp_calc_pbn_mode(clock, bpp, true);
++		/* pbn is calculated by compute_mst_dsc_configs_for_state*/
++		for (j = 0; j < dc_state->stream_count; j++) {
++			if (vars[j].aconnector == aconnector) {
++				pbn = vars[j].pbn;
++				break;
++			}
++		}
++
+ 		vcpi = drm_dp_mst_atomic_enable_dsc(state,
+ 						    aconnector->port,
+ 						    pbn, pbn_div,
+@@ -10208,6 +10217,9 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 	int ret, i;
+ 	bool lock_and_validation_needed = false;
+ 	struct dm_crtc_state *dm_old_crtc_state;
++#if defined(CONFIG_DRM_AMD_DC_DCN)
++	struct dsc_mst_fairness_vars vars[MAX_PIPES];
++#endif
+ 
+ 	trace_amdgpu_dm_atomic_check_begin(state);
+ 
+@@ -10438,10 +10450,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+-		if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
++		if (!compute_mst_dsc_configs_for_state(state, dm_state->context, vars))
+ 			goto fail;
+ 
+-		ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context);
++		ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context, vars);
+ 		if (ret)
+ 			goto fail;
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 5568d4e518e6b..a2e5ab0bd1a03 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -495,12 +495,7 @@ struct dsc_mst_fairness_params {
+ 	uint32_t num_slices_h;
+ 	uint32_t num_slices_v;
+ 	uint32_t bpp_overwrite;
+-};
+-
+-struct dsc_mst_fairness_vars {
+-	int pbn;
+-	bool dsc_enabled;
+-	int bpp_x16;
++	struct amdgpu_dm_connector *aconnector;
+ };
+ 
+ static int kbps_to_peak_pbn(int kbps)
+@@ -727,12 +722,12 @@ static void try_disable_dsc(struct drm_atomic_state *state,
+ 
+ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ 					     struct dc_state *dc_state,
+-					     struct dc_link *dc_link)
++					     struct dc_link *dc_link,
++					     struct dsc_mst_fairness_vars *vars)
+ {
+ 	int i;
+ 	struct dc_stream_state *stream;
+ 	struct dsc_mst_fairness_params params[MAX_PIPES];
+-	struct dsc_mst_fairness_vars vars[MAX_PIPES];
+ 	struct amdgpu_dm_connector *aconnector;
+ 	int count = 0;
+ 	bool debugfs_overwrite = false;
+@@ -753,6 +748,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ 		params[count].timing = &stream->timing;
+ 		params[count].sink = stream->sink;
+ 		aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
++		params[count].aconnector = aconnector;
+ 		params[count].port = aconnector->port;
+ 		params[count].clock_force_enable = aconnector->dsc_settings.dsc_force_enable;
+ 		if (params[count].clock_force_enable == DSC_CLK_FORCE_ENABLE)
+@@ -775,6 +771,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ 	}
+ 	/* Try no compression */
+ 	for (i = 0; i < count; i++) {
++		vars[i].aconnector = params[i].aconnector;
+ 		vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
+ 		vars[i].dsc_enabled = false;
+ 		vars[i].bpp_x16 = 0;
+@@ -828,7 +825,8 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ }
+ 
+ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+-				       struct dc_state *dc_state)
++				       struct dc_state *dc_state,
++				       struct dsc_mst_fairness_vars *vars)
+ {
+ 	int i, j;
+ 	struct dc_stream_state *stream;
+@@ -859,7 +857,7 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+ 			return false;
+ 
+ 		mutex_lock(&aconnector->mst_mgr.lock);
+-		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link)) {
++		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars)) {
+ 			mutex_unlock(&aconnector->mst_mgr.lock);
+ 			return false;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+index b38bd68121ceb..900d3f7a84989 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+@@ -39,8 +39,17 @@ void
+ dm_dp_create_fake_mst_encoders(struct amdgpu_device *adev);
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
++
++struct dsc_mst_fairness_vars {
++	int pbn;
++	bool dsc_enabled;
++	int bpp_x16;
++	struct amdgpu_dm_connector *aconnector;
++};
++
+ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
+-				       struct dc_state *dc_state);
++				       struct dc_state *dc_state,
++				       struct dsc_mst_fairness_vars *vars);
+ #endif
+ 
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 6132b645bfd19..29c861b54b440 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -2578,13 +2578,21 @@ static struct abm *get_abm_from_stream_res(const struct dc_link *link)
+ 
+ int dc_link_get_backlight_level(const struct dc_link *link)
+ {
+-
+ 	struct abm *abm = get_abm_from_stream_res(link);
++	struct panel_cntl *panel_cntl = link->panel_cntl;
++	struct dc  *dc = link->ctx->dc;
++	struct dmcu *dmcu = dc->res_pool->dmcu;
++	bool fw_set_brightness = true;
+ 
+-	if (abm == NULL || abm->funcs->get_current_backlight == NULL)
+-		return DC_ERROR_UNEXPECTED;
++	if (dmcu)
++		fw_set_brightness = dmcu->funcs->is_dmcu_initialized(dmcu);
+ 
+-	return (int) abm->funcs->get_current_backlight(abm);
++	if (!fw_set_brightness && panel_cntl->funcs->get_current_backlight)
++		return panel_cntl->funcs->get_current_backlight(panel_cntl);
++	else if (abm != NULL && abm->funcs->get_current_backlight != NULL)
++		return (int) abm->funcs->get_current_backlight(abm);
++	else
++		return DC_ERROR_UNEXPECTED;
+ }
+ 
+ int dc_link_get_target_backlight_pwm(const struct dc_link *link)
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+index e923392358631..e8570060d007b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+@@ -49,7 +49,6 @@
+ static unsigned int dce_get_16_bit_backlight_from_pwm(struct panel_cntl *panel_cntl)
+ {
+ 	uint64_t current_backlight;
+-	uint32_t round_result;
+ 	uint32_t bl_period, bl_int_count;
+ 	uint32_t bl_pwm, fractional_duty_cycle_en;
+ 	uint32_t bl_period_mask, bl_pwm_mask;
+@@ -84,15 +83,6 @@ static unsigned int dce_get_16_bit_backlight_from_pwm(struct panel_cntl *panel_c
+ 	current_backlight = div_u64(current_backlight, bl_period);
+ 	current_backlight = (current_backlight + 1) >> 1;
+ 
+-	current_backlight = (uint64_t)(current_backlight) * bl_period;
+-
+-	round_result = (uint32_t)(current_backlight & 0xFFFFFFFF);
+-
+-	round_result = (round_result >> (bl_int_count-1)) & 1;
+-
+-	current_backlight >>= bl_int_count;
+-	current_backlight += round_result;
+-
+ 	return (uint32_t)(current_backlight);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index ebe6721428085..42e72a16a1128 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1381,7 +1381,7 @@ static int smu_disable_dpms(struct smu_context *smu)
+ 	 */
+ 	if (smu->uploading_custom_pp_table &&
+ 	    (adev->asic_type >= CHIP_NAVI10) &&
+-	    (adev->asic_type <= CHIP_DIMGREY_CAVEFISH))
++	    (adev->asic_type <= CHIP_BEIGE_GOBY))
+ 		return smu_disable_all_features_with_exception(smu,
+ 							       true,
+ 							       SMU_FEATURE_COUNT);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 1ba42b69ce742..23ada41351ad0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2269,7 +2269,27 @@ static int navi10_baco_enter(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+ 
+-	if (adev->in_runpm)
++	/*
++	 * This aims at the case below:
++	 *   amdgpu driver loaded -> runpm suspend kicked -> sound driver loaded
++	 *
++	 * For NAVI10 and later ASICs, we rely on PMFW to handle the runpm. To
++	 * make that possible, PMFW needs to acknowledge the dstate transition
++	 * process for both the gfx (function 0) and audio (function 1)
++	 * functions of the ASIC.
++	 *
++	 * The PCI device's initial runpm status is RUNPM_SUSPENDED, and so is
++	 * that of the device representing the audio function of the ASIC. That
++	 * means even if the sound driver (snd_hda_intel) has not been loaded
++	 * yet, a runpm suspend may still be kicked off on the ASIC. Without
++	 * the dstate transition notification from the audio function, however,
++	 * PMFW cannot handle the BACO entry/exit correctly, which causes a
++	 * driver hang on runpm resume.
++	 *
++	 * To address this, we revert to the legacy message way (the driver
++	 * masters the timing for BACO entry/exit) when the sound driver is missing.
++	 */
++	if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev))
+ 		return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_BACO);
+ 	else
+ 		return smu_v11_0_baco_enter(smu);
+@@ -2279,7 +2299,7 @@ static int navi10_baco_exit(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+ 
+-	if (adev->in_runpm) {
++	if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) {
+ 		/* Wait for PMFW handling for the Dstate change */
+ 		msleep(10);
+ 		return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index d92dd2c7448e3..9b170bd12c1b6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -2133,7 +2133,7 @@ static int sienna_cichlid_baco_enter(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+ 
+-	if (adev->in_runpm)
++	if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev))
+ 		return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_BACO);
+ 	else
+ 		return smu_v11_0_baco_enter(smu);
+@@ -2143,7 +2143,7 @@ static int sienna_cichlid_baco_exit(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+ 
+-	if (adev->in_runpm) {
++	if (adev->in_runpm && smu_cmn_is_audio_func_enabled(adev)) {
+ 		/* Wait for PMFW handling for the Dstate change */
+ 		msleep(10);
+ 		return smu_v11_0_baco_set_armd3_sequence(smu, BACO_SEQ_ULPS);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 415be74df28c7..54881cce1b06c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -1053,3 +1053,24 @@ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ 
+ 	return ret;
+ }
++
++bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
++{
++	struct pci_dev *p = NULL;
++	bool snd_driver_loaded;
++
++	/*
++	 * If the ASIC comes with no audio function, we always assume
++	 * it is "enabled".
++	 */
++	p = pci_get_domain_bus_and_slot(pci_domain_nr(adev->pdev->bus),
++			adev->pdev->bus->number, 1);
++	if (!p)
++		return true;
++
++	snd_driver_loaded = pci_is_enabled(p);
++
++	pci_dev_put(p);
++
++	return snd_driver_loaded;
++}
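For context, the helper above assumes the ASIC's audio controller sits at PCI function 1 in the same domain/bus/slot as the graphics function. A minimal hedged sketch of the same lookup (gpu_audio_fn_present() is a hypothetical name; the patch above passes devfn 1 directly, while PCI_DEVFN(slot, 1) is the general form):

#include <linux/pci.h>

static bool gpu_audio_fn_present(struct pci_dev *gfx)
{
	struct pci_dev *audio;

	/* Function 1 in the same slot as the gfx function (function 0). */
	audio = pci_get_domain_bus_and_slot(pci_domain_nr(gfx->bus),
					    gfx->bus->number,
					    PCI_DEVFN(PCI_SLOT(gfx->devfn), 1));
	if (!audio)
		return false;

	pci_dev_put(audio);	/* drop the reference taken by the lookup */
	return true;
}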
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 16993daa2ae04..b1d41360a3897 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -110,5 +110,7 @@ void smu_cmn_init_soft_gpu_metrics(void *table, uint8_t frev, uint8_t crev);
+ int smu_cmn_set_mp1_state(struct smu_context *smu,
+ 			  enum pp_mp1_state mp1_state);
+ 
++bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev);
++
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+index 76d38561c9103..cf741c5c82d25 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+@@ -397,8 +397,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
+ 		if (switch_mmu_context) {
+ 			struct etnaviv_iommu_context *old_context = gpu->mmu_context;
+ 
+-			etnaviv_iommu_context_get(mmu_context);
+-			gpu->mmu_context = mmu_context;
++			gpu->mmu_context = etnaviv_iommu_context_get(mmu_context);
+ 			etnaviv_iommu_context_put(old_context);
+ 		}
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index b8fa6ed3dd738..fb7a33b88fc0b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -303,8 +303,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get(
+ 		list_del(&mapping->obj_node);
+ 	}
+ 
+-	etnaviv_iommu_context_get(mmu_context);
+-	mapping->context = mmu_context;
++	mapping->context = etnaviv_iommu_context_get(mmu_context);
+ 	mapping->use = 1;
+ 
+ 	ret = etnaviv_iommu_map_gem(mmu_context, etnaviv_obj,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index 4dd7d9d541c09..486259e154aff 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -532,8 +532,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 		goto err_submit_objects;
+ 
+ 	submit->ctx = file->driver_priv;
+-	etnaviv_iommu_context_get(submit->ctx->mmu);
+-	submit->mmu_context = submit->ctx->mmu;
++	submit->mmu_context = etnaviv_iommu_context_get(submit->ctx->mmu);
+ 	submit->exec_state = args->exec_state;
+ 	submit->flags = args->flags;
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 4102bcea33413..1fa98ce870f78 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -569,6 +569,12 @@ static int etnaviv_hw_reset(struct etnaviv_gpu *gpu)
+ 	/* We rely on the GPU running, so program the clock */
+ 	etnaviv_gpu_update_clock(gpu);
+ 
++	gpu->fe_running = false;
++	gpu->exec_state = -1;
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = NULL;
++
+ 	return 0;
+ }
+ 
+@@ -631,19 +637,23 @@ void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch)
+ 			  VIVS_MMUv2_SEC_COMMAND_CONTROL_ENABLE |
+ 			  VIVS_MMUv2_SEC_COMMAND_CONTROL_PREFETCH(prefetch));
+ 	}
++
++	gpu->fe_running = true;
+ }
+ 
+-static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu)
++static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu,
++					  struct etnaviv_iommu_context *context)
+ {
+-	u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer,
+-				&gpu->mmu_context->cmdbuf_mapping);
+ 	u16 prefetch;
++	u32 address;
+ 
+ 	/* setup the MMU */
+-	etnaviv_iommu_restore(gpu, gpu->mmu_context);
++	etnaviv_iommu_restore(gpu, context);
+ 
+ 	/* Start command processor */
+ 	prefetch = etnaviv_buffer_init(gpu);
++	address = etnaviv_cmdbuf_get_va(&gpu->buffer,
++					&gpu->mmu_context->cmdbuf_mapping);
+ 
+ 	etnaviv_gpu_start_fe(gpu, address, prefetch);
+ }
+@@ -826,7 +836,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 	/* Now program the hardware */
+ 	mutex_lock(&gpu->lock);
+ 	etnaviv_gpu_hw_init(gpu);
+-	gpu->exec_state = -1;
+ 	mutex_unlock(&gpu->lock);
+ 
+ 	pm_runtime_mark_last_busy(gpu->dev);
+@@ -1051,8 +1060,6 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu)
+ 	spin_unlock(&gpu->event_spinlock);
+ 
+ 	etnaviv_gpu_hw_init(gpu);
+-	gpu->exec_state = -1;
+-	gpu->mmu_context = NULL;
+ 
+ 	mutex_unlock(&gpu->lock);
+ 	pm_runtime_mark_last_busy(gpu->dev);
+@@ -1364,14 +1371,12 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit)
+ 		goto out_unlock;
+ 	}
+ 
+-	if (!gpu->mmu_context) {
+-		etnaviv_iommu_context_get(submit->mmu_context);
+-		gpu->mmu_context = submit->mmu_context;
+-		etnaviv_gpu_start_fe_idleloop(gpu);
+-	} else {
+-		etnaviv_iommu_context_get(gpu->mmu_context);
+-		submit->prev_mmu_context = gpu->mmu_context;
+-	}
++	if (!gpu->fe_running)
++		etnaviv_gpu_start_fe_idleloop(gpu, submit->mmu_context);
++
++	if (submit->prev_mmu_context)
++		etnaviv_iommu_context_put(submit->prev_mmu_context);
++	submit->prev_mmu_context = etnaviv_iommu_context_get(gpu->mmu_context);
+ 
+ 	if (submit->nr_pmrs) {
+ 		gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre;
+@@ -1573,7 +1578,7 @@ int etnaviv_gpu_wait_idle(struct etnaviv_gpu *gpu, unsigned int timeout_ms)
+ 
+ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu)
+ {
+-	if (gpu->initialized && gpu->mmu_context) {
++	if (gpu->initialized && gpu->fe_running) {
+ 		/* Replace the last WAIT with END */
+ 		mutex_lock(&gpu->lock);
+ 		etnaviv_buffer_end(gpu);
+@@ -1586,8 +1591,7 @@ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu)
+ 		 */
+ 		etnaviv_gpu_wait_idle(gpu, 100);
+ 
+-		etnaviv_iommu_context_put(gpu->mmu_context);
+-		gpu->mmu_context = NULL;
++		gpu->fe_running = false;
+ 	}
+ 
+ 	gpu->exec_state = -1;
+@@ -1735,6 +1739,9 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
+ 	etnaviv_gpu_hw_suspend(gpu);
+ #endif
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++
+ 	if (gpu->initialized) {
+ 		etnaviv_cmdbuf_free(&gpu->buffer);
+ 		etnaviv_iommu_global_fini(gpu);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+index 8ea48697d1321..1c75c8ed5bcea 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
+@@ -101,6 +101,7 @@ struct etnaviv_gpu {
+ 	struct workqueue_struct *wq;
+ 	struct drm_gpu_scheduler sched;
+ 	bool initialized;
++	bool fe_running;
+ 
+ 	/* 'ring'-buffer: */
+ 	struct etnaviv_cmdbuf buffer;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
+index 1a7c89a67bea3..afe5dd6a9925b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c
+@@ -92,6 +92,10 @@ static void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu,
+ 	struct etnaviv_iommuv1_context *v1_context = to_v1_context(context);
+ 	u32 pgtable;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	/* set base addresses */
+ 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, context->global->memory_base);
+ 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, context->global->memory_base);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
+index f8bf488e9d717..d664ae29ae209 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c
+@@ -172,6 +172,10 @@ static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu,
+ 	if (gpu_read(gpu, VIVS_MMUv2_CONTROL) & VIVS_MMUv2_CONTROL_ENABLE)
+ 		return;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	prefetch = etnaviv_buffer_config_mmuv2(gpu,
+ 				(u32)v2_context->mtlb_dma,
+ 				(u32)context->global->bad_page_dma);
+@@ -192,6 +196,10 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu,
+ 	if (gpu_read(gpu, VIVS_MMUv2_SEC_CONTROL) & VIVS_MMUv2_SEC_CONTROL_ENABLE)
+ 		return;
+ 
++	if (gpu->mmu_context)
++		etnaviv_iommu_context_put(gpu->mmu_context);
++	gpu->mmu_context = etnaviv_iommu_context_get(context);
++
+ 	gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_LOW,
+ 		  lower_32_bits(context->global->v2.pta_dma));
+ 	gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_HIGH,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index dab1b58006d83..9fb1a2aadbcb0 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -199,6 +199,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context,
+ 		 */
+ 		list_for_each_entry_safe(m, n, &list, scan_node) {
+ 			etnaviv_iommu_remove_mapping(context, m);
++			etnaviv_iommu_context_put(m->context);
+ 			m->context = NULL;
+ 			list_del_init(&m->mmu_node);
+ 			list_del_init(&m->scan_node);
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
+index d1d6902fd13be..e4a0b7d09c2ea 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h
+@@ -105,9 +105,11 @@ void etnaviv_iommu_dump(struct etnaviv_iommu_context *ctx, void *buf);
+ struct etnaviv_iommu_context *
+ etnaviv_iommu_context_init(struct etnaviv_iommu_global *global,
+ 			   struct etnaviv_cmdbuf_suballoc *suballoc);
+-static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
++static inline struct etnaviv_iommu_context *
++etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx)
+ {
+ 	kref_get(&ctx->refcount);
++	return ctx;
+ }
+ void etnaviv_iommu_context_put(struct etnaviv_iommu_context *ctx);
+ void etnaviv_iommu_restore(struct etnaviv_gpu *gpu,
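The header change above converts etnaviv_iommu_context_get() to the common kref "get and return" idiom, so a caller can take a reference and assign it in a single expression. A generic hedged sketch of the pattern (struct ctx and ctx_get() are hypothetical names):

#include <linux/kref.h>

struct ctx {
	struct kref refcount;
};

/* Returning the object lets callers write "dst = ctx_get(src);" in one
 * statement, so taking the reference is harder to forget.
 */
static inline struct ctx *ctx_get(struct ctx *c)
{
	kref_get(&c->refcount);
	return c;
}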
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 862c1df69cc2a..d511e578ba79d 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2453,11 +2453,14 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
+ 	 */
+ 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_DPCD_REV,
+ 			     intel_dp->edp_dpcd, sizeof(intel_dp->edp_dpcd)) ==
+-			     sizeof(intel_dp->edp_dpcd))
++			     sizeof(intel_dp->edp_dpcd)) {
+ 		drm_dbg_kms(&dev_priv->drm, "eDP DPCD: %*ph\n",
+ 			    (int)sizeof(intel_dp->edp_dpcd),
+ 			    intel_dp->edp_dpcd);
+ 
++		intel_dp->use_max_params = intel_dp->edp_dpcd[0] < DP_EDP_14;
++	}
++
+ 	/*
+ 	 * This has to be called after intel_dp->edp_dpcd is filled, PSR checks
+ 	 * for SET_POWER_CAPABLE bit in intel_dp->edp_dpcd[1]
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index 053a3c2f72677..508a514c5e37d 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -848,7 +848,7 @@ intel_dp_link_train_all_phys(struct intel_dp *intel_dp,
+ 	}
+ 
+ 	if (ret)
+-		intel_dp_link_train_phy(intel_dp, crtc_state, DP_PHY_DPRX);
++		ret = intel_dp_link_train_phy(intel_dp, crtc_state, DP_PHY_DPRX);
+ 
+ 	if (intel_dp->set_idle_link_train)
+ 		intel_dp->set_idle_link_train(intel_dp, crtc_state);
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 0473583dcdac2..482fb0ae6cb5d 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -119,7 +119,7 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
+ #endif
+ 
+ 	if (pci_find_capability(pdev, PCI_CAP_ID_AGP))
+-		rdev->agp = radeon_agp_head_init(rdev->ddev);
++		rdev->agp = radeon_agp_head_init(dev);
+ 	if (rdev->agp) {
+ 		rdev->agp->agp_mtrr = arch_phys_wc_add(
+ 			rdev->agp->agp_info.aper_base,
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index 8ab3247dbc4aa..13c6b857158fc 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -1123,7 +1123,7 @@ static int cdn_dp_suspend(struct device *dev)
+ 	return ret;
+ }
+ 
+-static int cdn_dp_resume(struct device *dev)
++static __maybe_unused int cdn_dp_resume(struct device *dev)
+ {
+ 	struct cdn_dp_device *dp = dev_get_drvdata(dev);
+ 
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 2aee356840a2b..314015d9e912d 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -245,6 +245,7 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
+ 	mutex_unlock(&ring_info->ring_buffer_mutex);
+ 
+ 	kfree(ring_info->pkt_buffer);
++	ring_info->pkt_buffer = NULL;
+ 	ring_info->pkt_buffer_size = 0;
+ }
+ 
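Clearing the pointer after kfree(), as done above, makes the cleanup path idempotent: kfree(NULL) is a no-op, so a repeated call into hv_ringbuffer_cleanup() cannot double-free. A generic hedged sketch of the same pattern (names hypothetical):

#include <linux/slab.h>

/* Hypothetical structure mirroring the pkt_buffer fields above. */
struct ring_state {
	void *pkt_buffer;
	u32 pkt_buffer_size;
};

static void ring_state_cleanup(struct ring_state *rs)
{
	kfree(rs->pkt_buffer);
	rs->pkt_buffer = NULL;	/* kfree(NULL) is a no-op on a second call */
	rs->pkt_buffer_size = 0;
}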
+diff --git a/drivers/mfd/ab8500-core.c b/drivers/mfd/ab8500-core.c
+index 30489670ea528..cca0aac261486 100644
+--- a/drivers/mfd/ab8500-core.c
++++ b/drivers/mfd/ab8500-core.c
+@@ -485,7 +485,7 @@ static int ab8500_handle_hierarchical_line(struct ab8500 *ab8500,
+ 		if (line == AB8540_INT_GPIO43F || line == AB8540_INT_GPIO44F)
+ 			line += 1;
+ 
+-		handle_nested_irq(irq_create_mapping(ab8500->domain, line));
++		handle_nested_irq(irq_find_mapping(ab8500->domain, line));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
+index 4145a38b38904..d0ac019850d17 100644
+--- a/drivers/mfd/axp20x.c
++++ b/drivers/mfd/axp20x.c
+@@ -125,12 +125,13 @@ static const struct regmap_range axp288_writeable_ranges[] = {
+ 
+ static const struct regmap_range axp288_volatile_ranges[] = {
+ 	regmap_reg_range(AXP20X_PWR_INPUT_STATUS, AXP288_POWER_REASON),
++	regmap_reg_range(AXP22X_PWR_OUT_CTRL1, AXP22X_ALDO3_V_OUT),
+ 	regmap_reg_range(AXP288_BC_GLOBAL, AXP288_BC_GLOBAL),
+ 	regmap_reg_range(AXP288_BC_DET_STAT, AXP20X_VBUS_IPSOUT_MGMT),
+ 	regmap_reg_range(AXP20X_CHRG_BAK_CTRL, AXP20X_CHRG_BAK_CTRL),
+ 	regmap_reg_range(AXP20X_IRQ1_EN, AXP20X_IPSOUT_V_HIGH_L),
+ 	regmap_reg_range(AXP20X_TIMER_CTRL, AXP20X_TIMER_CTRL),
+-	regmap_reg_range(AXP22X_GPIO_STATE, AXP22X_GPIO_STATE),
++	regmap_reg_range(AXP20X_GPIO1_CTRL, AXP22X_GPIO_STATE),
+ 	regmap_reg_range(AXP288_RT_BATT_V_H, AXP288_RT_BATT_V_L),
+ 	regmap_reg_range(AXP20X_FG_RES, AXP288_FG_CC_CAP_REG),
+ };
+diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c
+index 3bde7fda755f1..dea4e4e8bed54 100644
+--- a/drivers/mfd/db8500-prcmu.c
++++ b/drivers/mfd/db8500-prcmu.c
+@@ -1622,22 +1622,20 @@ static long round_clock_rate(u8 clock, unsigned long rate)
+ }
+ 
+ static const unsigned long db8500_armss_freqs[] = {
+-	200000000,
+-	400000000,
+-	800000000,
++	199680000,
++	399360000,
++	798720000,
+ 	998400000
+ };
+ 
+ /* The DB8520 has slightly higher ARMSS max frequency */
+ static const unsigned long db8520_armss_freqs[] = {
+-	200000000,
+-	400000000,
+-	800000000,
++	199680000,
++	399360000,
++	798720000,
+ 	1152000000
+ };
+ 
+-
+-
+ static long round_armss_rate(unsigned long rate)
+ {
+ 	unsigned long freq = 0;
+diff --git a/drivers/mfd/lpc_sch.c b/drivers/mfd/lpc_sch.c
+index 428a526cbe863..9ab9adce06fdd 100644
+--- a/drivers/mfd/lpc_sch.c
++++ b/drivers/mfd/lpc_sch.c
+@@ -22,7 +22,7 @@
+ #define SMBASE		0x40
+ #define SMBUS_IO_SIZE	64
+ 
+-#define GPIOBASE	0x44
++#define GPIO_BASE	0x44
+ #define GPIO_IO_SIZE	64
+ #define GPIO_IO_SIZE_CENTERTON	128
+ 
+@@ -145,7 +145,7 @@ static int lpc_sch_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	if (ret == 0)
+ 		cells++;
+ 
+-	ret = lpc_sch_populate_cell(dev, GPIOBASE, "sch_gpio",
++	ret = lpc_sch_populate_cell(dev, GPIO_BASE, "sch_gpio",
+ 				    info->io_size_gpio,
+ 				    id->device, &lpc_sch_cells[cells]);
+ 	if (ret < 0)
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index 1dd39483e7c14..58d09c615e673 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -1095,7 +1095,7 @@ static irqreturn_t stmpe_irq(int irq, void *data)
+ 
+ 	if (variant->id_val == STMPE801_ID ||
+ 	    variant->id_val == STMPE1600_ID) {
+-		int base = irq_create_mapping(stmpe->domain, 0);
++		int base = irq_find_mapping(stmpe->domain, 0);
+ 
+ 		handle_nested_irq(base);
+ 		return IRQ_HANDLED;
+@@ -1123,7 +1123,7 @@ static irqreturn_t stmpe_irq(int irq, void *data)
+ 		while (status) {
+ 			int bit = __ffs(status);
+ 			int line = bank * 8 + bit;
+-			int nestedirq = irq_create_mapping(stmpe->domain, line);
++			int nestedirq = irq_find_mapping(stmpe->domain, line);
+ 
+ 			handle_nested_irq(nestedirq);
+ 			status &= ~(1 << bit);
+diff --git a/drivers/mfd/tc3589x.c b/drivers/mfd/tc3589x.c
+index 7614f8fe0e91c..13583cdb93b6f 100644
+--- a/drivers/mfd/tc3589x.c
++++ b/drivers/mfd/tc3589x.c
+@@ -187,7 +187,7 @@ again:
+ 
+ 	while (status) {
+ 		int bit = __ffs(status);
+-		int virq = irq_create_mapping(tc3589x->domain, bit);
++		int virq = irq_find_mapping(tc3589x->domain, bit);
+ 
+ 		handle_nested_irq(virq);
+ 		status &= ~(1 << bit);
+diff --git a/drivers/mfd/tqmx86.c b/drivers/mfd/tqmx86.c
+index ddddf08b6a4cc..732013f40e4e8 100644
+--- a/drivers/mfd/tqmx86.c
++++ b/drivers/mfd/tqmx86.c
+@@ -209,6 +209,8 @@ static int tqmx86_probe(struct platform_device *pdev)
+ 
+ 		/* Assumes the IRQ resource is first. */
+ 		tqmx_gpio_resources[0].start = gpio_irq;
++	} else {
++		tqmx_gpio_resources[0].flags = 0;
+ 	}
+ 
+ 	ocores_platfom_data.clock_khz = tqmx86_board_id_to_clk_rate(board_id);
+diff --git a/drivers/mfd/wm8994-irq.c b/drivers/mfd/wm8994-irq.c
+index 6c3a619e26286..651a028bc519a 100644
+--- a/drivers/mfd/wm8994-irq.c
++++ b/drivers/mfd/wm8994-irq.c
+@@ -154,7 +154,7 @@ static irqreturn_t wm8994_edge_irq(int irq, void *data)
+ 	struct wm8994 *wm8994 = data;
+ 
+ 	while (gpio_get_value_cansleep(wm8994->pdata.irq_gpio))
+-		handle_nested_irq(irq_create_mapping(wm8994->edge_irq, 0));
++		handle_nested_irq(irq_find_mapping(wm8994->edge_irq, 0));
+ 
+ 	return IRQ_HANDLED;
+ }
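The mfd changes above all follow the same rule: irq_create_mapping() may allocate and is therefore unsafe in an interrupt handler, while irq_find_mapping() only looks up a mapping created earlier, typically at probe time. A hedged sketch of that split, with hypothetical names:

#include <linux/interrupt.h>
#include <linux/irqdomain.h>

/* Probe context: create every mapping up front (may allocate, can sleep). */
static void chip_create_irq_mappings(struct irq_domain *domain,
				     unsigned int nr_irqs)
{
	unsigned int hwirq;

	for (hwirq = 0; hwirq < nr_irqs; hwirq++)
		irq_create_mapping(domain, hwirq);
}

/* Handler context: only look up mappings created earlier. */
static irqreturn_t chip_irq_handler(int irq, void *data)
{
	struct irq_domain *domain = data;

	handle_nested_irq(irq_find_mapping(domain, 0));
	return IRQ_HANDLED;
}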
+diff --git a/drivers/mtd/mtdconcat.c b/drivers/mtd/mtdconcat.c
+index 6e4d0017c0bd4..f685a581df481 100644
+--- a/drivers/mtd/mtdconcat.c
++++ b/drivers/mtd/mtdconcat.c
+@@ -641,6 +641,7 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	int i;
+ 	size_t size;
+ 	struct mtd_concat *concat;
++	struct mtd_info *subdev_master = NULL;
+ 	uint32_t max_erasesize, curr_erasesize;
+ 	int num_erase_region;
+ 	int max_writebufsize = 0;
+@@ -679,18 +680,24 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	concat->mtd.subpage_sft = subdev[0]->subpage_sft;
+ 	concat->mtd.oobsize = subdev[0]->oobsize;
+ 	concat->mtd.oobavail = subdev[0]->oobavail;
+-	if (subdev[0]->_writev)
++
++	subdev_master = mtd_get_master(subdev[0]);
++	if (subdev_master->_writev)
+ 		concat->mtd._writev = concat_writev;
+-	if (subdev[0]->_read_oob)
++	if (subdev_master->_read_oob)
+ 		concat->mtd._read_oob = concat_read_oob;
+-	if (subdev[0]->_write_oob)
++	if (subdev_master->_write_oob)
+ 		concat->mtd._write_oob = concat_write_oob;
+-	if (subdev[0]->_block_isbad)
++	if (subdev_master->_block_isbad)
+ 		concat->mtd._block_isbad = concat_block_isbad;
+-	if (subdev[0]->_block_markbad)
++	if (subdev_master->_block_markbad)
+ 		concat->mtd._block_markbad = concat_block_markbad;
+-	if (subdev[0]->_panic_write)
++	if (subdev_master->_panic_write)
+ 		concat->mtd._panic_write = concat_panic_write;
++	if (subdev_master->_read)
++		concat->mtd._read = concat_read;
++	if (subdev_master->_write)
++		concat->mtd._write = concat_write;
+ 
+ 	concat->mtd.ecc_stats.badblocks = subdev[0]->ecc_stats.badblocks;
+ 
+@@ -721,14 +728,22 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 				    subdev[i]->flags & MTD_WRITEABLE;
+ 		}
+ 
++		subdev_master = mtd_get_master(subdev[i]);
+ 		concat->mtd.size += subdev[i]->size;
+ 		concat->mtd.ecc_stats.badblocks +=
+ 			subdev[i]->ecc_stats.badblocks;
+ 		if (concat->mtd.writesize   !=  subdev[i]->writesize ||
+ 		    concat->mtd.subpage_sft != subdev[i]->subpage_sft ||
+ 		    concat->mtd.oobsize    !=  subdev[i]->oobsize ||
+-		    !concat->mtd._read_oob  != !subdev[i]->_read_oob ||
+-		    !concat->mtd._write_oob != !subdev[i]->_write_oob) {
++		    !concat->mtd._read_oob  != !subdev_master->_read_oob ||
++		    !concat->mtd._write_oob != !subdev_master->_write_oob) {
++			/*
++			 * Check against subdev[i] for data members, because
++			 * a subdev's attributes may differ from those of its
++			 * master mtd device. Check against the subdev's
++			 * master mtd device for callbacks, because the master
++			 * mtd device decides which callbacks a subdev has.
++			 */
+ 			kfree(concat);
+ 			printk("Incompatible OOB or ECC data on \"%s\"\n",
+ 			       subdev[i]->name);
+@@ -744,8 +759,6 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[],	/* subdevices to c
+ 	concat->mtd.name = name;
+ 
+ 	concat->mtd._erase = concat_erase;
+-	concat->mtd._read = concat_read;
+-	concat->mtd._write = concat_write;
+ 	concat->mtd._sync = concat_sync;
+ 	concat->mtd._lock = concat_lock;
+ 	concat->mtd._unlock = concat_unlock;
+diff --git a/drivers/mtd/nand/raw/cafe_nand.c b/drivers/mtd/nand/raw/cafe_nand.c
+index d0e8ffd55c224..9dbf031716a61 100644
+--- a/drivers/mtd/nand/raw/cafe_nand.c
++++ b/drivers/mtd/nand/raw/cafe_nand.c
+@@ -751,7 +751,7 @@ static int cafe_nand_probe(struct pci_dev *pdev,
+ 			  "CAFE NAND", mtd);
+ 	if (err) {
+ 		dev_warn(&pdev->dev, "Could not register IRQ %d\n", pdev->irq);
+-		goto out_ior;
++		goto out_free_rs;
+ 	}
+ 
+ 	/* Disable master reset, enable NAND clock */
+@@ -795,6 +795,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
+ 	/* Disable NAND IRQ in global IRQ mask register */
+ 	cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
+ 	free_irq(pdev->irq, mtd);
++ out_free_rs:
++	free_rs(cafe->rs);
+  out_ior:
+ 	pci_iounmap(pdev, cafe->mmio);
+  out_free_mtd:
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index bd1417a66cbf2..604f541126654 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1144,7 +1144,7 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
+ 	u8 reg, val, off;
+ 
+ 	/* Override the port settings */
+-	if (port == dev->cpu_port) {
++	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
+ 	} else {
+@@ -1168,7 +1168,7 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 	u8 reg, val, off;
+ 
+ 	/* Override the port settings */
+-	if (port == dev->cpu_port) {
++	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
+ 	} else {
+@@ -1236,7 +1236,7 @@ static void b53_adjust_link(struct dsa_switch *ds, int port,
+ 	b53_force_link(dev, port, phydev->link);
+ 
+ 	if (is531x5(dev) && phy_interface_is_rgmii(phydev)) {
+-		if (port == 8)
++		if (port == dev->imp_port)
+ 			off = B53_RGMII_CTRL_IMP;
+ 		else
+ 			off = B53_RGMII_CTRL_P(port);
+@@ -2280,6 +2280,7 @@ struct b53_chip_data {
+ 	const char *dev_name;
+ 	u16 vlans;
+ 	u16 enabled_ports;
++	u8 imp_port;
+ 	u8 cpu_port;
+ 	u8 vta_regs[3];
+ 	u8 arl_bins;
+@@ -2304,6 +2305,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 2,
+ 		.arl_buckets = 1024,
++		.imp_port = 5,
+ 		.cpu_port = B53_CPU_PORT_25,
+ 		.duplex_reg = B53_DUPLEX_STAT_FE,
+ 	},
+@@ -2314,6 +2316,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 2,
+ 		.arl_buckets = 1024,
++		.imp_port = 5,
+ 		.cpu_port = B53_CPU_PORT_25,
+ 		.duplex_reg = B53_DUPLEX_STAT_FE,
+ 	},
+@@ -2324,6 +2327,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2337,6 +2341,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2350,6 +2355,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_9798,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2363,6 +2369,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x7f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_9798,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2377,6 +2384,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
+ 		.vta_regs = B53_VTA_REGS,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+ 		.jumbo_pm_reg = B53_JUMBO_PORT_MASK,
+@@ -2389,6 +2397,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0xff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2402,6 +2411,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2415,6 +2425,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0, /* pdata must provide them */
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS_63XX,
+ 		.duplex_reg = B53_DUPLEX_STAT_63XX,
+@@ -2428,6 +2439,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2441,6 +2453,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1bf,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2454,6 +2467,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1bf,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2467,6 +2481,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2480,6 +2495,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1f,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT_25, /* TODO: auto detect */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2493,6 +2509,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2506,6 +2523,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x103,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2520,6 +2538,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1bf,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 256,
++		.imp_port = 8,
+ 		.cpu_port = 8, /* TODO: ports 4, 5, 8 */
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2533,6 +2552,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 1024,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2546,6 +2566,7 @@ static const struct b53_chip_data b53_switch_chips[] = {
+ 		.enabled_ports = 0x1ff,
+ 		.arl_bins = 4,
+ 		.arl_buckets = 256,
++		.imp_port = 8,
+ 		.cpu_port = B53_CPU_PORT,
+ 		.vta_regs = B53_VTA_REGS,
+ 		.duplex_reg = B53_DUPLEX_STAT_GE,
+@@ -2571,6 +2592,7 @@ static int b53_switch_init(struct b53_device *dev)
+ 			dev->vta_regs[1] = chip->vta_regs[1];
+ 			dev->vta_regs[2] = chip->vta_regs[2];
+ 			dev->jumbo_pm_reg = chip->jumbo_pm_reg;
++			dev->imp_port = chip->imp_port;
+ 			dev->cpu_port = chip->cpu_port;
+ 			dev->num_vlans = chip->vlans;
+ 			dev->num_arl_bins = chip->arl_bins;
+@@ -2612,9 +2634,10 @@ static int b53_switch_init(struct b53_device *dev)
+ 			dev->cpu_port = 5;
+ 	}
+ 
+-	/* cpu port is always last */
+-	dev->num_ports = dev->cpu_port + 1;
+ 	dev->enabled_ports |= BIT(dev->cpu_port);
++	dev->num_ports = fls(dev->enabled_ports);
++
++	dev->ds->num_ports = min_t(unsigned int, dev->num_ports, DSA_MAX_PORTS);
+ 
+ 	/* Include non standard CPU port built-in PHYs to be probed */
+ 	if (is539x(dev) || is531x5(dev)) {
+@@ -2660,7 +2683,6 @@ struct b53_device *b53_switch_alloc(struct device *base,
+ 		return NULL;
+ 
+ 	ds->dev = base;
+-	ds->num_ports = DSA_MAX_PORTS;
+ 
+ 	dev = devm_kzalloc(base, sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
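As a worked example of the b53 sizing change above: on a BCM5325 (enabled_ports 0x1f, CPU port 5) the mask becomes 0x3f, so fls() yields 6 ports, matching the old cpu_port + 1 formula; unlike the old formula, however, the new calculation also stays correct for chips whose CPU port is not the highest-numbered one.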
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 9bf8319342b0b..5d068acf7cf81 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -123,6 +123,7 @@ struct b53_device {
+ 
+ 	/* used ports mask */
+ 	u16 enabled_ports;
++	unsigned int imp_port;
+ 	unsigned int cpu_port;
+ 
+ 	/* connect specific data */
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 6ce9ec1283e05..b6c4b3adb1715 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -68,7 +68,7 @@ static unsigned int bcm_sf2_num_active_ports(struct dsa_switch *ds)
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ 	unsigned int port, count = 0;
+ 
+-	for (port = 0; port < ARRAY_SIZE(priv->port_sts); port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		if (dsa_is_cpu_port(ds, port))
+ 			continue;
+ 		if (priv->port_sts[port].enabled)
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 64d6dfa831220..267324889dd64 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1885,6 +1885,12 @@ static int gswip_gphy_fw_load(struct gswip_priv *priv, struct gswip_gphy_fw *gph
+ 
+ 	reset_control_assert(gphy_fw->reset);
+ 
++	/* The vendor BSP uses a 200ms delay after asserting the reset line.
++	 * Without this delay, some users observe that the PHY does not come
++	 * up on the MDIO bus.
++	 */
++	msleep(200);
++
+ 	ret = request_firmware(&fw, gphy_fw->fw_name, dev);
+ 	if (ret) {
+ 		dev_err(dev, "failed to load firmware: %s, error: %i\n",
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 1f63f50f73f17..bda5a9bf4f529 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -643,10 +643,8 @@ qca8k_mdio_busy_wait(struct mii_bus *bus, u32 reg, u32 mask)
+ }
+ 
+ static int
+-qca8k_mdio_write(struct mii_bus *salve_bus, int phy, int regnum, u16 data)
++qca8k_mdio_write(struct mii_bus *bus, int phy, int regnum, u16 data)
+ {
+-	struct qca8k_priv *priv = salve_bus->priv;
+-	struct mii_bus *bus = priv->bus;
+ 	u16 r1, r2, page;
+ 	u32 val;
+ 	int ret;
+@@ -682,10 +680,8 @@ exit:
+ }
+ 
+ static int
+-qca8k_mdio_read(struct mii_bus *salve_bus, int phy, int regnum)
++qca8k_mdio_read(struct mii_bus *bus, int phy, int regnum)
+ {
+-	struct qca8k_priv *priv = salve_bus->priv;
+-	struct mii_bus *bus = priv->bus;
+ 	u16 r1, r2, page;
+ 	u32 val;
+ 	int ret;
+@@ -726,6 +722,24 @@ exit:
+ 	return ret;
+ }
+ 
++static int
++qca8k_internal_mdio_write(struct mii_bus *slave_bus, int phy, int regnum, u16 data)
++{
++	struct qca8k_priv *priv = slave_bus->priv;
++	struct mii_bus *bus = priv->bus;
++
++	return qca8k_mdio_write(bus, phy, regnum, data);
++}
++
++static int
++qca8k_internal_mdio_read(struct mii_bus *slave_bus, int phy, int regnum)
++{
++	struct qca8k_priv *priv = slave_bus->priv;
++	struct mii_bus *bus = priv->bus;
++
++	return qca8k_mdio_read(bus, phy, regnum);
++}
++
+ static int
+ qca8k_phy_write(struct dsa_switch *ds, int port, int regnum, u16 data)
+ {
+@@ -775,8 +789,8 @@ qca8k_mdio_register(struct qca8k_priv *priv, struct device_node *mdio)
+ 
+ 	bus->priv = (void *)priv;
+ 	bus->name = "qca8k slave mii";
+-	bus->read = qca8k_mdio_read;
+-	bus->write = qca8k_mdio_write;
++	bus->read = qca8k_internal_mdio_read;
++	bus->write = qca8k_internal_mdio_write;
+ 	snprintf(bus->id, MII_BUS_ID_SIZE, "qca8k-%d",
+ 		 ds->index);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+index 27943b0446c28..a207c36246b6a 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+@@ -1224,7 +1224,7 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
+ 
+ 	/* SR-IOV capability was enabled but there are no VFs*/
+ 	if (iov->total == 0) {
+-		err = -EINVAL;
++		err = 0;
+ 		goto failed;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 8a97640cdfe76..fdbf47446a997 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2172,25 +2172,33 @@ static int bnxt_async_event_process(struct bnxt *bp,
+ 		if (!fw_health)
+ 			goto async_event_process_exit;
+ 
+-		fw_health->enabled = EVENT_DATA1_RECOVERY_ENABLED(data1);
+-		fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
+-		if (!fw_health->enabled) {
++		if (!EVENT_DATA1_RECOVERY_ENABLED(data1)) {
++			fw_health->enabled = false;
+ 			netif_info(bp, drv, bp->dev,
+ 				   "Error recovery info: error recovery[0]\n");
+ 			break;
+ 		}
++		fw_health->master = EVENT_DATA1_RECOVERY_MASTER_FUNC(data1);
+ 		fw_health->tmr_multiplier =
+ 			DIV_ROUND_UP(fw_health->polling_dsecs * HZ,
+ 				     bp->current_interval * 10);
+ 		fw_health->tmr_counter = fw_health->tmr_multiplier;
+-		fw_health->last_fw_heartbeat =
+-			bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
++		if (!fw_health->enabled)
++			fw_health->last_fw_heartbeat =
++				bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
+ 		fw_health->last_fw_reset_cnt =
+ 			bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
+ 		netif_info(bp, drv, bp->dev,
+ 			   "Error recovery info: error recovery[1], master[%d], reset count[%u], health status: 0x%x\n",
+ 			   fw_health->master, fw_health->last_fw_reset_cnt,
+ 			   bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG));
++		if (!fw_health->enabled) {
++			/* Make sure tmr_counter is set and visible to
++			 * bnxt_fw_health_check() before setting enabled to true.
++			 */
++			smp_wmb();
++			fw_health->enabled = true;
++		}
+ 		goto async_event_process_exit;
+ 	}
+ 	case ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION:
+@@ -2680,6 +2688,9 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
+ 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
+ 		int j;
+ 
++		if (!txr->tx_buf_ring)
++			continue;
++
+ 		for (j = 0; j < max_idx;) {
+ 			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
+ 			struct sk_buff *skb;
+@@ -2764,6 +2775,9 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+ 	}
+ 
+ skip_rx_tpa_free:
++	if (!rxr->rx_buf_ring)
++		goto skip_rx_buf_free;
++
+ 	for (i = 0; i < max_idx; i++) {
+ 		struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
+ 		dma_addr_t mapping = rx_buf->mapping;
+@@ -2786,6 +2800,11 @@ skip_rx_tpa_free:
+ 			kfree(data);
+ 		}
+ 	}
++
++skip_rx_buf_free:
++	if (!rxr->rx_agg_ring)
++		goto skip_rx_agg_free;
++
+ 	for (i = 0; i < max_agg_idx; i++) {
+ 		struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
+ 		struct page *page = rx_agg_buf->page;
+@@ -2802,6 +2821,8 @@ skip_rx_tpa_free:
+ 
+ 		__free_page(page);
+ 	}
++
++skip_rx_agg_free:
+ 	if (rxr->rx_page) {
+ 		__free_page(rxr->rx_page);
+ 		rxr->rx_page = NULL;
+@@ -11237,6 +11258,8 @@ static void bnxt_fw_health_check(struct bnxt *bp)
+ 	if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
+ 		return;
+ 
++	/* Make sure it is enabled before checking the tmr_counter. */
++	smp_rmb();
+ 	if (fw_health->tmr_counter) {
+ 		fw_health->tmr_counter--;
+ 		return;
+@@ -12169,6 +12192,11 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 			return;
+ 		}
+ 
++		if ((bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) &&
++		    bp->fw_health->enabled) {
++			bp->fw_health->last_fw_reset_cnt =
++				bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
++		}
+ 		bp->fw_reset_state = 0;
+ 		/* Make sure fw_reset_state is 0 before clearing the flag */
+ 		smp_mb__before_atomic();
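The smp_wmb()/smp_rmb() pair added above implements a classic publish/consume ordering: the writer makes tmr_counter visible before setting enabled, and the reader orders its enabled check before reading tmr_counter. A generic hedged sketch of the pairing (payload/ready are hypothetical):

#include <asm/barrier.h>
#include <linux/errno.h>

static int payload;
static bool ready;

static void publish(void)
{
	payload = 42;		/* make the data ready first */
	smp_wmb();		/* order the data before the flag */
	WRITE_ONCE(ready, true);
}

static int consume(void)
{
	if (!READ_ONCE(ready))
		return -EAGAIN;
	smp_rmb();		/* order the flag check before the data read */
	return payload;
}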
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index 64381be935a8c..bb228619ec641 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -449,7 +449,7 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 		return rc;
+ 
+ 	ver_resp = &bp->ver_resp;
+-	sprintf(buf, "%X", ver_resp->chip_rev);
++	sprintf(buf, "%c%d", 'A' + ver_resp->chip_rev, ver_resp->chip_metal);
+ 	rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_FIXED,
+ 			      DEVLINK_INFO_VERSION_GENERIC_ASIC_REV, buf);
+ 	if (rc)
+@@ -471,8 +471,8 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 	if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) {
+ 		u32 ver = nvm_cfg_ver.vu32;
+ 
+-		sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf,
+-			ver & 0xf);
++		sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xff, (ver >> 8) & 0xff,
++			ver & 0xff);
+ 		rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED,
+ 				      DEVLINK_INFO_VERSION_GENERIC_FW_PSID,
+ 				      buf);
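As a worked example of the devlink reporting fixes above (values hypothetical): an NVM cfg version of 0x00011402 was previously truncated to "1.4.2" by the nibble-wide 0xf masks, whereas the byte-wide 0xff masks report it correctly as "1.20.2"; likewise, an ASIC with chip_rev 1 and chip_metal 0 is now shown as "B0" rather than the bare hex "1".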
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 5e4429b14b8ca..2186706cf9130 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -1870,9 +1870,6 @@ bnxt_tc_indr_block_cb_lookup(struct bnxt *bp, struct net_device *netdev)
+ {
+ 	struct bnxt_flower_indr_block_cb_priv *cb_priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv, &bp->tc_indr_block_list, list)
+ 		if (cb_priv->tunnel_netdev == netdev)
+ 			return cb_priv;
+diff --git a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
+index 512da98019c66..2a28a38da036c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
++++ b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c
+@@ -1107,6 +1107,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (!adapter->registered_device_map) {
+ 		pr_err("%s: could not register any net devices\n",
+ 		       pci_name(pdev));
++		err = -EINVAL;
+ 		goto out_release_adapter_res;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c
+index cb5c79c43bc9c..7bb81e08f9532 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c
+@@ -3306,6 +3306,9 @@ void t3_sge_stop(struct adapter *adap)
+ 
+ 	t3_sge_stop_dma(adap);
+ 
++	/* workqueues aren't initialized otherwise */
++	if (!(adap->flags & FULL_INIT_DONE))
++		return;
+ 	for (i = 0; i < SGE_QSETS; ++i) {
+ 		struct sge_qset *qs = &adap->sge.qs[i];
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index cdb5f14fb6bc5..9faa3712ea5b8 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -73,6 +73,7 @@ MODULE_PARM_DESC(tx_sgl, "Minimum number of frags when using dma_map_sg() to opt
+ #define HNS3_OUTER_VLAN_TAG	2
+ 
+ #define HNS3_MIN_TX_LEN		33U
++#define HNS3_MIN_TUN_PKT_LEN	65U
+ 
+ /* hns3_pci_tbl - PCI Device ID Table
+  *
+@@ -1425,8 +1426,11 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
+ 			       l4.tcp->doff);
+ 		break;
+ 	case IPPROTO_UDP:
+-		if (hns3_tunnel_csum_bug(skb))
+-			return skb_checksum_help(skb);
++		if (hns3_tunnel_csum_bug(skb)) {
++			int ret = skb_put_padto(skb, HNS3_MIN_TUN_PKT_LEN);
++
++			return ret ? ret : skb_checksum_help(skb);
++		}
+ 
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
+ 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+index 288788186eccd..e6e617aba2a4c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+@@ -1710,6 +1710,10 @@ hclge_dbg_get_imp_stats_info(struct hclge_dev *hdev, char *buf, int len)
+ 	}
+ 
+ 	bd_num = le32_to_cpu(req->bd_num);
++	if (!bd_num) {
++		dev_err(&hdev->pdev->dev, "imp statistics bd number is 0!\n");
++		return -EINVAL;
++	}
+ 
+ 	desc_src = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
+ 	if (!desc_src)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 03ae122f1c9ac..72d55c028ac4b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1528,9 +1528,10 @@ static void hclge_init_kdump_kernel_config(struct hclge_dev *hdev)
+ static int hclge_configure(struct hclge_dev *hdev)
+ {
+ 	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
++	const struct cpumask *cpumask = cpu_online_mask;
+ 	struct hclge_cfg cfg;
+ 	unsigned int i;
+-	int ret;
++	int node, ret;
+ 
+ 	ret = hclge_get_cfg(hdev, &cfg);
+ 	if (ret)
+@@ -1595,11 +1596,12 @@ static int hclge_configure(struct hclge_dev *hdev)
+ 
+ 	hclge_init_kdump_kernel_config(hdev);
+ 
+-	/* Set the init affinity based on pci func number */
+-	i = cpumask_weight(cpumask_of_node(dev_to_node(&hdev->pdev->dev)));
+-	i = i ? PCI_FUNC(hdev->pdev->devfn) % i : 0;
+-	cpumask_set_cpu(cpumask_local_spread(i, dev_to_node(&hdev->pdev->dev)),
+-			&hdev->affinity_mask);
++	/* Set the affinity based on numa node */
++	node = dev_to_node(&hdev->pdev->dev);
++	if (node != NUMA_NO_NODE)
++		cpumask = cpumask_of_node(node);
++
++	cpumask_copy(&hdev->affinity_mask, cpumask);
+ 
+ 	return ret;
+ }
+@@ -8118,11 +8120,12 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	hclge_clear_arfs_rules(hdev);
+ 	spin_unlock_bh(&hdev->fd_rule_lock);
+ 
+-	/* If it is not PF reset, the firmware will disable the MAC,
++	/* If it is not PF reset or FLR, the firmware will disable the MAC,
+ 	 * so it only need to stop phy here.
+ 	 */
+ 	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) &&
+-	    hdev->reset_type != HNAE3_FUNC_RESET) {
++	    hdev->reset_type != HNAE3_FUNC_RESET &&
++	    hdev->reset_type != HNAE3_FLR_RESET) {
+ 		hclge_mac_stop_phy(hdev);
+ 		hclge_update_link_status(hdev);
+ 		return;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 938654778979a..be3ea7023ed8c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2463,6 +2463,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 
+ 	hclgevf_enable_vector(&hdev->misc_vector, false);
+ 	event_cause = hclgevf_check_evt_cause(hdev, &clearval);
++	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
++		hclgevf_clear_event_cause(hdev, clearval);
+ 
+ 	switch (event_cause) {
+ 	case HCLGEVF_VECTOR0_EVENT_RST:
+@@ -2475,10 +2477,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
+ 		break;
+ 	}
+ 
+-	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) {
+-		hclgevf_clear_event_cause(hdev, clearval);
++	if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
+ 		hclgevf_enable_vector(&hdev->misc_vector, true);
+-	}
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index a775c69e4fd7f..6aa6ff89a7651 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4700,6 +4700,14 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ 		return 0;
+ 	}
+ 
++	if (adapter->failover_pending) {
++		adapter->init_done_rc = -EAGAIN;
++		netdev_dbg(netdev, "Failover pending, ignoring login response\n");
++		complete(&adapter->init_done);
++		/* login response buffer will be released on reset */
++		return 0;
++	}
++
+ 	netdev->mtu = adapter->req_mtu - ETH_HLEN;
+ 
+ 	netdev_dbg(adapter->netdev, "Login Response Buffer:\n");
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index eadcb99583464..3c4f08d20414e 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -695,6 +695,7 @@ static inline void ice_set_rdma_cap(struct ice_pf *pf)
+ {
+ 	if (pf->hw.func_caps.common_cap.rdma && pf->num_rdma_msix) {
+ 		set_bit(ICE_FLAG_RDMA_ENA, pf->flags);
++		set_bit(ICE_FLAG_AUX_ENA, pf->flags);
+ 		ice_plug_aux_dev(pf);
+ 	}
+ }
+@@ -707,5 +708,6 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
+ {
+ 	ice_unplug_aux_dev(pf);
+ 	clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
++	clear_bit(ICE_FLAG_AUX_ENA, pf->flags);
+ }
+ #endif /* _ICE_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
+index 1f2afdf6cd483..adcc9a251595a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_idc.c
++++ b/drivers/net/ethernet/intel/ice/ice_idc.c
+@@ -271,6 +271,12 @@ int ice_plug_aux_dev(struct ice_pf *pf)
+ 	struct auxiliary_device *adev;
+ 	int ret;
+ 
++	/* if this PF doesn't support a technology that requires auxiliary
++	 * devices, then gracefully exit
++	 */
++	if (!ice_is_aux_ena(pf))
++		return 0;
++
+ 	iadev = kzalloc(sizeof(*iadev), GFP_KERNEL);
+ 	if (!iadev)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index f62982c4d933d..78114e625ffdc 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -5962,7 +5962,9 @@ static int igc_probe(struct pci_dev *pdev,
+ 	if (pci_using_dac)
+ 		netdev->features |= NETIF_F_HIGHDMA;
+ 
+-	netdev->vlan_features |= netdev->features;
++	netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
++	netdev->mpls_features |= NETIF_F_HW_CSUM;
++	netdev->hw_enc_features |= netdev->vlan_features;
+ 
+ 	/* MTU range: 68 - 9216 */
+ 	netdev->min_mtu = ETH_MIN_MTU;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 5fe277e354f7a..c10cae78e79f8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -92,7 +92,8 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
+  */
+ int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
+ {
+-	unsigned long timeout = jiffies + usecs_to_jiffies(10000);
++	unsigned long timeout = jiffies + usecs_to_jiffies(20000);
++	bool twice = false;
+ 	void __iomem *reg;
+ 	u64 reg_val;
+ 
+@@ -107,6 +108,15 @@ again:
+ 		usleep_range(1, 5);
+ 		goto again;
+ 	}
++	/* If the CPU was scheduled out before the 'time_before' check
++	 * (above) and was scheduled back in only after jiffies passed the
++	 * timeout value, check once more whether the HW finished the
++	 * operation in the meantime.
++	 */
++	if (!twice) {
++		twice = true;
++		goto again;
++	}
+ 	return -EBUSY;
+ }
+ 
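The extra pass added above guards against a false -EBUSY when the polling task is preempted across the deadline. A hedged standalone sketch of the same pattern (poll_reg_done() and its arguments are hypothetical; the real helper also supports polling for all-ones):

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/jiffies.h>

static int poll_reg_done(void __iomem *reg, u64 mask, unsigned int timeout_us)
{
	unsigned long deadline = jiffies + usecs_to_jiffies(timeout_us);
	bool final_check = false;

	for (;;) {
		if (!(readq(reg) & mask))	/* busy bits cleared */
			return 0;
		if (time_before(jiffies, deadline)) {
			usleep_range(1, 5);
			continue;
		}
		if (!final_check) {
			/* Possibly scheduled out past the deadline:
			 * sample the register once more before failing.
			 */
			final_check = true;
			continue;
		}
		return -EBUSY;
	}
}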
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 3f8a98093f8cb..f9cf9fb315479 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -1007,7 +1007,7 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
+ 	err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
+ 	if (err) {
+ 		mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err);
+-		return err;
++		goto err_cancel_work;
+ 	}
+ 
+ 	err = mlx5_fw_tracer_create_mkey(tracer);
+@@ -1031,6 +1031,7 @@ err_notifier_unregister:
+ 	mlx5_core_destroy_mkey(dev, &tracer->buff.mkey);
+ err_dealloc_pd:
+ 	mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
++err_cancel_work:
+ 	cancel_work_sync(&tracer->read_fw_strings_work);
+ 	return err;
+ }
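The fw_tracer fix above completes a goto unwind ladder: read_fw_strings_work is queued before the PD allocation, but the PD failure path returned directly without cancelling it. A sketch of the finished ladder shape, with illustrative setup_*/teardown_* names (teardown_work() plays the role of cancel_work_sync()):

int setup_work(void);
void teardown_work(void);
int setup_pd(void);
void teardown_pd(void);
int setup_mkey(void);

int tracer_init(void)
{
	int err;

	err = setup_work();
	if (err)
		return err;

	err = setup_pd();
	if (err)
		goto err_cancel_work;	/* the label this fix adds */

	err = setup_mkey();
	if (err)
		goto err_dealloc_pd;

	return 0;

err_dealloc_pd:
	teardown_pd();
err_cancel_work:
	teardown_work();
	return err;
}

Each label undoes exactly the steps that succeeded before the jump, in reverse order, so adding a setup step costs only one new label.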
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index b1b51bbba0541..3f67efbe12fc5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -940,7 +940,7 @@ void mlx5e_set_rx_mode_work(struct work_struct *work);
+ 
+ int mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr);
+ int mlx5e_hwstamp_get(struct mlx5e_priv *priv, struct ifreq *ifr);
+-int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool val);
++int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool val, bool rx_filter);
+ 
+ int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
+ 			  u16 vid);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+index 059799e4f483f..ef271b97fe5ef 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
+@@ -300,9 +300,6 @@ mlx5e_rep_indr_block_priv_lookup(struct mlx5e_rep_priv *rpriv,
+ {
+ 	struct mlx5e_rep_indr_block_priv *cb_priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv,
+ 			    &rpriv->uplink_priv.tc_indr_block_priv_list,
+ 			    list)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index bd72572e03d1d..1cc279d389d6f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1882,7 +1882,7 @@ static int set_pflag_rx_cqe_based_moder(struct net_device *netdev, bool enable)
+ 	return set_pflag_cqe_based_moder(netdev, enable, true);
+ }
+ 
+-int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val)
++int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val, bool rx_filter)
+ {
+ 	bool curr_val = MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS);
+ 	struct mlx5e_params new_params;
+@@ -1894,8 +1894,7 @@ int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val
+ 	if (curr_val == new_val)
+ 		return 0;
+ 
+-	if (new_val && !priv->profile->rx_ptp_support &&
+-	    priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE) {
++	if (new_val && !priv->profile->rx_ptp_support && rx_filter) {
+ 		netdev_err(priv->netdev,
+ 			   "Profile doesn't support enabling of CQE compression while hardware time-stamping is enabled.\n");
+ 		return -EINVAL;
+@@ -1903,7 +1902,7 @@ int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val
+ 
+ 	new_params = priv->channels.params;
+ 	MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_RX_CQE_COMPRESS, new_val);
+-	if (priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE)
++	if (rx_filter)
+ 		new_params.ptp_rx = new_val;
+ 
+ 	if (new_params.ptp_rx == priv->channels.params.ptp_rx)
+@@ -1926,12 +1925,14 @@ static int set_pflag_rx_cqe_compress(struct net_device *netdev,
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
++	bool rx_filter;
+ 	int err;
+ 
+ 	if (!MLX5_CAP_GEN(mdev, cqe_compression))
+ 		return -EOPNOTSUPP;
+ 
+-	err = mlx5e_modify_rx_cqe_compression_locked(priv, enable);
++	rx_filter = priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE;
++	err = mlx5e_modify_rx_cqe_compression_locked(priv, enable, rx_filter);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 2d53eaf3b9241..fa718e71db2d4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4004,14 +4004,14 @@ static int mlx5e_hwstamp_config_no_ptp_rx(struct mlx5e_priv *priv, bool rx_filte
+ 
+ 	if (!rx_filter)
+ 		/* Reset CQE compression to Admin default */
+-		return mlx5e_modify_rx_cqe_compression_locked(priv, rx_cqe_compress_def);
++		return mlx5e_modify_rx_cqe_compression_locked(priv, rx_cqe_compress_def, false);
+ 
+ 	if (!MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS))
+ 		return 0;
+ 
+ 	/* Disable CQE compression */
+ 	netdev_warn(priv->netdev, "Disabling RX cqe compression\n");
+-	err = mlx5e_modify_rx_cqe_compression_locked(priv, false);
++	err = mlx5e_modify_rx_cqe_compression_locked(priv, false, true);
+ 	if (err)
+ 		netdev_err(priv->netdev, "Failed disabling cqe compression err=%d\n", err);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index c0697e1b71185..938ef5afe5053 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1682,14 +1682,13 @@ static int build_match_list(struct match_list *match_head,
+ 
+ 		curr_match = kmalloc(sizeof(*curr_match), GFP_ATOMIC);
+ 		if (!curr_match) {
++			rcu_read_unlock();
+ 			free_match_list(match_head, ft_locked);
+-			err = -ENOMEM;
+-			goto out;
++			return -ENOMEM;
+ 		}
+ 		curr_match->g = g;
+ 		list_add_tail(&curr_match->list, &match_head->list);
+ 	}
+-out:
+ 	rcu_read_unlock();
+ 	return err;
+ }
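The build_match_list() fix restores a basic RCU invariant: every exit from a read-side critical section must pass through rcu_read_unlock(), including error returns. (The GFP_ATOMIC allocation above exists for the same reason; sleeping is not allowed inside the read section.) A kernel-style sketch, assuming <linux/rculist.h>, with consume() as a hypothetical per-entry step that can fail:

#include <linux/errno.h>
#include <linux/rculist.h>

struct item {
	struct list_head node;
};

static LIST_HEAD(items);

int consume(struct item *it);	/* hypothetical; nonzero on failure */

int walk_items(void)
{
	struct item *it;

	rcu_read_lock();
	list_for_each_entry_rcu(it, &items, node) {
		if (consume(it)) {
			rcu_read_unlock();	/* unlock before the error return */
			return -ENOMEM;
		}
	}
	rcu_read_unlock();
	return 0;
}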
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+index a0a059e0154ff..04c7dc224effa 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+@@ -142,6 +142,13 @@ static int mlxbf_gige_open(struct net_device *netdev)
+ 	err = mlxbf_gige_clean_port(priv);
+ 	if (err)
+ 		goto free_irqs;
++
++	/* Clear driver's valid_polarity to match hardware,
++	 * since the above call to clean_port() resets the
++	 * receive polarity used by hardware.
++	 */
++	priv->valid_polarity = 0;
++
+ 	err = mlxbf_gige_rx_init(priv);
+ 	if (err)
+ 		goto free_irqs;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index 2406d33356ad2..d87a9eab25a79 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1766,9 +1766,6 @@ nfp_flower_indr_block_cb_priv_lookup(struct nfp_app *app,
+ 	struct nfp_flower_indr_block_cb_priv *cb_priv;
+ 	struct nfp_flower_priv *priv = app->priv;
+ 
+-	/* All callback list access should be protected by RTNL. */
+-	ASSERT_RTNL();
+-
+ 	list_for_each_entry(cb_priv, &priv->indr_block_cb_priv, list)
+ 		if (cb_priv->netdev == netdev)
+ 			return cb_priv;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index 4387292c37e2f..e8e17bfc41c54 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -3368,6 +3368,7 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
+ 			  struct qed_nvm_image_att *p_image_att)
+ {
+ 	enum nvm_image_type type;
++	int rc;
+ 	u32 i;
+ 
+ 	/* Translate image_id into MFW definitions */
+@@ -3396,7 +3397,10 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn,
+ 		return -EINVAL;
+ 	}
+ 
+-	qed_mcp_nvm_info_populate(p_hwfn);
++	rc = qed_mcp_nvm_info_populate(p_hwfn);
++	if (rc)
++		return rc;
++
+ 	for (i = 0; i < p_hwfn->nvm_info.num_images; i++)
+ 		if (type == p_hwfn->nvm_info.image_att[i].image_type)
+ 			break;
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
+index e6784023bce42..aa7ee43f92525 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c
+@@ -439,7 +439,6 @@ int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter)
+ 	QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, 1);
+ 	msleep(20);
+ 
+-	qlcnic_rom_unlock(adapter);
+ 	/* big hammer don't reset CAM block on reset */
+ 	QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0xfeffffff);
+ 
+diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c
+index 47e9998b62f09..6a2416bec7ddc 100644
+--- a/drivers/net/ethernet/rdc/r6040.c
++++ b/drivers/net/ethernet/rdc/r6040.c
+@@ -119,6 +119,8 @@
+ #define PHY_ST		0x8A	/* PHY status register */
+ #define MAC_SM		0xAC	/* MAC status machine */
+ #define  MAC_SM_RST	0x0002	/* MAC status machine reset */
++#define MD_CSC		0xb6	/* MDC speed control register */
++#define  MD_CSC_DEFAULT	0x0030
+ #define MAC_ID		0xBE	/* Identifier register */
+ 
+ #define TX_DCNT		0x80	/* TX descriptor count */
+@@ -355,8 +357,9 @@ static void r6040_reset_mac(struct r6040_private *lp)
+ {
+ 	void __iomem *ioaddr = lp->base;
+ 	int limit = MAC_DEF_TIMEOUT;
+-	u16 cmd;
++	u16 cmd, md_csc;
+ 
++	md_csc = ioread16(ioaddr + MD_CSC);
+ 	iowrite16(MAC_RST, ioaddr + MCR1);
+ 	while (limit--) {
+ 		cmd = ioread16(ioaddr + MCR1);
+@@ -368,6 +371,10 @@ static void r6040_reset_mac(struct r6040_private *lp)
+ 	iowrite16(MAC_SM_RST, ioaddr + MAC_SM);
+ 	iowrite16(0, ioaddr + MAC_SM);
+ 	mdelay(5);
++
++	/* Restore MDIO clock frequency */
++	if (md_csc != MD_CSC_DEFAULT)
++		iowrite16(md_csc, ioaddr + MD_CSC);
+ }
+ 
+ static void r6040_init_mac_regs(struct net_device *dev)
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 840478692a370..dfd439eadd492 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -2533,6 +2533,7 @@ static netdev_tx_t sh_eth_start_xmit(struct sk_buff *skb,
+ 	else
+ 		txdesc->status |= cpu_to_le32(TD_TACT);
+ 
++	wmb(); /* cur_tx must be incremented after TACT bit was set */
+ 	mdp->cur_tx++;
+ 
+ 	if (!(sh_eth_read(ndev, EDTRR) & mdp->cd->edtrr_trns))
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index 4c9a37dd0d3ff..ecf759ee1c9f5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -109,8 +109,10 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ 		plat->bus_id = pci_dev_id(pdev);
+ 
+ 	phy_mode = device_get_phy_mode(&pdev->dev);
+-	if (phy_mode < 0)
++	if (phy_mode < 0) {
+ 		dev_err(&pdev->dev, "phy_mode not found\n");
++		return phy_mode;
++	}
+ 
+ 	plat->phy_interface = phy_mode;
+ 	plat->interface = PHY_INTERFACE_MODE_GMII;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 8a150cc462dcf..0dbd189c2721d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7113,13 +7113,10 @@ int stmmac_suspend(struct device *dev)
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	u32 chan;
+-	int ret;
+ 
+ 	if (!ndev || !netif_running(ndev))
+ 		return 0;
+ 
+-	phylink_mac_change(priv->phylink, false);
+-
+ 	mutex_lock(&priv->lock);
+ 
+ 	netif_device_detach(ndev);
+@@ -7145,27 +7142,22 @@ int stmmac_suspend(struct device *dev)
+ 		stmmac_pmt(priv, priv->hw, priv->wolopts);
+ 		priv->irq_wake = 1;
+ 	} else {
+-		mutex_unlock(&priv->lock);
+-		rtnl_lock();
+-		if (device_may_wakeup(priv->device))
+-			phylink_speed_down(priv->phylink, false);
+-		phylink_stop(priv->phylink);
+-		rtnl_unlock();
+-		mutex_lock(&priv->lock);
+-
+ 		stmmac_mac_set(priv, priv->ioaddr, false);
+ 		pinctrl_pm_select_sleep_state(priv->device);
+-		/* Disable clock in case of PWM is off */
+-		clk_disable_unprepare(priv->plat->clk_ptp_ref);
+-		ret = pm_runtime_force_suspend(dev);
+-		if (ret) {
+-			mutex_unlock(&priv->lock);
+-			return ret;
+-		}
+ 	}
+ 
+ 	mutex_unlock(&priv->lock);
+ 
++	rtnl_lock();
++	if (device_may_wakeup(priv->device) && priv->plat->pmt) {
++		phylink_suspend(priv->phylink, true);
++	} else {
++		if (device_may_wakeup(priv->device))
++			phylink_speed_down(priv->phylink, false);
++		phylink_suspend(priv->phylink, false);
++	}
++	rtnl_unlock();
++
+ 	if (priv->dma_cap.fpesel) {
+ 		/* Disable FPE */
+ 		stmmac_fpe_configure(priv, priv->ioaddr,
+@@ -7237,12 +7229,6 @@ int stmmac_resume(struct device *dev)
+ 		priv->irq_wake = 0;
+ 	} else {
+ 		pinctrl_pm_select_default_state(priv->device);
+-		/* enable the clk previously disabled */
+-		ret = pm_runtime_force_resume(dev);
+-		if (ret)
+-			return ret;
+-		if (priv->plat->clk_ptp_ref)
+-			clk_prepare_enable(priv->plat->clk_ptp_ref);
+ 		/* reset the phy so that it's ready */
+ 		if (priv->mii)
+ 			stmmac_mdio_reset(priv->mii);
+@@ -7256,13 +7242,15 @@ int stmmac_resume(struct device *dev)
+ 			return ret;
+ 	}
+ 
+-	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
+-		rtnl_lock();
+-		phylink_start(priv->phylink);
+-		/* We may have called phylink_speed_down before */
+-		phylink_speed_up(priv->phylink);
+-		rtnl_unlock();
++	rtnl_lock();
++	if (device_may_wakeup(priv->device) && priv->plat->pmt) {
++		phylink_resume(priv->phylink);
++	} else {
++		phylink_resume(priv->phylink);
++		if (device_may_wakeup(priv->device))
++			phylink_speed_up(priv->phylink);
+ 	}
++	rtnl_unlock();
+ 
+ 	rtnl_lock();
+ 	mutex_lock(&priv->lock);
+@@ -7283,8 +7271,6 @@ int stmmac_resume(struct device *dev)
+ 	mutex_unlock(&priv->lock);
+ 	rtnl_unlock();
+ 
+-	phylink_mac_change(priv->phylink, true);
+-
+ 	netif_device_attach(ndev);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 5ca710844cc1e..62cec9bfcd337 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -9,6 +9,7 @@
+ *******************************************************************************/
+ 
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/module.h>
+ #include <linux/io.h>
+ #include <linux/of.h>
+@@ -771,9 +772,52 @@ static int __maybe_unused stmmac_runtime_resume(struct device *dev)
+ 	return stmmac_bus_clks_config(priv, true);
+ }
+ 
++static int __maybe_unused stmmac_pltfr_noirq_suspend(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++	int ret;
++
++	if (!netif_running(ndev))
++		return 0;
++
++	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
++		/* Disable clock in case of PWM is off */
++		/* Disable the clock in case PWM is off */
++
++		ret = pm_runtime_force_suspend(dev);
++		if (ret)
++			return ret;
++	}
++
++	return 0;
++}
++
++static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev)
++{
++	struct net_device *ndev = dev_get_drvdata(dev);
++	struct stmmac_priv *priv = netdev_priv(ndev);
++	int ret;
++
++	if (!netif_running(ndev))
++		return 0;
++
++	if (!device_may_wakeup(priv->device) || !priv->plat->pmt) {
++		/* enable the clk previously disabled */
++		ret = pm_runtime_force_resume(dev);
++		if (ret)
++			return ret;
++
++		clk_prepare_enable(priv->plat->clk_ptp_ref);
++	}
++
++	return 0;
++}
++
+ const struct dev_pm_ops stmmac_pltfr_pm_ops = {
+ 	SET_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_suspend, stmmac_pltfr_resume)
+ 	SET_RUNTIME_PM_OPS(stmmac_runtime_suspend, stmmac_runtime_resume, NULL)
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(stmmac_pltfr_noirq_suspend, stmmac_pltfr_noirq_resume)
+ };
+ EXPORT_SYMBOL_GPL(stmmac_pltfr_pm_ops);
+ 
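SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() registers callbacks for the noirq phase, which runs near the end of suspend, after the device's interrupt handlers have been disabled, and early in resume before they are re-enabled; the PTP clock gating and pm_runtime_force_suspend() are moved there above. A minimal sketch of the resulting ops table, assuming <linux/pm.h> and the callbacks defined elsewhere:

#include <linux/pm.h>

static int __maybe_unused my_suspend(struct device *dev);
static int __maybe_unused my_resume(struct device *dev);
static int __maybe_unused my_runtime_suspend(struct device *dev);
static int __maybe_unused my_runtime_resume(struct device *dev);
static int __maybe_unused my_noirq_suspend(struct device *dev);
static int __maybe_unused my_noirq_resume(struct device *dev);

static const struct dev_pm_ops my_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(my_suspend, my_resume)
	SET_RUNTIME_PM_OPS(my_runtime_suspend, my_runtime_resume, NULL)
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(my_noirq_suspend, my_noirq_resume)
};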
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index c607ebec74567..656f6ef31b19e 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -430,7 +430,8 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+ 	 * table region determines the number of entries it has.
+ 	 */
+ 	if (filter) {
+-		count = hweight32(ipa->filter_map);
++		/* Include one extra "slot" to hold the filter map itself */
++		count = 1 + hweight32(ipa->filter_map);
+ 		hash_count = hash_mem->size ? count : 0;
+ 	} else {
+ 		count = mem->size / sizeof(__le64);
+diff --git a/drivers/net/phy/dp83640_reg.h b/drivers/net/phy/dp83640_reg.h
+index 21aa24c741b96..daae7fa58fb82 100644
+--- a/drivers/net/phy/dp83640_reg.h
++++ b/drivers/net/phy/dp83640_reg.h
+@@ -5,7 +5,7 @@
+ #ifndef HAVE_DP83640_REGISTERS
+ #define HAVE_DP83640_REGISTERS
+ 
+-#define PAGE0                     0x0000
++/* #define PAGE0                  0x0000 */
+ #define PHYCR2                    0x001c /* PHY Control Register 2 */
+ 
+ #define PAGE4                     0x0004
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index eb29ef53d971d..42e5a681183f3 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -33,6 +33,7 @@
+ enum {
+ 	PHYLINK_DISABLE_STOPPED,
+ 	PHYLINK_DISABLE_LINK,
++	PHYLINK_DISABLE_MAC_WOL,
+ };
+ 
+ /**
+@@ -1281,6 +1282,9 @@ EXPORT_SYMBOL_GPL(phylink_start);
+  * network device driver's &struct net_device_ops ndo_stop() method.  The
+  * network device's carrier state should not be changed prior to calling this
+  * function.
++ *
++ * This will synchronously bring down the link if the link is not already
++ * down (in other words, it will trigger a mac_link_down() method call).
+  */
+ void phylink_stop(struct phylink *pl)
+ {
+@@ -1300,6 +1304,84 @@ void phylink_stop(struct phylink *pl)
+ }
+ EXPORT_SYMBOL_GPL(phylink_stop);
+ 
++/**
++ * phylink_suspend() - handle a network device suspend event
++ * @pl: a pointer to a &struct phylink returned from phylink_create()
++ * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
++ *
++ * Handle a network device suspend event. There are several cases:
++ * - If Wake-on-Lan is not active, we can bring down the link between
++ *   the MAC and PHY by calling phylink_stop().
++ * - If Wake-on-Lan is active, and being handled only by the PHY, we
++ *   can also bring down the link between the MAC and PHY.
++ * - If Wake-on-Lan is active, but being handled by the MAC, the MAC
++ *   still needs to receive packets, so we can not bring the link down.
++ */
++void phylink_suspend(struct phylink *pl, bool mac_wol)
++{
++	ASSERT_RTNL();
++
++	if (mac_wol && (!pl->netdev || pl->netdev->wol_enabled)) {
++		/* Wake-on-Lan enabled, MAC handling */
++		mutex_lock(&pl->state_mutex);
++
++		/* Stop the resolver bringing the link up */
++		__set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
++
++		/* Disable the carrier to prevent transmit timeouts
++		 * (one would hope all packets have been sent). This
++		 * also means phylink_resolve() will do nothing.
++		 */
++		netif_carrier_off(pl->netdev);
++
++		/* We do not call mac_link_down() here as we want the
++		 * link to remain up to receive the WoL packets.
++		 */
++		mutex_unlock(&pl->state_mutex);
++	} else {
++		phylink_stop(pl);
++	}
++}
++EXPORT_SYMBOL_GPL(phylink_suspend);
++
++/**
++ * phylink_resume() - handle a network device resume event
++ * @pl: a pointer to a &struct phylink returned from phylink_create()
++ *
++ * Undo the effects of phylink_suspend(), returning the link to an
++ * operational state.
++ */
++void phylink_resume(struct phylink *pl)
++{
++	ASSERT_RTNL();
++
++	if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {
++		/* Wake-on-Lan enabled, MAC handling */
++
++		/* Call mac_link_down() so we keep the overall state balanced.
++		 * Do this under the state_mutex lock for consistency. This
++		 * will cause a "Link Down" message to be printed during
++		 * resume, which is harmless - the true link state will be
++		 * printed when we run a resolve.
++		 */
++		mutex_lock(&pl->state_mutex);
++		phylink_link_down(pl);
++		mutex_unlock(&pl->state_mutex);
++
++		/* Re-apply the link parameters so that all the settings get
++		 * restored to the MAC.
++		 */
++		phylink_mac_initial_config(pl, true);
++
++		/* Re-enable and re-resolve the link parameters */
++		clear_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
++		phylink_run_resolve(pl);
++	} else {
++		phylink_start(pl);
++	}
++}
++EXPORT_SYMBOL_GPL(phylink_resume);
++
+ /**
+  * phylink_ethtool_get_wol() - get the wake on lan parameters for the PHY
+  * @pl: a pointer to a &struct phylink returned from phylink_create()
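The stmmac hunks earlier in this patch show the intended caller pattern for these two helpers: the driver decides whether the MAC itself must keep receiving for Wake-on-LAN and holds the RTNL lock around the call. A sketch with a hypothetical driver, where priv->wol_in_mac stands in for stmmac's plat->pmt flag:

#include <linux/device.h>
#include <linux/phylink.h>
#include <linux/rtnetlink.h>

struct my_priv {
	struct phylink *phylink;
	bool wol_in_mac;	/* hypothetical: MAC-based WoL available */
};

static int my_mac_suspend(struct device *dev)
{
	struct my_priv *priv = dev_get_drvdata(dev);

	rtnl_lock();
	if (device_may_wakeup(dev) && priv->wol_in_mac) {
		/* MAC keeps receiving for WoL: leave the link up */
		phylink_suspend(priv->phylink, true);
	} else {
		if (device_may_wakeup(dev))
			phylink_speed_down(priv->phylink, false);
		phylink_suspend(priv->phylink, false);
	}
	rtnl_unlock();

	return 0;
}

On resume the driver calls phylink_resume() under the same lock, again matching the stmmac conversion above.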
+diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
+index 4c4ab7b38d78c..82bb5ed94c485 100644
+--- a/drivers/net/usb/cdc_mbim.c
++++ b/drivers/net/usb/cdc_mbim.c
+@@ -654,6 +654,11 @@ static const struct usb_device_id mbim_devs[] = {
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ 	},
+ 
++	/* Telit LN920 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1061, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
++	},
++
+ 	/* default entry */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index dec96e8ab5679..18e0ca85f6537 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2536,13 +2536,17 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	if (!hso_net->mux_bulk_tx_buf)
+ 		goto err_free_tx_urb;
+ 
+-	add_net_device(hso_dev);
++	result = add_net_device(hso_dev);
++	if (result) {
++		dev_err(&interface->dev, "Failed to add net device\n");
++		goto err_free_tx_buf;
++	}
+ 
+ 	/* registering our net device */
+ 	result = register_netdev(net);
+ 	if (result) {
+ 		dev_err(&interface->dev, "Failed to register device\n");
+-		goto err_free_tx_buf;
++		goto err_rmv_ndev;
+ 	}
+ 
+ 	hso_log_port(hso_dev);
+@@ -2551,8 +2555,9 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 
+ 	return hso_dev;
+ 
+-err_free_tx_buf:
++err_rmv_ndev:
+ 	remove_net_device(hso_dev);
++err_free_tx_buf:
+ 	kfree(hso_net->mux_bulk_tx_buf);
+ err_free_tx_urb:
+ 	usb_free_urb(hso_net->mux_bulk_tx_urb);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index b4b1f75b9c2a8..513f9e5387290 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -230,19 +230,11 @@ static int iwl_pnvm_parse(struct iwl_trans *trans, const u8 *data,
+ static int iwl_pnvm_get_from_fs(struct iwl_trans *trans, u8 **data, size_t *len)
+ {
+ 	const struct firmware *pnvm;
+-	char pnvm_name[64];
++	char pnvm_name[MAX_PNVM_NAME];
++	size_t new_len;
+ 	int ret;
+ 
+-	/*
+-	 * The prefix unfortunately includes a hyphen at the end, so
+-	 * don't add the dot here...
+-	 */
+-	snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm",
+-		 trans->cfg->fw_name_pre);
+-
+-	/* ...but replace the hyphen with the dot here. */
+-	if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name))
+-		pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.';
++	iwl_pnvm_get_fs_name(trans, pnvm_name, sizeof(pnvm_name));
+ 
+ 	ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev);
+ 	if (ret) {
+@@ -251,11 +243,14 @@ static int iwl_pnvm_get_from_fs(struct iwl_trans *trans, u8 **data, size_t *len)
+ 		return ret;
+ 	}
+ 
++	new_len = pnvm->size;
+ 	*data = kmemdup(pnvm->data, pnvm->size, GFP_KERNEL);
++	release_firmware(pnvm);
++
+ 	if (!*data)
+ 		return -ENOMEM;
+ 
+-	*len = pnvm->size;
++	*len = new_len;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+index 61d3d4e0b7d94..203c367dd4dee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+@@ -12,7 +12,27 @@
+ 
+ #define MVM_UCODE_PNVM_TIMEOUT	(HZ / 4)
+ 
++#define MAX_PNVM_NAME  64
++
+ int iwl_pnvm_load(struct iwl_trans *trans,
+ 		  struct iwl_notif_wait_data *notif_wait);
+ 
++static inline
++void iwl_pnvm_get_fs_name(struct iwl_trans *trans,
++			  u8 *pnvm_name, size_t max_len)
++{
++	int pre_len;
++
++	/*
++	 * The prefix unfortunately includes a hyphen at the end, so
++	 * don't add the dot here...
++	 */
++	snprintf(pnvm_name, max_len, "%spnvm", trans->cfg->fw_name_pre);
++
++	/* ...but replace the hyphen with the dot here. */
++	pre_len = strlen(trans->cfg->fw_name_pre);
++	if (pre_len < max_len && pre_len > 0)
++		pnvm_name[pre_len - 1] = '.';
++}
++
+ #endif /* __IWL_PNVM_H__ */
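The reworked iwl_pnvm_get_from_fs() also fixes the lifetime of the firmware blob: the size is captured and the data duplicated before release_firmware(), so neither the success path nor the -ENOMEM path leaks the struct firmware. The shape of that idiom, as a sketch assuming <linux/firmware.h>:

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/slab.h>

int load_blob(struct device *dev, const char *name, u8 **data, size_t *len)
{
	const struct firmware *fw;
	size_t size;
	int ret;

	ret = firmware_request_nowarn(&fw, name, dev);
	if (ret)
		return ret;

	/* Copy out what we need, then release the blob unconditionally. */
	size = fw->size;
	*data = kmemdup(fw->data, size, GFP_KERNEL);
	release_firmware(fw);

	if (!*data)
		return -ENOMEM;

	*len = size;
	return 0;
}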
+diff --git a/drivers/ntb/test/ntb_msi_test.c b/drivers/ntb/test/ntb_msi_test.c
+index 7095ecd6223a7..4e18e08776c98 100644
+--- a/drivers/ntb/test/ntb_msi_test.c
++++ b/drivers/ntb/test/ntb_msi_test.c
+@@ -369,8 +369,10 @@ static int ntb_msit_probe(struct ntb_client *client, struct ntb_dev *ntb)
+ 	if (ret)
+ 		goto remove_dbgfs;
+ 
+-	if (!nm->isr_ctx)
++	if (!nm->isr_ctx) {
++		ret = -ENOMEM;
+ 		goto remove_dbgfs;
++	}
+ 
+ 	ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO);
+ 
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 89df1350fefd8..65e1e5cf1b29a 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -598,6 +598,7 @@ static int perf_setup_inbuf(struct perf_peer *peer)
+ 		return -ENOMEM;
+ 	}
+ 	if (!IS_ALIGNED(peer->inbuf_xlat, xlat_align)) {
++		ret = -EINVAL;
+ 		dev_err(&perf->ntb->dev, "Unaligned inbuf allocated\n");
+ 		goto err_free_inbuf;
+ 	}
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 2f0cbaba12ac4..84e7cb9f19681 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3496,7 +3496,9 @@ static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
+ 	lockdep_assert_held(&subsys->lock);
+ 
+ 	list_for_each_entry(h, &subsys->nsheads, entry) {
+-		if (h->ns_id == nsid && nvme_tryget_ns_head(h))
++		if (h->ns_id != nsid)
++			continue;
++		if (!list_empty(&h->list) && nvme_tryget_ns_head(h))
+ 			return h;
+ 	}
+ 
+@@ -3821,6 +3823,10 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 
+ 	mutex_lock(&ns->ctrl->subsys->lock);
+ 	list_del_rcu(&ns->siblings);
++	if (list_empty(&ns->head->list)) {
++		list_del_init(&ns->head->entry);
++		last_path = true;
++	}
+ 	mutex_unlock(&ns->ctrl->subsys->lock);
+ 
+ 	synchronize_rcu(); /* guarantee not available in head->list */
+@@ -3840,13 +3846,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 	list_del_init(&ns->list);
+ 	up_write(&ns->ctrl->namespaces_rwsem);
+ 
+-	/* Synchronize with nvme_init_ns_head() */
+-	mutex_lock(&ns->head->subsys->lock);
+-	if (list_empty(&ns->head->list)) {
+-		list_del_init(&ns->head->entry);
+-		last_path = true;
+-	}
+-	mutex_unlock(&ns->head->subsys->lock);
+ 	if (last_path)
+ 		nvme_mpath_shutdown_disk(ns->head);
+ 	nvme_put_ns(ns);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 48b70e5235a39..19a711395cdc3 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -273,6 +273,12 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
+ 	} while (ret > 0);
+ }
+ 
++static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
++{
++	return !list_empty(&queue->send_list) ||
++		!llist_empty(&queue->req_list) || queue->more_requests;
++}
++
+ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 		bool sync, bool last)
+ {
+@@ -293,9 +299,10 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
+ 		nvme_tcp_send_all(queue);
+ 		queue->more_requests = false;
+ 		mutex_unlock(&queue->send_mutex);
+-	} else if (last) {
+-		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+ 	}
++
++	if (last && nvme_tcp_queue_more(queue))
++		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
+ }
+ 
+ static void nvme_tcp_process_req_list(struct nvme_tcp_queue *queue)
+@@ -893,12 +900,6 @@ done:
+ 	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+-static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+-{
+-	return !list_empty(&queue->send_list) ||
+-		!llist_empty(&queue->req_list) || queue->more_requests;
+-}
+-
+ static inline void nvme_tcp_done_send_req(struct nvme_tcp_queue *queue)
+ {
+ 	queue->request = NULL;
+@@ -1132,8 +1133,7 @@ static void nvme_tcp_io_work(struct work_struct *w)
+ 				pending = true;
+ 			else if (unlikely(result < 0))
+ 				break;
+-		} else
+-			pending = !llist_empty(&queue->req_list);
++		}
+ 
+ 		result = nvme_tcp_try_recv(queue);
+ 		if (result > 0)
+diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
+index 5e1e3796efa4e..326f7d13024f9 100644
+--- a/drivers/pci/controller/Kconfig
++++ b/drivers/pci/controller/Kconfig
+@@ -40,6 +40,7 @@ config PCI_FTPCI100
+ config PCI_IXP4XX
+ 	bool "Intel IXP4xx PCI controller"
+ 	depends on ARM && OF
++	depends on ARCH_IXP4XX || COMPILE_TEST
+ 	default ARCH_IXP4XX
+ 	help
+ 	  Say Y here if you want support for the PCI host controller found
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 35e61048e133c..ffb176d288cd9 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -27,6 +27,7 @@
+ #define STATUS_REG_SYS_2	0x508
+ #define STATUS_CLR_REG_SYS_2	0x708
+ #define LINK_DOWN		BIT(1)
++#define J7200_LINK_DOWN		BIT(10)
+ 
+ #define J721E_PCIE_USER_CMD_STATUS	0x4
+ #define LINK_TRAINING_ENABLE		BIT(0)
+@@ -57,6 +58,7 @@ struct j721e_pcie {
+ 	struct cdns_pcie	*cdns_pcie;
+ 	void __iomem		*user_cfg_base;
+ 	void __iomem		*intd_cfg_base;
++	u32			linkdown_irq_regfield;
+ };
+ 
+ enum j721e_pcie_mode {
+@@ -66,7 +68,10 @@ enum j721e_pcie_mode {
+ 
+ struct j721e_pcie_data {
+ 	enum j721e_pcie_mode	mode;
+-	bool quirk_retrain_flag;
++	unsigned int		quirk_retrain_flag:1;
++	unsigned int		quirk_detect_quiet_flag:1;
++	u32			linkdown_irq_regfield;
++	unsigned int		byte_access_allowed:1;
+ };
+ 
+ static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
+@@ -98,12 +103,12 @@ static irqreturn_t j721e_pcie_link_irq_handler(int irq, void *priv)
+ 	u32 reg;
+ 
+ 	reg = j721e_pcie_intd_readl(pcie, STATUS_REG_SYS_2);
+-	if (!(reg & LINK_DOWN))
++	if (!(reg & pcie->linkdown_irq_regfield))
+ 		return IRQ_NONE;
+ 
+ 	dev_err(dev, "LINK DOWN!\n");
+ 
+-	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, LINK_DOWN);
++	j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, pcie->linkdown_irq_regfield);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -112,7 +117,7 @@ static void j721e_pcie_config_link_irq(struct j721e_pcie *pcie)
+ 	u32 reg;
+ 
+ 	reg = j721e_pcie_intd_readl(pcie, ENABLE_REG_SYS_2);
+-	reg |= LINK_DOWN;
++	reg |= pcie->linkdown_irq_regfield;
+ 	j721e_pcie_intd_writel(pcie, ENABLE_REG_SYS_2, reg);
+ }
+ 
+@@ -284,10 +289,36 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
+ static const struct j721e_pcie_data j721e_pcie_rc_data = {
+ 	.mode = PCI_MODE_RC,
+ 	.quirk_retrain_flag = true,
++	.byte_access_allowed = false,
++	.linkdown_irq_regfield = LINK_DOWN,
+ };
+ 
+ static const struct j721e_pcie_data j721e_pcie_ep_data = {
+ 	.mode = PCI_MODE_EP,
++	.linkdown_irq_regfield = LINK_DOWN,
++};
++
++static const struct j721e_pcie_data j7200_pcie_rc_data = {
++	.mode = PCI_MODE_RC,
++	.quirk_detect_quiet_flag = true,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
++	.byte_access_allowed = true,
++};
++
++static const struct j721e_pcie_data j7200_pcie_ep_data = {
++	.mode = PCI_MODE_EP,
++	.quirk_detect_quiet_flag = true,
++};
++
++static const struct j721e_pcie_data am64_pcie_rc_data = {
++	.mode = PCI_MODE_RC,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
++	.byte_access_allowed = true,
++};
++
++static const struct j721e_pcie_data am64_pcie_ep_data = {
++	.mode = PCI_MODE_EP,
++	.linkdown_irq_regfield = J7200_LINK_DOWN,
+ };
+ 
+ static const struct of_device_id of_j721e_pcie_match[] = {
+@@ -299,6 +330,22 @@ static const struct of_device_id of_j721e_pcie_match[] = {
+ 		.compatible = "ti,j721e-pcie-ep",
+ 		.data = &j721e_pcie_ep_data,
+ 	},
++	{
++		.compatible = "ti,j7200-pcie-host",
++		.data = &j7200_pcie_rc_data,
++	},
++	{
++		.compatible = "ti,j7200-pcie-ep",
++		.data = &j7200_pcie_ep_data,
++	},
++	{
++		.compatible = "ti,am64-pcie-host",
++		.data = &am64_pcie_rc_data,
++	},
++	{
++		.compatible = "ti,am64-pcie-ep",
++		.data = &am64_pcie_ep_data,
++	},
+ 	{},
+ };
+ 
+@@ -332,6 +379,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->dev = dev;
+ 	pcie->mode = mode;
++	pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;
+ 
+ 	base = devm_platform_ioremap_resource_byname(pdev, "intd_cfg");
+ 	if (IS_ERR(base))
+@@ -391,9 +439,11 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 			goto err_get_sync;
+ 		}
+ 
+-		bridge->ops = &cdns_ti_pcie_host_ops;
++		if (!data->byte_access_allowed)
++			bridge->ops = &cdns_ti_pcie_host_ops;
+ 		rc = pci_host_bridge_priv(bridge);
+ 		rc->quirk_retrain_flag = data->quirk_retrain_flag;
++		rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
+ 
+ 		cdns_pcie = &rc->pcie;
+ 		cdns_pcie->dev = dev;
+@@ -459,6 +509,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 			ret = -ENOMEM;
+ 			goto err_get_sync;
+ 		}
++		ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
+ 
+ 		cdns_pcie = &ep->pcie;
+ 		cdns_pcie->dev = dev;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 897cdde02bd80..dd7df1ac7fda2 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -623,6 +623,10 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ 	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
+ 	/* Reserve region 0 for IRQs */
+ 	set_bit(0, &ep->ob_region_map);
++
++	if (ep->quirk_detect_quiet_flag)
++		cdns_pcie_detect_quiet_min_delay_set(&ep->pcie);
++
+ 	spin_lock_init(&ep->lock);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index ae1c55503513a..fb96d37a135c1 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -498,6 +498,9 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ 		return PTR_ERR(rc->cfg_base);
+ 	rc->cfg_res = res;
+ 
++	if (rc->quirk_detect_quiet_flag)
++		cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
++
+ 	ret = cdns_pcie_start_link(pcie);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to start link\n");
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c
+index 3c3646502d05c..52767f26048fd 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.c
++++ b/drivers/pci/controller/cadence/pcie-cadence.c
+@@ -7,6 +7,22 @@
+ 
+ #include "pcie-cadence.h"
+ 
++void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
++{
++	u32 delay = 0x3;
++	u32 ltssm_control_cap;
++
++	/*
++	 * Set the LTSSM Detect Quiet state min. delay to 2ms.
++	 */
++	ltssm_control_cap = cdns_pcie_readl(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP);
++	ltssm_control_cap = ((ltssm_control_cap &
++			    ~CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) |
++			    CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay));
++
++	cdns_pcie_writel(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP, ltssm_control_cap);
++}
++
+ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ 				   u32 r, bool is_io,
+ 				   u64 cpu_addr, u64 pci_addr, size_t size)
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index 30db2d68c17a0..4bde99b74135d 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -189,6 +189,14 @@
+ /* AXI link down register */
+ #define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
+ 
++/* LTSSM Capabilities register */
++#define CDNS_PCIE_LTSSM_CONTROL_CAP             (CDNS_PCIE_LM_BASE + 0x0054)
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK  GENMASK(2, 1)
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
++#define  CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
++	 (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
++	 CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
++
+ enum cdns_pcie_rp_bar {
+ 	RP_BAR_UNDEFINED = -1,
+ 	RP_BAR0,
+@@ -295,6 +303,7 @@ struct cdns_pcie {
+  * @avail_ib_bar: Status of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
+  *                available
+  * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
++ * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
+  */
+ struct cdns_pcie_rc {
+ 	struct cdns_pcie	pcie;
+@@ -303,7 +312,8 @@ struct cdns_pcie_rc {
+ 	u32			vendor_id;
+ 	u32			device_id;
+ 	bool			avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
+-	bool                    quirk_retrain_flag;
++	unsigned int		quirk_retrain_flag:1;
++	unsigned int		quirk_detect_quiet_flag:1;
+ };
+ 
+ /**
+@@ -334,6 +344,7 @@ struct cdns_pcie_epf {
+  *        registers fields (RMW) accessible by both remote RC and EP to
+  *        minimize time between read and write
+  * @epf: Structure to hold info about endpoint function
++ * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
+  */
+ struct cdns_pcie_ep {
+ 	struct cdns_pcie	pcie;
+@@ -348,6 +359,7 @@ struct cdns_pcie_ep {
+ 	/* protect writing to PCI_STATUS while raising legacy interrupts */
+ 	spinlock_t		lock;
+ 	struct cdns_pcie_epf	*epf;
++	unsigned int		quirk_detect_quiet_flag:1;
+ };
+ 
+ 
+@@ -508,6 +520,9 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
+ 	return 0;
+ }
+ #endif
++
++void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
++
+ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ 				   u32 r, bool is_io,
+ 				   u64 cpu_addr, u64 pci_addr, size_t size);
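The CDNS_PCIE_DETECT_QUIET_MIN_DELAY macros above use the usual shift-and-mask idiom for a register field; a sketch of the read-modify-write they support, assuming GENMASK() from <linux/bits.h>:

#include <linux/bits.h>
#include <linux/types.h>

#define QUIET_DELAY_MASK	GENMASK(2, 1)
#define QUIET_DELAY_SHIFT	1
#define QUIET_DELAY(d)		(((d) << QUIET_DELAY_SHIFT) & QUIET_DELAY_MASK)

static u32 set_quiet_delay(u32 reg, u32 delay)
{
	reg &= ~QUIET_DELAY_MASK;	/* clear the old field */
	reg |= QUIET_DELAY(delay);	/* insert the new value */
	return reg;
}

<linux/bitfield.h> offers FIELD_PREP()/FIELD_GET() for the same job without hand-written shift constants.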
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 3ec7b29d5dc72..55c8afb9a8996 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -497,19 +497,19 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 	struct tegra_pcie_dw *pcie = arg;
+ 	struct dw_pcie_ep *ep = &pcie->pci.ep;
+ 	int spurious = 1;
+-	u32 val, tmp;
++	u32 status_l0, status_l1, link_status;
+ 
+-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
+-	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
++	status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
++	if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
++		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
+ 
+-		if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
++		if (status_l1 & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
+ 			pex_ep_event_hot_rst_done(pcie);
+ 
+-		if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
+-			tmp = appl_readl(pcie, APPL_LINK_STATUS);
+-			if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) {
++		if (status_l1 & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
++			link_status = appl_readl(pcie, APPL_LINK_STATUS);
++			if (link_status & APPL_LINK_STATUS_RDLH_LINK_UP) {
+ 				dev_dbg(pcie->dev, "Link is up with Host\n");
+ 				dw_pcie_ep_linkup(ep);
+ 			}
+@@ -518,11 +518,11 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 		spurious = 0;
+ 	}
+ 
+-	if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
+-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L1_15);
++	if (status_l0 & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
++		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
++		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_15);
+ 
+-		if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
++		if (status_l1 & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
+ 			return IRQ_WAKE_THREAD;
+ 
+ 		spurious = 0;
+@@ -530,8 +530,8 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
+ 
+ 	if (spurious) {
+ 		dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n",
+-			 val);
+-		appl_writel(pcie, val, APPL_INTR_STATUS_L0);
++			 status_l0);
++		appl_writel(pcie, status_l0, APPL_INTR_STATUS_L0);
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -1763,7 +1763,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ 	val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK);
+ 	val |= MSIX_ADDR_MATCH_LOW_OFF_EN;
+ 	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val);
+-	val = (lower_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
++	val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
+ 	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val);
+ 
+ 	ret = dw_pcie_ep_init_complete(ep);
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index c979229a6d0df..b358212d71ab7 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -2193,13 +2193,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ 		rp->np = port;
+ 
+ 		rp->base = devm_pci_remap_cfg_resource(dev, &rp->regs);
+-		if (IS_ERR(rp->base))
+-			return PTR_ERR(rp->base);
++		if (IS_ERR(rp->base)) {
++			err = PTR_ERR(rp->base);
++			goto err_node_put;
++		}
+ 
+ 		label = devm_kasprintf(dev, GFP_KERNEL, "pex-reset-%u", index);
+ 		if (!label) {
+-			dev_err(dev, "failed to create reset GPIO label\n");
+-			return -ENOMEM;
++			err = -ENOMEM;
++			goto err_node_put;
+ 		}
+ 
+ 		/*
+@@ -2217,7 +2219,8 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
+ 			} else {
+ 				dev_err(dev, "failed to get reset GPIO: %ld\n",
+ 					PTR_ERR(rp->reset_gpio));
+-				return PTR_ERR(rp->reset_gpio);
++				err = PTR_ERR(rp->reset_gpio);
++				goto err_node_put;
+ 			}
+ 		}
+ 
+diff --git a/drivers/pci/controller/pcie-iproc-bcma.c b/drivers/pci/controller/pcie-iproc-bcma.c
+index 56b8ee7bf3307..f918c713afb08 100644
+--- a/drivers/pci/controller/pcie-iproc-bcma.c
++++ b/drivers/pci/controller/pcie-iproc-bcma.c
+@@ -35,7 +35,6 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+ {
+ 	struct device *dev = &bdev->dev;
+ 	struct iproc_pcie *pcie;
+-	LIST_HEAD(resources);
+ 	struct pci_host_bridge *bridge;
+ 	int ret;
+ 
+@@ -60,19 +59,16 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+ 	pcie->mem.end = bdev->addr_s[0] + SZ_128M - 1;
+ 	pcie->mem.name = "PCIe MEM space";
+ 	pcie->mem.flags = IORESOURCE_MEM;
+-	pci_add_resource(&resources, &pcie->mem);
++	pci_add_resource(&bridge->windows, &pcie->mem);
++	ret = devm_request_pci_bus_resources(dev, &bridge->windows);
++	if (ret)
++		return ret;
+ 
+ 	pcie->map_irq = iproc_pcie_bcma_map_irq;
+ 
+-	ret = iproc_pcie_setup(pcie, &resources);
+-	if (ret) {
+-		dev_err(dev, "PCIe controller setup failed\n");
+-		pci_free_resource_list(&resources);
+-		return ret;
+-	}
+-
+ 	bcma_set_drvdata(bdev, pcie);
+-	return 0;
++
++	return iproc_pcie_setup(pcie, &bridge->windows);
+ }
+ 
+ static void iproc_pcie_bcma_remove(struct bcma_device *bdev)
+diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
+index b4a288e24aafb..c91d85b151290 100644
+--- a/drivers/pci/controller/pcie-rcar-ep.c
++++ b/drivers/pci/controller/pcie-rcar-ep.c
+@@ -492,9 +492,9 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev)
+ 	pcie->dev = dev;
+ 
+ 	pm_runtime_enable(dev);
+-	err = pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+-		dev_err(dev, "pm_runtime_get_sync failed\n");
++		dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ 		goto err_pm_disable;
+ 	}
+ 
+diff --git a/drivers/pci/hotplug/TODO b/drivers/pci/hotplug/TODO
+index a32070be5adf9..cc6194aa24c15 100644
+--- a/drivers/pci/hotplug/TODO
++++ b/drivers/pci/hotplug/TODO
+@@ -40,9 +40,6 @@ ibmphp:
+ 
+ * The return value of pci_hp_register() is not checked.
+ 
+-* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
+-  and once more in the error path of its caller ibmphp_access_ebda().
+-
+ * The various slot data structures are difficult to follow and need to be
+   simplified.  A lot of functions are too large and too complex, they need
+   to be broken up into smaller, manageable pieces.  Negative examples are
+diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c
+index 11a2661dc0627..7fb75401ad8a7 100644
+--- a/drivers/pci/hotplug/ibmphp_ebda.c
++++ b/drivers/pci/hotplug/ibmphp_ebda.c
+@@ -714,8 +714,7 @@ static int __init ebda_rsrc_controller(void)
+ 		/* init hpc structure */
+ 		hpc_ptr = alloc_ebda_hpc(slot_num, bus_num);
+ 		if (!hpc_ptr) {
+-			rc = -ENOMEM;
+-			goto error_no_hpc;
++			return -ENOMEM;
+ 		}
+ 		hpc_ptr->ctlr_id = ctlr_id;
+ 		hpc_ptr->ctlr_relative_id = ctlr;
+@@ -910,8 +909,6 @@ error:
+ 	kfree(tmp_slot);
+ error_no_slot:
+ 	free_ebda_hpc(hpc_ptr);
+-error_no_hpc:
+-	iounmap(io_mem);
+ 	return rc;
+ }
+ 
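The ibmphp change resolves the double-iounmap noted in the TODO hunk above by giving io_mem a single owner: ebda_rsrc_controller() no longer unmaps a region it did not map. A sketch of the single-owner rule, with hypothetical EBDA_BASE/EBDA_SIZE constants and helper():

#include <linux/errno.h>
#include <linux/io.h>

#define EBDA_BASE	0x0009fc00	/* hypothetical */
#define EBDA_SIZE	0x400		/* hypothetical */

int helper(void __iomem *io_mem);	/* must not iounmap() on failure */

int access_region(void)
{
	void __iomem *io_mem;
	int rc;

	io_mem = ioremap(EBDA_BASE, EBDA_SIZE);
	if (!io_mem)
		return -ENOMEM;

	rc = helper(io_mem);	/* errors propagate; ownership stays here */

	iounmap(io_mem);	/* exactly one unmap, on every path */
	return rc;
}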
+diff --git a/drivers/pci/of.c b/drivers/pci/of.c
+index a143b02b2dcdf..d84381ce82b52 100644
+--- a/drivers/pci/of.c
++++ b/drivers/pci/of.c
+@@ -310,7 +310,7 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
+ 	/* Check for ranges property */
+ 	err = of_pci_range_parser_init(&parser, dev_node);
+ 	if (err)
+-		goto failed;
++		return 0;
+ 
+ 	dev_dbg(dev, "Parsing ranges property...\n");
+ 	for_each_of_pci_range(&parser, &range) {
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index a5e6759c407b9..a4eb0c042ca3e 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -265,7 +265,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path,
+ 
+ 	*endptr = strchrnul(path, ';');
+ 
+-	wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL);
++	wpath = kmemdup_nul(path, *endptr - path, GFP_ATOMIC);
+ 	if (!wpath)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c
+index 95d4eef2c9e86..4810faa67f520 100644
+--- a/drivers/pci/pcie/ptm.c
++++ b/drivers/pci/pcie/ptm.c
+@@ -60,10 +60,8 @@ void pci_save_ptm_state(struct pci_dev *dev)
+ 		return;
+ 
+ 	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_PTM);
+-	if (!save_state) {
+-		pci_err(dev, "no suspend buffer for PTM\n");
++	if (!save_state)
+ 		return;
+-	}
+ 
+ 	cap = (u16 *)&save_state->cap.data[0];
+ 	pci_read_config_word(dev, ptm + PCI_PTM_CTRL, cap);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 1905ee0297a4c..8c3c1ef92171f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4616,6 +4616,18 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+ 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+ 
++/*
++ * Each of these NXP Root Ports is in a Root Complex with a unique segment
++ * number and does provide isolation features to disable peer transactions
++ * and validate bus numbers in requests, but does not provide an ACS
++ * capability.
++ */
++static int pci_quirk_nxp_rp_acs(struct pci_dev *dev, u16 acs_flags)
++{
++	return pci_acs_ctrl_enabled(acs_flags,
++		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++}
++
+ static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ 	if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
+@@ -4842,6 +4854,10 @@ static const struct pci_dev_acs_enabled {
+ 	{ 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */
+ 	/* Cavium ThunderX */
+ 	{ PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs },
++	/* Cavium multi-function devices */
++	{ PCI_VENDOR_ID_CAVIUM, 0xA026, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_CAVIUM, 0xA059, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_CAVIUM, 0xA060, pci_quirk_mf_endpoint_acs },
+ 	/* APM X-Gene */
+ 	{ PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs },
+ 	/* Ampere Computing */
+@@ -4862,6 +4878,39 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
+ 	{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
++	/* NXP root ports, xx=16, 12, or 08 cores */
++	/* LX2xx0A : without security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d81, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da1, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d83, pci_quirk_nxp_rp_acs },
++	/* LX2xx0C : security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d80, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da0, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d82, pci_quirk_nxp_rp_acs },
++	/* LX2xx0E : security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d90, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db0, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d92, pci_quirk_nxp_rp_acs },
++	/* LX2xx0N : without security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d91, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db1, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d93, pci_quirk_nxp_rp_acs },
++	/* LX2xx2A : without security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d89, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da9, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d8b, pci_quirk_nxp_rp_acs },
++	/* LX2xx2C : security features + CAN-FD */
++	{ PCI_VENDOR_ID_NXP, 0x8d88, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8da8, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d8a, pci_quirk_nxp_rp_acs },
++	/* LX2xx2E : security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d98, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db8, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d9a, pci_quirk_nxp_rp_acs },
++	/* LX2xx2N : without security features + CAN */
++	{ PCI_VENDOR_ID_NXP, 0x8d99, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8db9, pci_quirk_nxp_rp_acs },
++	{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
+ 	/* Zhaoxin Root/Downstream Ports */
+ 	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+ 	{ 0 }
+@@ -5350,7 +5399,7 @@ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda);
+ 
+ /*
+- * Create device link for NVIDIA GPU with integrated USB xHCI Host
++ * Create device link for GPUs with integrated USB xHCI Host
+  * controller to VGA.
+  */
+ static void quirk_gpu_usb(struct pci_dev *usb)
+@@ -5359,9 +5408,11 @@ static void quirk_gpu_usb(struct pci_dev *usb)
+ }
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
++			      PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb);
+ 
+ /*
+- * Create device link for NVIDIA GPU with integrated Type-C UCSI controller
++ * Create device link for GPUs with integrated Type-C UCSI controller
+  * to VGA. Currently there is no class code defined for UCSI device over PCI
+  * so using UNKNOWN class for now and it will be updated when UCSI
+  * over PCI gets a class code.
+@@ -5374,6 +5425,9 @@ static void quirk_gpu_usb_typec_ucsi(struct pci_dev *ucsi)
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ 			      PCI_CLASS_SERIAL_UNKNOWN, 8,
+ 			      quirk_gpu_usb_typec_ucsi);
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
++			      PCI_CLASS_SERIAL_UNKNOWN, 8,
++			      quirk_gpu_usb_typec_ucsi);
+ 
+ /*
+  * Enable the NVIDIA GPU integrated HDA controller if the BIOS left it
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index f1cbc6b2edbb3..ebadc6c08e116 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -142,18 +142,6 @@ static const struct wcnss_data pronto_v2_data = {
+ 	.num_vregs = 1,
+ };
+ 
+-void qcom_wcnss_assign_iris(struct qcom_wcnss *wcnss,
+-			    struct qcom_iris *iris,
+-			    bool use_48mhz_xo)
+-{
+-	mutex_lock(&wcnss->iris_lock);
+-
+-	wcnss->iris = iris;
+-	wcnss->use_48mhz_xo = use_48mhz_xo;
+-
+-	mutex_unlock(&wcnss->iris_lock);
+-}
+-
+ static int wcnss_load(struct rproc *rproc, const struct firmware *fw)
+ {
+ 	struct qcom_wcnss *wcnss = (struct qcom_wcnss *)rproc->priv;
+@@ -639,12 +627,20 @@ static int wcnss_probe(struct platform_device *pdev)
+ 		goto detach_pds;
+ 	}
+ 
++	wcnss->iris = qcom_iris_probe(&pdev->dev, &wcnss->use_48mhz_xo);
++	if (IS_ERR(wcnss->iris)) {
++		ret = PTR_ERR(wcnss->iris);
++		goto detach_pds;
++	}
++
+ 	ret = rproc_add(rproc);
+ 	if (ret)
+-		goto detach_pds;
++		goto remove_iris;
+ 
+-	return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
++	return 0;
+ 
++remove_iris:
++	qcom_iris_remove(wcnss->iris);
+ detach_pds:
+ 	wcnss_release_pds(wcnss);
+ free_rproc:
+@@ -657,7 +653,7 @@ static int wcnss_remove(struct platform_device *pdev)
+ {
+ 	struct qcom_wcnss *wcnss = platform_get_drvdata(pdev);
+ 
+-	of_platform_depopulate(&pdev->dev);
++	qcom_iris_remove(wcnss->iris);
+ 
+ 	rproc_del(wcnss->rproc);
+ 
+@@ -686,28 +682,7 @@ static struct platform_driver wcnss_driver = {
+ 	},
+ };
+ 
+-static int __init wcnss_init(void)
+-{
+-	int ret;
+-
+-	ret = platform_driver_register(&wcnss_driver);
+-	if (ret)
+-		return ret;
+-
+-	ret = platform_driver_register(&qcom_iris_driver);
+-	if (ret)
+-		platform_driver_unregister(&wcnss_driver);
+-
+-	return ret;
+-}
+-module_init(wcnss_init);
+-
+-static void __exit wcnss_exit(void)
+-{
+-	platform_driver_unregister(&qcom_iris_driver);
+-	platform_driver_unregister(&wcnss_driver);
+-}
+-module_exit(wcnss_exit);
++module_platform_driver(wcnss_driver);
+ 
+ MODULE_DESCRIPTION("Qualcomm Peripheral Image Loader for Wireless Subsystem");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/remoteproc/qcom_wcnss.h b/drivers/remoteproc/qcom_wcnss.h
+index 62c8682d0a92d..6d01ee6afa7f8 100644
+--- a/drivers/remoteproc/qcom_wcnss.h
++++ b/drivers/remoteproc/qcom_wcnss.h
+@@ -17,9 +17,9 @@ struct wcnss_vreg_info {
+ 	bool super_turbo;
+ };
+ 
++struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo);
++void qcom_iris_remove(struct qcom_iris *iris);
+ int qcom_iris_enable(struct qcom_iris *iris);
+ void qcom_iris_disable(struct qcom_iris *iris);
+ 
+-void qcom_wcnss_assign_iris(struct qcom_wcnss *wcnss, struct qcom_iris *iris, bool use_48mhz_xo);
+-
+ #endif
+diff --git a/drivers/remoteproc/qcom_wcnss_iris.c b/drivers/remoteproc/qcom_wcnss_iris.c
+index 169acd305ae39..09720ddddc857 100644
+--- a/drivers/remoteproc/qcom_wcnss_iris.c
++++ b/drivers/remoteproc/qcom_wcnss_iris.c
+@@ -17,7 +17,7 @@
+ #include "qcom_wcnss.h"
+ 
+ struct qcom_iris {
+-	struct device *dev;
++	struct device dev;
+ 
+ 	struct clk *xo_clk;
+ 
+@@ -75,7 +75,7 @@ int qcom_iris_enable(struct qcom_iris *iris)
+ 
+ 	ret = clk_prepare_enable(iris->xo_clk);
+ 	if (ret) {
+-		dev_err(iris->dev, "failed to enable xo clk\n");
++		dev_err(&iris->dev, "failed to enable xo clk\n");
+ 		goto disable_regulators;
+ 	}
+ 
+@@ -93,43 +93,90 @@ void qcom_iris_disable(struct qcom_iris *iris)
+ 	regulator_bulk_disable(iris->num_vregs, iris->vregs);
+ }
+ 
+-static int qcom_iris_probe(struct platform_device *pdev)
++static const struct of_device_id iris_of_match[] = {
++	{ .compatible = "qcom,wcn3620", .data = &wcn3620_data },
++	{ .compatible = "qcom,wcn3660", .data = &wcn3660_data },
++	{ .compatible = "qcom,wcn3660b", .data = &wcn3680_data },
++	{ .compatible = "qcom,wcn3680", .data = &wcn3680_data },
++	{}
++};
++
++static void qcom_iris_release(struct device *dev)
++{
++	struct qcom_iris *iris = container_of(dev, struct qcom_iris, dev);
++
++	of_node_put(iris->dev.of_node);
++	kfree(iris);
++}
++
++struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo)
+ {
++	const struct of_device_id *match;
+ 	const struct iris_data *data;
+-	struct qcom_wcnss *wcnss;
++	struct device_node *of_node;
+ 	struct qcom_iris *iris;
+ 	int ret;
+ 	int i;
+ 
+-	iris = devm_kzalloc(&pdev->dev, sizeof(struct qcom_iris), GFP_KERNEL);
+-	if (!iris)
+-		return -ENOMEM;
++	of_node = of_get_child_by_name(parent->of_node, "iris");
++	if (!of_node) {
++		dev_err(parent, "No child node \"iris\" found\n");
++		return ERR_PTR(-EINVAL);
++	}
++
++	iris = kzalloc(sizeof(*iris), GFP_KERNEL);
++	if (!iris) {
++		of_node_put(of_node);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	device_initialize(&iris->dev);
++	iris->dev.parent = parent;
++	iris->dev.release = qcom_iris_release;
++	iris->dev.of_node = of_node;
++
++	dev_set_name(&iris->dev, "%s.iris", dev_name(parent));
++
++	ret = device_add(&iris->dev);
++	if (ret) {
++		put_device(&iris->dev);
++		return ERR_PTR(ret);
++	}
++
++	match = of_match_device(iris_of_match, &iris->dev);
++	if (!match) {
++		dev_err(&iris->dev, "no matching compatible for iris\n");
++		ret = -EINVAL;
++		goto err_device_del;
++	}
+ 
+-	data = of_device_get_match_data(&pdev->dev);
+-	wcnss = dev_get_drvdata(pdev->dev.parent);
++	data = match->data;
+ 
+-	iris->xo_clk = devm_clk_get(&pdev->dev, "xo");
++	iris->xo_clk = devm_clk_get(&iris->dev, "xo");
+ 	if (IS_ERR(iris->xo_clk)) {
+-		if (PTR_ERR(iris->xo_clk) != -EPROBE_DEFER)
+-			dev_err(&pdev->dev, "failed to acquire xo clk\n");
+-		return PTR_ERR(iris->xo_clk);
++		ret = PTR_ERR(iris->xo_clk);
++		if (ret != -EPROBE_DEFER)
++			dev_err(&iris->dev, "failed to acquire xo clk\n");
++		goto err_device_del;
+ 	}
+ 
+ 	iris->num_vregs = data->num_vregs;
+-	iris->vregs = devm_kcalloc(&pdev->dev,
++	iris->vregs = devm_kcalloc(&iris->dev,
+ 				   iris->num_vregs,
+ 				   sizeof(struct regulator_bulk_data),
+ 				   GFP_KERNEL);
+-	if (!iris->vregs)
+-		return -ENOMEM;
++	if (!iris->vregs) {
++		ret = -ENOMEM;
++		goto err_device_del;
++	}
+ 
+ 	for (i = 0; i < iris->num_vregs; i++)
+ 		iris->vregs[i].supply = data->vregs[i].name;
+ 
+-	ret = devm_regulator_bulk_get(&pdev->dev, iris->num_vregs, iris->vregs);
++	ret = devm_regulator_bulk_get(&iris->dev, iris->num_vregs, iris->vregs);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "failed to get regulators\n");
+-		return ret;
++		dev_err(&iris->dev, "failed to get regulators\n");
++		goto err_device_del;
+ 	}
+ 
+ 	for (i = 0; i < iris->num_vregs; i++) {
+@@ -143,34 +190,17 @@ static int qcom_iris_probe(struct platform_device *pdev)
+ 					   data->vregs[i].load_uA);
+ 	}
+ 
+-	qcom_wcnss_assign_iris(wcnss, iris, data->use_48mhz_xo);
+-
+-	return 0;
+-}
++	*use_48mhz_xo = data->use_48mhz_xo;
+ 
+-static int qcom_iris_remove(struct platform_device *pdev)
+-{
+-	struct qcom_wcnss *wcnss = dev_get_drvdata(pdev->dev.parent);
++	return iris;
+ 
+-	qcom_wcnss_assign_iris(wcnss, NULL, false);
++err_device_del:
++	device_del(&iris->dev);
+ 
+-	return 0;
++	return ERR_PTR(ret);
+ }
+ 
+-static const struct of_device_id iris_of_match[] = {
+-	{ .compatible = "qcom,wcn3620", .data = &wcn3620_data },
+-	{ .compatible = "qcom,wcn3660", .data = &wcn3660_data },
+-	{ .compatible = "qcom,wcn3660b", .data = &wcn3680_data },
+-	{ .compatible = "qcom,wcn3680", .data = &wcn3680_data },
+-	{}
+-};
+-MODULE_DEVICE_TABLE(of, iris_of_match);
+-
+-struct platform_driver qcom_iris_driver = {
+-	.probe = qcom_iris_probe,
+-	.remove = qcom_iris_remove,
+-	.driver = {
+-		.name = "qcom-iris",
+-		.of_match_table = iris_of_match,
+-	},
+-};
++void qcom_iris_remove(struct qcom_iris *iris)
++{
++	device_del(&iris->dev);
++}
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 670fd8a2970e3..6545afb2f20eb 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -1053,7 +1053,9 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACK the rtc irq here
+ 	 */
+ 	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
++		local_irq_disable();
+ 		cmos_interrupt(0, (void *)cmos->rtc);
++		local_irq_enable();
+ 		return;
+ 	}
+ 
+diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c
+index 2f3515fa242a3..f3d5c7f4c13d2 100644
+--- a/drivers/s390/char/sclp_early.c
++++ b/drivers/s390/char/sclp_early.c
+@@ -45,13 +45,14 @@ static void __init sclp_early_facilities_detect(void)
+ 	sclp.has_gisaf = !!(sccb->fac118 & 0x08);
+ 	sclp.has_hvs = !!(sccb->fac119 & 0x80);
+ 	sclp.has_kss = !!(sccb->fac98 & 0x01);
+-	sclp.has_sipl = !!(sccb->cbl & 0x4000);
+ 	if (sccb->fac85 & 0x02)
+ 		S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP;
+ 	if (sccb->fac91 & 0x40)
+ 		S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_GUEST;
+ 	if (sccb->cpuoff > 134)
+ 		sclp.has_diag318 = !!(sccb->byte_134 & 0x80);
++	if (sccb->cpuoff > 137)
++		sclp.has_sipl = !!(sccb->cbl & 0x4000);
+ 	sclp.rnmax = sccb->rnmax ? sccb->rnmax : sccb->rnmax2;
+ 	sclp.rzm = sccb->rnsize ? sccb->rnsize : sccb->rnsize2;
+ 	sclp.rzm <<= 20;
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 6414bd5741b87..a38b0c39ea4be 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -467,7 +467,7 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 		.num = nvq->batched_xdp,
+ 		.ptr = nvq->xdp,
+ 	};
+-	int err;
++	int i, err;
+ 
+ 	if (nvq->batched_xdp == 0)
+ 		goto signal_used;
+@@ -476,6 +476,15 @@ static void vhost_tx_batch(struct vhost_net *net,
+ 	err = sock->ops->sendmsg(sock, msghdr, 0);
+ 	if (unlikely(err < 0)) {
+ 		vq_err(&nvq->vq, "Fail to batch sending packets\n");
++
++		/* free pages owned by XDP; since this is an unlikely error path,
++		 * keep it simple and avoid more complex bulk update for the
++		 * used pages
++		 */
++		for (i = 0; i < nvq->batched_xdp; ++i)
++			put_page(virt_to_head_page(nvq->xdp[i].data));
++		nvq->batched_xdp = 0;
++		nvq->done_idx = 0;
+ 		return;
+ 	}
+ 
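
The error path added above must give back every page still owned by the
batched XDP buffers before resetting the counters. A rough userspace model
of that "unwind the whole batch on failure" pattern follows; struct batch,
fake_send() and its failure knob are invented for illustration, with
malloc()/free() standing in for page references:

#include <stdio.h>
#include <stdlib.h>

#define BATCH_MAX 8

struct batch {
	void *bufs[BATCH_MAX];
	int n;
};

/* invented stand-in for sock->ops->sendmsg(); fails on demand */
static int fake_send(struct batch *b, int fail)
{
	return fail ? -1 : b->n;
}

static void batch_flush(struct batch *b, int fail)
{
	if (fake_send(b, fail) < 0) {
		/* on error, drop every buffer queued so far and reset
		 * the counter, mirroring the put_page() loop above
		 */
		for (int i = 0; i < b->n; i++)
			free(b->bufs[i]);
		b->n = 0;
		return;
	}
	b->n = 0;	/* on success the receiver owns the buffers */
}

int main(void)
{
	struct batch b = { .n = 0 };

	for (int i = 0; i < 4; i++)
		b.bufs[b.n++] = malloc(64);
	batch_flush(&b, 1);	/* simulate a sendmsg() failure */
	printf("buffers still batched after failed flush: %d\n", b.n);
	return 0;
}
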
+diff --git a/drivers/video/backlight/ktd253-backlight.c b/drivers/video/backlight/ktd253-backlight.c
+index a7df5bcca9da5..37aa5a6695309 100644
+--- a/drivers/video/backlight/ktd253-backlight.c
++++ b/drivers/video/backlight/ktd253-backlight.c
+@@ -25,6 +25,7 @@
+ 
+ #define KTD253_T_LOW_NS (200 + 10) /* Additional 10ns as safety factor */
+ #define KTD253_T_HIGH_NS (200 + 10) /* Additional 10ns as safety factor */
++#define KTD253_T_OFF_CRIT_NS 100000 /* 100 us, now it doesn't look good */
+ #define KTD253_T_OFF_MS 3
+ 
+ struct ktd253_backlight {
+@@ -34,13 +35,50 @@ struct ktd253_backlight {
+ 	u16 ratio;
+ };
+ 
++static void ktd253_backlight_set_max_ratio(struct ktd253_backlight *ktd253)
++{
++	gpiod_set_value_cansleep(ktd253->gpiod, 1);
++	ndelay(KTD253_T_HIGH_NS);
++	/* We always fall back to this when we power on */
++}
++
++static int ktd253_backlight_stepdown(struct ktd253_backlight *ktd253)
++{
++	/*
++	 * These GPIO operations absolutely can NOT sleep so no _cansleep
++	 * suffixes, and no using GPIO expanders on slow buses for this!
++	 *
++	 * The maximum number of cycles of the loop is 32, so the time taken
++	 * should nominally be:
++	 * (T_LOW_NS + T_HIGH_NS + loop_time) * 32
++	 *
++	 * Architectures do not always support ndelay() and we will get a few us
++	 * instead. If we get to a critical time limit an interrupt has likely
++	 * occurred in the low part of the loop and we need to restart from the
++	 * top so we have the backlight in a known state.
++	 */
++	u64 ns;
++
++	ns = ktime_get_ns();
++	gpiod_set_value(ktd253->gpiod, 0);
++	ndelay(KTD253_T_LOW_NS);
++	gpiod_set_value(ktd253->gpiod, 1);
++	ns = ktime_get_ns() - ns;
++	if (ns >= KTD253_T_OFF_CRIT_NS) {
++		dev_err(ktd253->dev, "PCM on backlight took too long (%llu ns)\n", ns);
++		return -EAGAIN;
++	}
++	ndelay(KTD253_T_HIGH_NS);
++	return 0;
++}
++
+ static int ktd253_backlight_update_status(struct backlight_device *bl)
+ {
+ 	struct ktd253_backlight *ktd253 = bl_get_data(bl);
+ 	int brightness = backlight_get_brightness(bl);
+ 	u16 target_ratio;
+ 	u16 current_ratio = ktd253->ratio;
+-	unsigned long flags;
++	int ret;
+ 
+ 	dev_dbg(ktd253->dev, "new brightness/ratio: %d/32\n", brightness);
+ 
+@@ -62,37 +100,34 @@ static int ktd253_backlight_update_status(struct backlight_device *bl)
+ 	}
+ 
+ 	if (current_ratio == 0) {
+-		gpiod_set_value_cansleep(ktd253->gpiod, 1);
+-		ndelay(KTD253_T_HIGH_NS);
+-		/* We always fall back to this when we power on */
++		ktd253_backlight_set_max_ratio(ktd253);
+ 		current_ratio = KTD253_MAX_RATIO;
+ 	}
+ 
+-	/*
+-	 * WARNING:
+-	 * The loop to set the correct current level is performed
+-	 * with interrupts disabled as it is timing critical.
+-	 * The maximum number of cycles of the loop is 32
+-	 * so the time taken will be (T_LOW_NS + T_HIGH_NS + loop_time) * 32,
+-	 */
+-	local_irq_save(flags);
+ 	while (current_ratio != target_ratio) {
+ 		/*
+ 		 * These GPIO operations absolutely can NOT sleep so no
+ 		 * _cansleep suffixes, and no using GPIO expanders on
+ 		 * slow buses for this!
+ 		 */
+-		gpiod_set_value(ktd253->gpiod, 0);
+-		ndelay(KTD253_T_LOW_NS);
+-		gpiod_set_value(ktd253->gpiod, 1);
+-		ndelay(KTD253_T_HIGH_NS);
+-		/* After 1/32 we loop back to 32/32 */
+-		if (current_ratio == KTD253_MIN_RATIO)
++		ret = ktd253_backlight_stepdown(ktd253);
++		if (ret == -EAGAIN) {
++			/*
++			 * Something disturbed the backlight setting code when
++			 * running so we need to bring the PWM back to a known
++			 * state. This shouldn't happen too much.
++			 */
++			gpiod_set_value_cansleep(ktd253->gpiod, 0);
++			msleep(KTD253_T_OFF_MS);
++			ktd253_backlight_set_max_ratio(ktd253);
++			current_ratio = KTD253_MAX_RATIO;
++		} else if (current_ratio == KTD253_MIN_RATIO) {
++			/* After 1/32 we loop back to 32/32 */
+ 			current_ratio = KTD253_MAX_RATIO;
+-		else
++		} else {
+ 			current_ratio--;
++		}
+ 	}
+-	local_irq_restore(flags);
+ 	ktd253->ratio = current_ratio;
+ 
+ 	dev_dbg(ktd253->dev, "new ratio set to %d/32\n", target_ratio);
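
The rework above turns a long interrupts-off section into a measured pulse:
each step drives the GPIO low and back high, and if the low phase took
100 us or more (an interrupt probably landed mid-pulse) the driver restarts
from the known full-brightness state. A compilable userspace sketch of that
measure-and-retry idea; the 100 us budget comes from the patch, while the
gpiod calls are replaced by a simulated disturbance and clock_gettime()
stands in for ktime_get_ns():

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define CRIT_NS 100000ULL	/* 100 us, as in KTD253_T_OFF_CRIT_NS */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* one timing-critical step; returns nonzero if it overran the budget */
static int step(int disturb)
{
	uint64_t t = now_ns();

	if (disturb)
		usleep(200);	/* simulate a preempting interrupt */
	return (now_ns() - t) >= CRIT_NS;
}

int main(void)
{
	int ratio = 32, target = 10, restarts = 0;

	while (ratio != target) {
		if (step(restarts == 0 && ratio == 20)) {
			ratio = 32;	/* back to the known state */
			restarts++;
			continue;
		}
		ratio = (ratio == 1) ? 32 : ratio - 1;	/* 1/32 wraps to 32/32 */
	}
	printf("reached %d/32 after %d restart(s)\n", ratio, restarts);
	return 0;
}
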
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 3bab324852732..0cc07d957b643 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -1096,6 +1096,8 @@ static void watchdog_cdev_unregister(struct watchdog_device *wdd)
+ 		watchdog_stop(wdd);
+ 	}
+ 
++	watchdog_hrtimer_pretimeout_stop(wdd);
++
+ 	mutex_lock(&wd_data->lock);
+ 	wd_data->wdd = NULL;
+ 	wdd->wd_data = NULL;
+@@ -1103,7 +1105,6 @@ static void watchdog_cdev_unregister(struct watchdog_device *wdd)
+ 
+ 	hrtimer_cancel(&wd_data->timer);
+ 	kthread_cancel_work_sync(&wd_data->work);
+-	watchdog_hrtimer_pretimeout_stop(wdd);
+ 
+ 	put_device(&wd_data->dev);
+ }
+@@ -1172,7 +1173,10 @@ int watchdog_set_last_hw_keepalive(struct watchdog_device *wdd,
+ 
+ 	wd_data->last_hw_keepalive = ktime_sub(now, ms_to_ktime(last_ping_ms));
+ 
+-	return __watchdog_ping(wdd);
++	if (watchdog_hw_running(wdd) && handle_boot_enabled)
++		return __watchdog_ping(wdd);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(watchdog_set_last_hw_keepalive);
+ 
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 24d11861ac7d8..dbb18dc956f34 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -211,12 +211,11 @@ error:
+ 	if (repeat--) {
+ 		/* Min is 2MB */
+ 		nslabs = max(1024UL, (nslabs >> 1));
+-		pr_info("Lowering to %luMB\n",
+-			(nslabs << IO_TLB_SHIFT) >> 20);
++		bytes = nslabs << IO_TLB_SHIFT;
++		pr_info("Lowering to %luMB\n", bytes >> 20);
+ 		goto retry;
+ 	}
+ 	pr_err("%s (rc:%d)\n", xen_swiotlb_error(m_ret), rc);
+-	free_pages((unsigned long)start, order);
+ 	return rc;
+ }
+ 
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 1c8f79b3dd065..dde341a6388a1 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -288,10 +288,10 @@ void fuse_request_end(struct fuse_req *req)
+ 
+ 	/*
+ 	 * test_and_set_bit() implies smp_mb() between bit
+-	 * changing and below intr_entry check. Pairs with
++	 * changing and below FR_INTERRUPTED check. Pairs with
+ 	 * smp_mb() from queue_interrupt().
+ 	 */
+-	if (!list_empty(&req->intr_entry)) {
++	if (test_bit(FR_INTERRUPTED, &req->flags)) {
+ 		spin_lock(&fiq->lock);
+ 		list_del_init(&req->intr_entry);
+ 		spin_unlock(&fiq->lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index c5d4638f6d7fd..43aaa35664315 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2683,7 +2683,8 @@ static bool io_file_supports_async(struct io_kiocb *req, int rw)
+ 	return __io_file_supports_async(req->file, rw);
+ }
+ 
+-static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
++static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
++		      int rw)
+ {
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	struct kiocb *kiocb = &req->rw.kiocb;
+@@ -2705,8 +2706,13 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (unlikely(ret))
+ 		return ret;
+ 
+-	/* don't allow async punt for O_NONBLOCK or RWF_NOWAIT */
+-	if ((kiocb->ki_flags & IOCB_NOWAIT) || (file->f_flags & O_NONBLOCK))
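
Note what the new length check accepts: a payload size that is not itself a
multiple of four is fine as long as the overall length accounts for padding
up to the next 4-byte boundary, whereas the old "size & 3" test rejected it
outright. A quick demonstration of the arithmetic; the rounding macro is
reproduced here (with the usual power-of-two ALIGN() semantics) so the
snippet stands alone:

#include <stdio.h>
#include <stddef.h>

/* same power-of-two rounding the kernel's ALIGN() performs */
#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((size_t)(a) - 1))

int main(void)
{
	size_t hdrlen = 12;	/* arbitrary example header length */

	for (size_t size = 1; size <= 8; size++)
		printf("size=%zu padded=%zu expected len=%zu old check %s\n",
		       size, ALIGN(size, 4), ALIGN(size, 4) + hdrlen,
		       (size & 3) ? "rejects" : "accepts");
	return 0;
}
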
++	/*
++	 * If the file is marked O_NONBLOCK, still allow retry for it if it
++	 * supports async. Otherwise it's impossible to use O_NONBLOCK files
++	 * reliably. If not, or if IOCB_NOWAIT is set, don't retry.
++	 */
++	if ((kiocb->ki_flags & IOCB_NOWAIT) ||
++	    ((file->f_flags & O_NONBLOCK) && !io_file_supports_async(req, rw)))
+ 		req->flags |= REQ_F_NOWAIT;
+ 
+ 	ioprio = READ_ONCE(sqe->ioprio);
+@@ -3107,12 +3113,15 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ 				ret = nr;
+ 			break;
+ 		}
++		if (!iov_iter_is_bvec(iter)) {
++			iov_iter_advance(iter, nr);
++		} else {
++			req->rw.len -= nr;
++			req->rw.addr += nr;
++		}
+ 		ret += nr;
+ 		if (nr != iovec.iov_len)
+ 			break;
+-		req->rw.len -= nr;
+-		req->rw.addr += nr;
+-		iov_iter_advance(iter, nr);
+ 	}
+ 
+ 	return ret;
+@@ -3190,7 +3199,7 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	if (unlikely(!(req->file->f_mode & FMODE_READ)))
+ 		return -EBADF;
+-	return io_prep_rw(req, sqe);
++	return io_prep_rw(req, sqe, READ);
+ }
+ 
+ /*
+@@ -3277,6 +3286,12 @@ static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
+ 		return -EINVAL;
+ }
+ 
++static bool need_read_all(struct io_kiocb *req)
++{
++	return req->flags & REQ_F_ISREG ||
++		S_ISBLK(file_inode(req->file)->i_mode);
++}
++
+ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+@@ -3331,7 +3346,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ 	} else if (ret == -EIOCBQUEUED) {
+ 		goto out_free;
+ 	} else if (ret <= 0 || ret == io_size || !force_nonblock ||
+-		   (req->flags & REQ_F_NOWAIT) || !(req->flags & REQ_F_ISREG)) {
++		   (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
+ 		/* read all, failed, already did sync or don't want to retry */
+ 		goto done;
+ 	}
+@@ -3379,7 +3394,7 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
+ 		return -EBADF;
+-	return io_prep_rw(req, sqe);
++	return io_prep_rw(req, sqe, WRITE);
+ }
+ 
+ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 540b377ca8f61..acbed2ecf6e8c 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1740,8 +1740,9 @@ static inline void pci_disable_device(struct pci_dev *dev) { }
+ static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+ static inline int pci_assign_resource(struct pci_dev *dev, int i)
+ { return -EBUSY; }
+-static inline int __pci_register_driver(struct pci_driver *drv,
+-					struct module *owner)
++static inline int __must_check __pci_register_driver(struct pci_driver *drv,
++						     struct module *owner,
++						     const char *mod_name)
+ { return 0; }
+ static inline int pci_register_driver(struct pci_driver *drv)
+ { return 0; }
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 4bac1831de802..1a9b8589391c0 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2451,7 +2451,8 @@
+ #define PCI_VENDOR_ID_TDI               0x192E
+ #define PCI_DEVICE_ID_TDI_EHCI          0x0101
+ 
+-#define PCI_VENDOR_ID_FREESCALE		0x1957
++#define PCI_VENDOR_ID_FREESCALE		0x1957	/* duplicate: NXP */
++#define PCI_VENDOR_ID_NXP		0x1957	/* duplicate: FREESCALE */
+ #define PCI_DEVICE_ID_MPC8308		0xc006
+ #define PCI_DEVICE_ID_MPC8315E		0x00b4
+ #define PCI_DEVICE_ID_MPC8315		0x00b5
+diff --git a/include/linux/phylink.h b/include/linux/phylink.h
+index afb3ded0b6912..237291196ce28 100644
+--- a/include/linux/phylink.h
++++ b/include/linux/phylink.h
+@@ -451,6 +451,9 @@ void phylink_mac_change(struct phylink *, bool up);
+ void phylink_start(struct phylink *);
+ void phylink_stop(struct phylink *);
+ 
++void phylink_suspend(struct phylink *pl, bool mac_wol);
++void phylink_resume(struct phylink *pl);
++
+ void phylink_ethtool_get_wol(struct phylink *, struct ethtool_wolinfo *);
+ int phylink_ethtool_set_wol(struct phylink *, struct ethtool_wolinfo *);
+ 
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ec8d07d88641c..f6935787e7e8b 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1394,6 +1394,7 @@ struct task_struct {
+ 					mce_whole_page : 1,
+ 					__mce_reserved : 62;
+ 	struct callback_head		mce_kill_me;
++	int				mce_count;
+ #endif
+ 
+ #ifdef CONFIG_KRETPROBES
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index b2db9cd9a73f3..4f7478c482738 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1935,7 +1935,7 @@ static inline void __skb_insert(struct sk_buff *newsk,
+ 	WRITE_ONCE(newsk->prev, prev);
+ 	WRITE_ONCE(next->prev, newsk);
+ 	WRITE_ONCE(prev->next, newsk);
+-	list->qlen++;
++	WRITE_ONCE(list->qlen, list->qlen + 1);
+ }
+ 
+ static inline void __skb_queue_splice(const struct sk_buff_head *list,
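
Replacing "list->qlen++" with WRITE_ONCE() marks the increment as a
single-copy store so lockless readers doing READ_ONCE(list->qlen), such as
skb_queue_len_lockless(), can never observe a torn or compiler-mangled
value. A userspace model of the same annotation; C11 relaxed atomics stand
in for WRITE_ONCE()/READ_ONCE(), and the queue itself is not modeled:

#include <stdatomic.h>
#include <stdio.h>

/* qlen as seen by a locked writer and by lockless readers */
static _Atomic unsigned int qlen;

static void writer_enqueue(void)
{
	/* the queue lock already orders enqueues against each other;
	 * the relaxed store only has to be non-torn for readers
	 */
	unsigned int cur = atomic_load_explicit(&qlen, memory_order_relaxed);

	atomic_store_explicit(&qlen, cur + 1, memory_order_relaxed);
}

static unsigned int reader_peek(void)
{
	/* READ_ONCE() equivalent: one non-torn load, no lock taken */
	return atomic_load_explicit(&qlen, memory_order_relaxed);
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		writer_enqueue();
	printf("qlen observed locklessly: %u\n", reader_peek());
	return 0;
}
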
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index 048d297623c9a..d833f717e8022 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -437,6 +437,11 @@ static inline bool dsa_port_is_user(struct dsa_port *dp)
+ 	return dp->type == DSA_PORT_TYPE_USER;
+ }
+ 
++static inline bool dsa_port_is_unused(struct dsa_port *dp)
++{
++	return dp->type == DSA_PORT_TYPE_UNUSED;
++}
++
+ static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p)
+ {
+ 	return dsa_to_port(ds, p)->type == DSA_PORT_TYPE_UNUSED;
+diff --git a/include/net/flow.h b/include/net/flow.h
+index 6f5e702400717..58beb16a49b8d 100644
+--- a/include/net/flow.h
++++ b/include/net/flow.h
+@@ -194,7 +194,7 @@ static inline struct flowi *flowi4_to_flowi(struct flowi4 *fl4)
+ 
+ static inline struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4)
+ {
+-	return &(flowi4_to_flowi(fl4)->u.__fl_common);
++	return &(fl4->__fl_common);
+ }
+ 
+ static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
+@@ -204,7 +204,7 @@ static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6)
+ 
+ static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6)
+ {
+-	return &(flowi6_to_flowi(fl6)->u.__fl_common);
++	return &(fl6->__fl_common);
+ }
+ 
+ static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn)
+diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
+index 79a699f106b14..ec88590b31984 100644
+--- a/include/uapi/linux/pkt_sched.h
++++ b/include/uapi/linux/pkt_sched.h
+@@ -827,6 +827,8 @@ struct tc_codel_xstats {
+ 
+ /* FQ_CODEL */
+ 
++#define FQ_CODEL_QUANTUM_MAX (1 << 20)
++
+ enum {
+ 	TCA_FQ_CODEL_UNSPEC,
+ 	TCA_FQ_CODEL_TARGET,
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 1cb1f9b8392e2..e5c4aca620c58 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10192,7 +10192,7 @@ static void perf_event_addr_filters_apply(struct perf_event *event)
+ 		return;
+ 
+ 	if (ifh->nr_file_filters) {
+-		mm = get_task_mm(event->ctx->task);
++		mm = get_task_mm(task);
+ 		if (!mm)
+ 			goto restart;
+ 
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index 94ef2d099e322..d713714cba67f 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -205,12 +205,15 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode,
+ 			pr_err("Failed to apply filter: %s\n", buf);
+ 	}
+ 
+-	xbc_node_for_each_array_value(enode, "actions", anode, p) {
+-		if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
+-			pr_err("action string is too long: %s\n", p);
+-		else if (trigger_process_regex(file, buf) < 0)
+-			pr_err("Failed to apply an action: %s\n", buf);
+-	}
++	if (IS_ENABLED(CONFIG_HIST_TRIGGERS)) {
++		xbc_node_for_each_array_value(enode, "actions", anode, p) {
++			if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf))
++				pr_err("action string is too long: %s\n", p);
++			else if (trigger_process_regex(file, buf) < 0)
++				pr_err("Failed to apply an action: %s\n", buf);
++		}
++	} else if (xbc_node_find_value(enode, "actions", NULL))
++		pr_err("Failed to apply event actions because CONFIG_HIST_TRIGGERS is not set.\n");
+ 
+ 	if (xbc_node_find_value(enode, "enable", NULL)) {
+ 		if (trace_event_enable_disable(file, 1, 0) < 0)
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index ea6178cb5e334..032191977e34c 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -647,7 +647,11 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
+ 	/* Register new event */
+ 	ret = register_kprobe_event(tk);
+ 	if (ret) {
+-		pr_warn("Failed to register probe event(%d)\n", ret);
++		if (ret == -EEXIST) {
++			trace_probe_log_set_index(0);
++			trace_probe_log_err(0, EVENT_EXIST);
++		} else
++			pr_warn("Failed to register probe event(%d)\n", ret);
+ 		goto end;
+ 	}
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 15413ad7cef2b..0e29bb14fc8be 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -1029,11 +1029,36 @@ error:
+ 	return ret;
+ }
+ 
++static struct trace_event_call *
++find_trace_event_call(const char *system, const char *event_name)
++{
++	struct trace_event_call *tp_event;
++	const char *name;
++
++	list_for_each_entry(tp_event, &ftrace_events, list) {
++		if (!tp_event->class->system ||
++		    strcmp(system, tp_event->class->system))
++			continue;
++		name = trace_event_name(tp_event);
++		if (!name || strcmp(event_name, name))
++			continue;
++		return tp_event;
++	}
++
++	return NULL;
++}
++
+ int trace_probe_register_event_call(struct trace_probe *tp)
+ {
+ 	struct trace_event_call *call = trace_probe_event_call(tp);
+ 	int ret;
+ 
++	lockdep_assert_held(&event_mutex);
++
++	if (find_trace_event_call(trace_probe_group_name(tp),
++				  trace_probe_name(tp)))
++		return -EEXIST;
++
+ 	ret = register_trace_event(&call->event);
+ 	if (!ret)
+ 		return -ENODEV;
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 227d518e5ba52..9f14186d132ed 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -399,6 +399,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
+ 	C(NO_EVENT_NAME,	"Event name is not specified"),		\
+ 	C(EVENT_TOO_LONG,	"Event name is too long"),		\
+ 	C(BAD_EVENT_NAME,	"Event name must follow the same rules as C identifiers"), \
++	C(EVENT_EXIST,		"Given group/event name is already used by another event"), \
+ 	C(RETVAL_ON_PROBE,	"$retval is not available on probe"),	\
+ 	C(BAD_STACK_NUM,	"Invalid stack number"),		\
+ 	C(BAD_ARG_NUM,		"Invalid argument number"),		\
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 9b50869a5ddb5..957244ee07c8d 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -514,7 +514,11 @@ static int register_trace_uprobe(struct trace_uprobe *tu)
+ 
+ 	ret = register_uprobe_event(tu);
+ 	if (ret) {
+-		pr_warn("Failed to register probe event(%d)\n", ret);
++		if (ret == -EEXIST) {
++			trace_probe_log_set_index(0);
++			trace_probe_log_err(0, EVENT_EXIST);
++		} else
++			pr_warn("Failed to register probe event(%d)\n", ret);
+ 		goto end;
+ 	}
+ 
+diff --git a/net/caif/chnl_net.c b/net/caif/chnl_net.c
+index 37b67194c0dfe..414dc5671c45e 100644
+--- a/net/caif/chnl_net.c
++++ b/net/caif/chnl_net.c
+@@ -53,20 +53,6 @@ struct chnl_net {
+ 	enum caif_states state;
+ };
+ 
+-static void robust_list_del(struct list_head *delete_node)
+-{
+-	struct list_head *list_node;
+-	struct list_head *n;
+-	ASSERT_RTNL();
+-	list_for_each_safe(list_node, n, &chnl_net_list) {
+-		if (list_node == delete_node) {
+-			list_del(list_node);
+-			return;
+-		}
+-	}
+-	WARN_ON(1);
+-}
+-
+ static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt)
+ {
+ 	struct sk_buff *skb;
+@@ -364,6 +350,7 @@ static int chnl_net_init(struct net_device *dev)
+ 	ASSERT_RTNL();
+ 	priv = netdev_priv(dev);
+ 	strncpy(priv->name, dev->name, sizeof(priv->name));
++	INIT_LIST_HEAD(&priv->list_field);
+ 	return 0;
+ }
+ 
+@@ -372,7 +359,7 @@ static void chnl_net_uninit(struct net_device *dev)
+ 	struct chnl_net *priv;
+ 	ASSERT_RTNL();
+ 	priv = netdev_priv(dev);
+-	robust_list_del(&priv->list_field);
++	list_del_init(&priv->list_field);
+ }
+ 
+ static const struct net_device_ops netdev_ops = {
+@@ -537,7 +524,7 @@ static void __exit chnl_exit_module(void)
+ 	rtnl_lock();
+ 	list_for_each_safe(list_node, _tmp, &chnl_net_list) {
+ 		dev = list_entry(list_node, struct chnl_net, list_field);
+-		list_del(list_node);
++		list_del_init(list_node);
+ 		delete_device(dev);
+ 	}
+ 	rtnl_unlock();
+diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
+index c5c74a34d139d..91e7a22026971 100644
+--- a/net/dccp/minisocks.c
++++ b/net/dccp/minisocks.c
+@@ -94,6 +94,8 @@ struct sock *dccp_create_openreq_child(const struct sock *sk,
+ 		newdp->dccps_role	    = DCCP_ROLE_SERVER;
+ 		newdp->dccps_hc_rx_ackvec   = NULL;
+ 		newdp->dccps_service_list   = NULL;
++		newdp->dccps_hc_rx_ccid     = NULL;
++		newdp->dccps_hc_tx_ccid     = NULL;
+ 		newdp->dccps_service	    = dreq->dreq_service;
+ 		newdp->dccps_timestamp_echo = dreq->dreq_timestamp_echo;
+ 		newdp->dccps_timestamp_time = dreq->dreq_timestamp_time;
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 84cad1be9ce48..e058a2e320e35 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -345,6 +345,11 @@ bool dsa_schedule_work(struct work_struct *work)
+ 	return queue_work(dsa_owq, work);
+ }
+ 
++void dsa_flush_workqueue(void)
++{
++	flush_workqueue(dsa_owq);
++}
++
+ int dsa_devlink_param_get(struct devlink *dl, u32 id,
+ 			  struct devlink_param_gset_ctx *ctx)
+ {
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 185629f27f803..79267b00af68f 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -809,6 +809,33 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
+ 	ds->setup = false;
+ }
+ 
++/* First tear down the non-shared, then the shared ports. This ensures that
++ * all work items scheduled by our switchdev handlers for user ports have
++ * completed before we destroy the refcounting kept on the shared ports.
++ */
++static void dsa_tree_teardown_ports(struct dsa_switch_tree *dst)
++{
++	struct dsa_port *dp;
++
++	list_for_each_entry(dp, &dst->ports, list)
++		if (dsa_port_is_user(dp) || dsa_port_is_unused(dp))
++			dsa_port_teardown(dp);
++
++	dsa_flush_workqueue();
++
++	list_for_each_entry(dp, &dst->ports, list)
++		if (dsa_port_is_dsa(dp) || dsa_port_is_cpu(dp))
++			dsa_port_teardown(dp);
++}
++
++static void dsa_tree_teardown_switches(struct dsa_switch_tree *dst)
++{
++	struct dsa_port *dp;
++
++	list_for_each_entry(dp, &dst->ports, list)
++		dsa_switch_teardown(dp->ds);
++}
++
+ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
+ {
+ 	struct dsa_port *dp;
+@@ -835,26 +862,13 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
+ 	return 0;
+ 
+ teardown:
+-	list_for_each_entry(dp, &dst->ports, list)
+-		dsa_port_teardown(dp);
++	dsa_tree_teardown_ports(dst);
+ 
+-	list_for_each_entry(dp, &dst->ports, list)
+-		dsa_switch_teardown(dp->ds);
++	dsa_tree_teardown_switches(dst);
+ 
+ 	return err;
+ }
+ 
+-static void dsa_tree_teardown_switches(struct dsa_switch_tree *dst)
+-{
+-	struct dsa_port *dp;
+-
+-	list_for_each_entry(dp, &dst->ports, list)
+-		dsa_port_teardown(dp);
+-
+-	list_for_each_entry(dp, &dst->ports, list)
+-		dsa_switch_teardown(dp->ds);
+-}
+-
+ static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
+ {
+ 	struct dsa_port *dp;
+@@ -964,6 +978,8 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+ 
+ 	dsa_tree_teardown_master(dst);
+ 
++	dsa_tree_teardown_ports(dst);
++
+ 	dsa_tree_teardown_switches(dst);
+ 
+ 	dsa_tree_teardown_default_cpu(dst);
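
The new dsa_tree_teardown_ports() encodes a teardown protocol worth
spelling out: user and unused ports go first, the work queue is drained so
no deferred switchdev handler still references a shared port, and only then
are DSA and CPU ports destroyed. A small model of that two-phase teardown
with a drain barrier in between; the port table, teardown() and the flush
stand-in are invented:

#include <stdio.h>

enum port_type { PORT_USER, PORT_UNUSED, PORT_CPU, PORT_DSA };

struct port {
	const char *name;
	enum port_type type;
};

static void teardown(const struct port *p)
{
	printf("teardown %s\n", p->name);
}

static void flush_deferred_work(void)
{
	/* barrier: everything queued on behalf of user ports finishes
	 * before any shared (CPU/DSA) port goes away
	 */
	printf("-- flush workqueue --\n");
}

int main(void)
{
	struct port ports[] = {
		{ "cpu0", PORT_CPU }, { "lan1", PORT_USER },
		{ "lan2", PORT_USER }, { "dsa0", PORT_DSA },
	};
	int i, n = sizeof(ports) / sizeof(ports[0]);

	for (i = 0; i < n; i++)	/* phase 1: non-shared ports */
		if (ports[i].type == PORT_USER || ports[i].type == PORT_UNUSED)
			teardown(&ports[i]);
	flush_deferred_work();
	for (i = 0; i < n; i++)	/* phase 2: shared ports */
		if (ports[i].type == PORT_CPU || ports[i].type == PORT_DSA)
			teardown(&ports[i]);
	return 0;
}
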
+diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
+index cddf7cb0f398f..6c00557ca9bf4 100644
+--- a/net/dsa/dsa_priv.h
++++ b/net/dsa/dsa_priv.h
+@@ -158,6 +158,7 @@ void dsa_tag_driver_put(const struct dsa_device_ops *ops);
+ const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf);
+ 
+ bool dsa_schedule_work(struct work_struct *work);
++void dsa_flush_workqueue(void);
+ const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops);
+ 
+ static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index b34116b15d436..527fc20d47adf 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -1784,13 +1784,11 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
+ 		 * use the switch internal MDIO bus instead
+ 		 */
+ 		ret = dsa_slave_phy_connect(slave_dev, dp->index, phy_flags);
+-		if (ret) {
+-			netdev_err(slave_dev,
+-				   "failed to connect to port %d: %d\n",
+-				   dp->index, ret);
+-			phylink_destroy(dp->pl);
+-			return ret;
+-		}
++	}
++	if (ret) {
++		netdev_err(slave_dev, "failed to connect to PHY: %pe\n",
++			   ERR_PTR(ret));
++		phylink_destroy(dp->pl);
+ 	}
+ 
+ 	return ret;
+diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
+index 57c46b4ab2b3f..e34b80fa52e1d 100644
+--- a/net/dsa/tag_rtl4_a.c
++++ b/net/dsa/tag_rtl4_a.c
+@@ -54,9 +54,10 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
+ 	p = (__be16 *)tag;
+ 	*p = htons(RTL4_A_ETHERTYPE);
+ 
+-	out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
+-	/* The lower bits is the port number */
+-	out |= (u8)dp->index;
++	out = (RTL4_A_PROTOCOL_RTL8366RB << RTL4_A_PROTOCOL_SHIFT) | (2 << 8);
++	/* The lower bits indicate the port number */
++	out |= BIT(dp->index);
++
+ 	p = (__be16 *)(tag + 2);
+ 	*p = htons(out);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 6134b180f59f8..af011534bcb24 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -906,7 +906,7 @@ static int ethtool_rxnfc_copy_to_user(void __user *useraddr,
+ 						   rule_buf);
+ 		useraddr += offsetof(struct compat_ethtool_rxnfc, rule_locs);
+ 	} else {
+-		ret = copy_to_user(useraddr, &rxnfc, size);
++		ret = copy_to_user(useraddr, rxnfc, size);
+ 		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
+ 	}
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 7fbd0b532f529..099259fc826aa 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -465,16 +465,14 @@ void cipso_v4_doi_free(struct cipso_v4_doi *doi_def)
+ 	if (!doi_def)
+ 		return;
+ 
+-	if (doi_def->map.std) {
+-		switch (doi_def->type) {
+-		case CIPSO_V4_MAP_TRANS:
+-			kfree(doi_def->map.std->lvl.cipso);
+-			kfree(doi_def->map.std->lvl.local);
+-			kfree(doi_def->map.std->cat.cipso);
+-			kfree(doi_def->map.std->cat.local);
+-			kfree(doi_def->map.std);
+-			break;
+-		}
++	switch (doi_def->type) {
++	case CIPSO_V4_MAP_TRANS:
++		kfree(doi_def->map.std->lvl.cipso);
++		kfree(doi_def->map.std->lvl.local);
++		kfree(doi_def->map.std->cat.cipso);
++		kfree(doi_def->map.std->cat.local);
++		kfree(doi_def->map.std);
++		break;
+ 	}
+ 	kfree(doi_def);
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 95419b7adf5ce..6480c6dfe1bf9 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -473,8 +473,6 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ static int gre_handle_offloads(struct sk_buff *skb, bool csum)
+ {
+-	if (csum && skb_checksum_start(skb) < skb->data)
+-		return -EINVAL;
+ 	return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
+ }
+ 
+@@ -632,15 +630,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ 	}
+ 
+ 	if (dev->header_ops) {
++		const int pull_len = tunnel->hlen + sizeof(struct iphdr);
++
+ 		if (skb_cow_head(skb, 0))
+ 			goto free_skb;
+ 
+ 		tnl_params = (const struct iphdr *)skb->data;
+ 
++		if (pull_len > skb_transport_offset(skb))
++			goto free_skb;
++
+ 		/* Pull skb since ip_tunnel_xmit() needs skb->data pointing
+ 		 * to gre header.
+ 		 */
+-		skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
++		skb_pull(skb, pull_len);
+ 		skb_reset_mac_header(skb);
+ 	} else {
+ 		if (skb_cow_head(skb, dev->needed_headroom))
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 4075230b14c63..75ca4b6e484f4 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -2490,6 +2490,7 @@ static int nh_create_ipv4(struct net *net, struct nexthop *nh,
+ 		.fc_gw4   = cfg->gw.ipv4,
+ 		.fc_gw_family = cfg->gw.ipv4 ? AF_INET : 0,
+ 		.fc_flags = cfg->nh_flags,
++		.fc_nlinfo = cfg->nlinfo,
+ 		.fc_encap = cfg->nh_encap,
+ 		.fc_encap_type = cfg->nh_encap_type,
+ 	};
+@@ -2528,6 +2529,7 @@ static int nh_create_ipv6(struct net *net,  struct nexthop *nh,
+ 		.fc_ifindex = cfg->nh_ifindex,
+ 		.fc_gateway = cfg->gw.ipv6,
+ 		.fc_flags = cfg->nh_flags,
++		.fc_nlinfo = cfg->nlinfo,
+ 		.fc_encap = cfg->nh_encap,
+ 		.fc_encap_type = cfg->nh_encap_type,
+ 		.fc_is_fdb = cfg->nh_fdb,
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 149ceb5c94ffc..66d9085da87ed 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1314,7 +1314,7 @@ static u8 tcp_sacktag_one(struct sock *sk,
+ 	if (dup_sack && (sacked & TCPCB_RETRANS)) {
+ 		if (tp->undo_marker && tp->undo_retrans > 0 &&
+ 		    after(end_seq, tp->undo_marker))
+-			tp->undo_retrans--;
++			tp->undo_retrans = max_t(int, 0, tp->undo_retrans - pcount);
+ 		if ((sacked & TCPCB_SACKED_ACKED) &&
+ 		    before(start_seq, state->reord))
+ 				state->reord = start_seq;
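
The one-liner above fixes an accounting unit: tcp_sacktag_one() may be
handed a SACK block covering several packets (pcount > 1), so decrementing
undo_retrans by one per call undercounts, and subtracting pcount without a
floor could drive it negative. The max_t(int, 0, ...) clamp handles both;
a minimal illustration of the same arithmetic:

#include <stdio.h>

/* equivalent of max_t(int, 0, x) */
static int clamp_floor0(int x)
{
	return x > 0 ? x : 0;
}

int main(void)
{
	int undo_retrans = 2;
	int pcount = 3;	/* one SACK block acknowledged three packets */

	undo_retrans = clamp_floor0(undo_retrans - pcount);
	printf("undo_retrans=%d\n", undo_retrans);	/* 0, not -1 */
	return 0;
}
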
+diff --git a/net/ipv4/udp_tunnel_nic.c b/net/ipv4/udp_tunnel_nic.c
+index 0d122edc368dd..b91003538d87a 100644
+--- a/net/ipv4/udp_tunnel_nic.c
++++ b/net/ipv4/udp_tunnel_nic.c
+@@ -935,7 +935,7 @@ static int __init udp_tunnel_nic_init_module(void)
+ {
+ 	int err;
+ 
+-	udp_tunnel_nic_workqueue = alloc_workqueue("udp_tunnel_nic", 0, 0);
++	udp_tunnel_nic_workqueue = alloc_ordered_workqueue("udp_tunnel_nic", 0);
+ 	if (!udp_tunnel_nic_workqueue)
+ 		return -ENOMEM;
+ 
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 7a5e90e093630..bc224f917bbd5 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -629,8 +629,6 @@ drop:
+ 
+ static int gre_handle_offloads(struct sk_buff *skb, bool csum)
+ {
+-	if (csum && skb_checksum_start(skb) < skb->data)
+-		return -EINVAL;
+ 	return iptunnel_handle_offloads(skb,
+ 					csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
+ }
+diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c
+index 6fd54744cbc38..aa5bb8789ba0b 100644
+--- a/net/ipv6/netfilter/nf_socket_ipv6.c
++++ b/net/ipv6/netfilter/nf_socket_ipv6.c
+@@ -99,7 +99,7 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ {
+ 	__be16 dport, sport;
+ 	const struct in6_addr *daddr = NULL, *saddr = NULL;
+-	struct ipv6hdr *iph = ipv6_hdr(skb);
++	struct ipv6hdr *iph = ipv6_hdr(skb), ipv6_var;
+ 	struct sk_buff *data_skb = NULL;
+ 	int doff = 0;
+ 	int thoff = 0, tproto;
+@@ -129,8 +129,6 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ 			thoff + sizeof(*hp);
+ 
+ 	} else if (tproto == IPPROTO_ICMPV6) {
+-		struct ipv6hdr ipv6_var;
+-
+ 		if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr,
+ 					 &sport, &dport, &ipv6_var))
+ 			return NULL;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 53486b162f01c..93271a2632b8e 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -869,8 +869,10 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
+ 	}
+ 
+ 	if (tunnel->version == L2TP_HDR_VER_3 &&
+-	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
++	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) {
++		l2tp_session_dec_refcount(session);
+ 		goto invalid;
++	}
+ 
+ 	l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
+ 	l2tp_session_dec_refcount(session);
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 7b37944597833..89251cbe9f1a7 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -540,7 +540,6 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ 	subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
+ 	if (subflow) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+-		bool slow;
+ 
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		pr_debug("send ack for %s%s%s",
+@@ -548,9 +547,7 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ 			 mptcp_pm_should_add_signal_ipv6(msk) ? " [ipv6]" : "",
+ 			 mptcp_pm_should_add_signal_port(msk) ? " [port]" : "");
+ 
+-		slow = lock_sock_fast(ssk);
+-		tcp_send_ack(ssk);
+-		unlock_sock_fast(ssk, slow);
++		mptcp_subflow_send_ack(ssk);
+ 		spin_lock_bh(&msk->pm.lock);
+ 	}
+ }
+@@ -567,7 +564,6 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 		struct sock *sk = (struct sock *)msk;
+ 		struct mptcp_addr_info local;
+-		bool slow;
+ 
+ 		local_address((struct sock_common *)ssk, &local);
+ 		if (!addresses_equal(&local, addr, addr->port))
+@@ -580,9 +576,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		pr_debug("send ack for mp_prio");
+-		slow = lock_sock_fast(ssk);
+-		tcp_send_ack(ssk);
+-		unlock_sock_fast(ssk, slow);
++		mptcp_subflow_send_ack(ssk);
+ 		spin_lock_bh(&msk->pm.lock);
+ 
+ 		return 0;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index a889249478152..acbead7cf50f0 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -427,19 +427,22 @@ static bool tcp_can_send_ack(const struct sock *ssk)
+ 	       (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
+ }
+ 
++void mptcp_subflow_send_ack(struct sock *ssk)
++{
++	bool slow;
++
++	slow = lock_sock_fast(ssk);
++	if (tcp_can_send_ack(ssk))
++		tcp_send_ack(ssk);
++	unlock_sock_fast(ssk, slow);
++}
++
+ static void mptcp_send_ack(struct mptcp_sock *msk)
+ {
+ 	struct mptcp_subflow_context *subflow;
+ 
+-	mptcp_for_each_subflow(msk, subflow) {
+-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+-		bool slow;
+-
+-		slow = lock_sock_fast(ssk);
+-		if (tcp_can_send_ack(ssk))
+-			tcp_send_ack(ssk);
+-		unlock_sock_fast(ssk, slow);
+-	}
++	mptcp_for_each_subflow(msk, subflow)
++		mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow));
+ }
+ 
+ static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
+@@ -994,6 +997,13 @@ static void mptcp_wmem_uncharge(struct sock *sk, int size)
+ 	msk->wmem_reserved += size;
+ }
+ 
++static void __mptcp_mem_reclaim_partial(struct sock *sk)
++{
++	lockdep_assert_held_once(&sk->sk_lock.slock);
++	__mptcp_update_wmem(sk);
++	sk_mem_reclaim_partial(sk);
++}
++
+ static void mptcp_mem_reclaim_partial(struct sock *sk)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+@@ -1069,12 +1079,8 @@ static void __mptcp_clean_una(struct sock *sk)
+ 	}
+ 
+ out:
+-	if (cleaned) {
+-		if (tcp_under_memory_pressure(sk)) {
+-			__mptcp_update_wmem(sk);
+-			sk_mem_reclaim_partial(sk);
+-		}
+-	}
++	if (cleaned && tcp_under_memory_pressure(sk))
++		__mptcp_mem_reclaim_partial(sk);
+ 
+ 	if (snd_una == READ_ONCE(msk->snd_nxt)) {
+ 		if (msk->timer_ival && !mptcp_data_fin_enabled(msk))
+@@ -1154,6 +1160,7 @@ struct mptcp_sendmsg_info {
+ 	u16 limit;
+ 	u16 sent;
+ 	unsigned int flags;
++	bool data_lock_held;
+ };
+ 
+ static int mptcp_check_allowed_size(struct mptcp_sock *msk, u64 data_seq,
+@@ -1225,17 +1232,17 @@ static bool __mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, gfp_t gfp)
+ 	return false;
+ }
+ 
+-static bool mptcp_must_reclaim_memory(struct sock *sk, struct sock *ssk)
++static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, bool data_lock_held)
+ {
+-	return !ssk->sk_tx_skb_cache &&
+-	       tcp_under_memory_pressure(sk);
+-}
++	gfp_t gfp = data_lock_held ? GFP_ATOMIC : sk->sk_allocation;
+ 
+-static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk)
+-{
+-	if (unlikely(mptcp_must_reclaim_memory(sk, ssk)))
+-		mptcp_mem_reclaim_partial(sk);
+-	return __mptcp_alloc_tx_skb(sk, ssk, sk->sk_allocation);
++	if (unlikely(tcp_under_memory_pressure(sk))) {
++		if (data_lock_held)
++			__mptcp_mem_reclaim_partial(sk);
++		else
++			mptcp_mem_reclaim_partial(sk);
++	}
++	return __mptcp_alloc_tx_skb(sk, ssk, gfp);
+ }
+ 
+ /* note: this always recompute the csum on the whole skb, even
+@@ -1259,7 +1266,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	bool zero_window_probe = false;
+ 	struct mptcp_ext *mpext = NULL;
+ 	struct sk_buff *skb, *tail;
+-	bool can_collapse = false;
++	bool must_collapse = false;
+ 	int size_bias = 0;
+ 	int avail_size;
+ 	size_t ret = 0;
+@@ -1279,16 +1286,24 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 		 * SSN association set here
+ 		 */
+ 		mpext = skb_ext_find(skb, SKB_EXT_MPTCP);
+-		can_collapse = (info->size_goal - skb->len > 0) &&
+-			 mptcp_skb_can_collapse_to(data_seq, skb, mpext);
+-		if (!can_collapse) {
++		if (!mptcp_skb_can_collapse_to(data_seq, skb, mpext)) {
+ 			TCP_SKB_CB(skb)->eor = 1;
+-		} else {
++			goto alloc_skb;
++		}
++
++		must_collapse = (info->size_goal - skb->len > 0) &&
++				(skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags);
++		if (must_collapse) {
+ 			size_bias = skb->len;
+ 			avail_size = info->size_goal - skb->len;
+ 		}
+ 	}
+ 
++alloc_skb:
++	if (!must_collapse && !ssk->sk_tx_skb_cache &&
++	    !mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held))
++		return 0;
++
+ 	/* Zero window and all data acked? Probe. */
+ 	avail_size = mptcp_check_allowed_size(msk, data_seq, avail_size);
+ 	if (avail_size == 0) {
+@@ -1318,7 +1333,6 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	if (skb == tail) {
+ 		TCP_SKB_CB(tail)->tcp_flags &= ~TCPHDR_PSH;
+ 		mpext->data_len += ret;
+-		WARN_ON_ONCE(!can_collapse);
+ 		WARN_ON_ONCE(zero_window_probe);
+ 		goto out;
+ 	}
+@@ -1470,15 +1484,6 @@ static void __mptcp_push_pending(struct sock *sk, unsigned int flags)
+ 			if (ssk != prev_ssk || !prev_ssk)
+ 				lock_sock(ssk);
+ 
+-			/* keep it simple and always provide a new skb for the
+-			 * subflow, even if we will not use it when collapsing
+-			 * on the pending one
+-			 */
+-			if (!mptcp_alloc_tx_skb(sk, ssk)) {
+-				mptcp_push_release(sk, ssk, &info);
+-				goto out;
+-			}
+-
+ 			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ 			if (ret <= 0) {
+ 				mptcp_push_release(sk, ssk, &info);
+@@ -1512,7 +1517,9 @@ out:
+ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
+ {
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+-	struct mptcp_sendmsg_info info;
++	struct mptcp_sendmsg_info info = {
++		.data_lock_held = true,
++	};
+ 	struct mptcp_data_frag *dfrag;
+ 	struct sock *xmit_ssk;
+ 	int len, copied = 0;
+@@ -1538,13 +1545,6 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
+ 				goto out;
+ 			}
+ 
+-			if (unlikely(mptcp_must_reclaim_memory(sk, ssk))) {
+-				__mptcp_update_wmem(sk);
+-				sk_mem_reclaim_partial(sk);
+-			}
+-			if (!__mptcp_alloc_tx_skb(sk, ssk, GFP_ATOMIC))
+-				goto out;
+-
+ 			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ 			if (ret <= 0)
+ 				goto out;
+@@ -2296,9 +2296,6 @@ static void __mptcp_retrans(struct sock *sk)
+ 	info.sent = 0;
+ 	info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len : dfrag->already_sent;
+ 	while (info.sent < info.limit) {
+-		if (!mptcp_alloc_tx_skb(sk, ssk))
+-			break;
+-
+ 		ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ 		if (ret <= 0)
+ 			break;
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 0f0c026c5f8bb..6ac564d584c19 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -560,6 +560,7 @@ void __init mptcp_subflow_init(void);
+ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how);
+ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 		     struct mptcp_subflow_context *subflow);
++void mptcp_subflow_send_ack(struct sock *ssk);
+ void mptcp_subflow_reset(struct sock *ssk);
+ void mptcp_sock_graft(struct sock *sk, struct socket *parent);
+ struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk);
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 337e22d8b40b1..99b1de14ff7ee 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -41,6 +41,7 @@ struct nft_ct_helper_obj  {
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ static DEFINE_PER_CPU(struct nf_conn *, nft_ct_pcpu_template);
+ static unsigned int nft_ct_pcpu_template_refcnt __read_mostly;
++static DEFINE_MUTEX(nft_ct_pcpu_mutex);
+ #endif
+ 
+ static u64 nft_ct_get_eval_counter(const struct nf_conn_counter *c,
+@@ -525,8 +526,10 @@ static void __nft_ct_set_destroy(const struct nft_ctx *ctx, struct nft_ct *priv)
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ 	case NFT_CT_ZONE:
++		mutex_lock(&nft_ct_pcpu_mutex);
+ 		if (--nft_ct_pcpu_template_refcnt == 0)
+ 			nft_ct_tmpl_put_pcpu();
++		mutex_unlock(&nft_ct_pcpu_mutex);
+ 		break;
+ #endif
+ 	default:
+@@ -564,9 +567,13 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_ZONES
+ 	case NFT_CT_ZONE:
+-		if (!nft_ct_tmpl_alloc_pcpu())
++		mutex_lock(&nft_ct_pcpu_mutex);
++		if (!nft_ct_tmpl_alloc_pcpu()) {
++			mutex_unlock(&nft_ct_pcpu_mutex);
+ 			return -ENOMEM;
++		}
+ 		nft_ct_pcpu_template_refcnt++;
++		mutex_unlock(&nft_ct_pcpu_mutex);
+ 		len = sizeof(u16);
+ 		break;
+ #endif
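
The new nft_ct_pcpu_mutex closes a race on the allocate-then-count pair:
without it, two concurrent NFT_CT_ZONE expressions could interleave the
template allocation with the refcount update. The underlying shape is a
mutex-guarded refcounted singleton; a pthreads sketch of that pattern,
with the tmpl_get()/tmpl_put() names invented for this example:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tmpl_lock = PTHREAD_MUTEX_INITIALIZER;
static void *tmpl;		/* lazily allocated shared object */
static unsigned int tmpl_refcnt;

static int tmpl_get(void)
{
	pthread_mutex_lock(&tmpl_lock);
	if (!tmpl) {
		tmpl = malloc(64);
		if (!tmpl) {
			pthread_mutex_unlock(&tmpl_lock);
			return -1;	/* -ENOMEM in the kernel code */
		}
	}
	tmpl_refcnt++;	/* allocation and count move together */
	pthread_mutex_unlock(&tmpl_lock);
	return 0;
}

static void tmpl_put(void)
{
	pthread_mutex_lock(&tmpl_lock);
	if (--tmpl_refcnt == 0) {
		free(tmpl);
		tmpl = NULL;
	}
	pthread_mutex_unlock(&tmpl_lock);
}

int main(void)
{
	tmpl_get();
	tmpl_get();
	tmpl_put();
	tmpl_put();
	printf("refcnt=%u tmpl=%p\n", tmpl_refcnt, tmpl);
	return 0;
}
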
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index bdbda61db8b96..d3c0cae813c65 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -493,7 +493,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		goto err;
+ 	}
+ 
+-	if (!size || size & 3 || len != size + hdrlen)
++	if (!size || len != ALIGN(size, 4) + hdrlen)
+ 		goto err;
+ 
+ 	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index bbd5f87536006..99e8db2621984 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -369,6 +369,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct fq_codel_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_FQ_CODEL_MAX + 1];
++	u32 quantum = 0;
+ 	int err;
+ 
+ 	if (!opt)
+@@ -386,6 +387,13 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 		    q->flows_cnt > 65536)
+ 			return -EINVAL;
+ 	}
++	if (tb[TCA_FQ_CODEL_QUANTUM]) {
++		quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
++		if (quantum > FQ_CODEL_QUANTUM_MAX) {
++			NL_SET_ERR_MSG(extack, "Invalid quantum");
++			return -EINVAL;
++		}
++	}
+ 	sch_tree_lock(sch);
+ 
+ 	if (tb[TCA_FQ_CODEL_TARGET]) {
+@@ -412,8 +420,8 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (tb[TCA_FQ_CODEL_ECN])
+ 		q->cparams.ecn = !!nla_get_u32(tb[TCA_FQ_CODEL_ECN]);
+ 
+-	if (tb[TCA_FQ_CODEL_QUANTUM])
+-		q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
++	if (quantum)
++		q->quantum = quantum;
+ 
+ 	if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])
+ 		q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
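
The quantum handling above follows a validate-before-commit discipline:
the attribute is range-checked before sch_tree_lock() is taken, so a bad
value returns -EINVAL without touching qdisc state, and the 1 << 20 cap
keeps later byte arithmetic away from overflow. A sketch of that pattern;
change_cfg() is an invented stand-in for fq_codel_change(), with the 256
floor and FQ_CODEL_QUANTUM_MAX cap taken from the patch:

#include <stdio.h>

#define QUANTUM_MIN 256U
#define QUANTUM_MAX (1U << 20)	/* FQ_CODEL_QUANTUM_MAX */

struct sched_cfg {
	unsigned int quantum;
};

/* validate first; "commit" only if every check passed */
static int change_cfg(struct sched_cfg *cfg, unsigned int requested)
{
	unsigned int quantum = requested < QUANTUM_MIN ? QUANTUM_MIN : requested;

	if (quantum > QUANTUM_MAX)
		return -1;	/* -EINVAL before any state was touched */

	/* ... sch_tree_lock() would be taken here ... */
	cfg->quantum = quantum;
	return 0;
}

int main(void)
{
	struct sched_cfg cfg = { .quantum = 1514 };
	int rc;

	rc = change_cfg(&cfg, 64);
	printf("set 64:      rc=%d quantum=%u\n", rc, cfg.quantum);
	rc = change_cfg(&cfg, 1U << 21);
	printf("set 1 << 21: rc=%d quantum=%u\n", rc, cfg.quantum);
	return 0;
}
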
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index a155cfaf01f2e..50762be9c115e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1979,10 +1979,12 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
+ 	}
+ 
+-	if (!skb_cb->bytes_read)
+-		tsk_advance_rx_queue(sk);
++	if (skb_cb->bytes_read)
++		goto exit;
++
++	tsk_advance_rx_queue(sk);
+ 
+-	if (likely(!connected) || skb_cb->bytes_read)
++	if (likely(!connected))
+ 		goto exit;
+ 
+ 	/* Send connection flow control advertisement when applicable */
+@@ -2421,7 +2423,7 @@ static int tipc_sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
+ 			    u32 dport, struct sk_buff_head *xmitq)
+ {
+-	unsigned long time_limit = jiffies + 2;
++	unsigned long time_limit = jiffies + usecs_to_jiffies(20000);
+ 	struct sk_buff *skb;
+ 	unsigned int lim;
+ 	atomic_t *dcnt;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index ba7ced947e51c..91ff09d833e8f 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2774,7 +2774,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock,
+ 
+ 		other = unix_peer(sk);
+ 		if (other && unix_peer(other) != sk &&
+-		    unix_recvq_full(other) &&
++		    unix_recvq_full_lockless(other) &&
+ 		    unix_dgram_peer_wake_me(sk, other))
+ 			writable = 0;
+ 
+diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
+index b7e9ecf16e569..a70cd064bfc4b 100755
+--- a/scripts/clang-tools/gen_compile_commands.py
++++ b/scripts/clang-tools/gen_compile_commands.py
+@@ -13,6 +13,7 @@ import logging
+ import os
+ import re
+ import subprocess
++import sys
+ 
+ _DEFAULT_OUTPUT = 'compile_commands.json'
+ _DEFAULT_LOG_LEVEL = 'WARNING'
+diff --git a/tools/build/Makefile b/tools/build/Makefile
+index 5ed41b96fcded..6f11e6fc9ffe3 100644
+--- a/tools/build/Makefile
++++ b/tools/build/Makefile
+@@ -32,7 +32,7 @@ all: $(OUTPUT)fixdep
+ 
+ # Make sure there's anything to clean,
+ # feature contains check for existing OUTPUT
+-TMP_O := $(if $(OUTPUT),$(OUTPUT)/feature,./)
++TMP_O := $(if $(OUTPUT),$(OUTPUT)feature/,./)
+ 
+ clean:
+ 	$(call QUIET_CLEAN, fixdep)
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index eb8e487ef90b0..29ffd57f5cd8d 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -133,10 +133,10 @@ FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ 
+-FEATURE_CHECK_LDFLAGS-libunwind-arm = -lunwind -lunwind-arm
+-FEATURE_CHECK_LDFLAGS-libunwind-aarch64 = -lunwind -lunwind-aarch64
+-FEATURE_CHECK_LDFLAGS-libunwind-x86 = -lunwind -llzma -lunwind-x86
+-FEATURE_CHECK_LDFLAGS-libunwind-x86_64 = -lunwind -llzma -lunwind-x86_64
++FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
++FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
++FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
++FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
+ 
+ FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
+ 
+diff --git a/tools/perf/bench/inject-buildid.c b/tools/perf/bench/inject-buildid.c
+index 55d373b75791b..17672790f1231 100644
+--- a/tools/perf/bench/inject-buildid.c
++++ b/tools/perf/bench/inject-buildid.c
+@@ -133,7 +133,7 @@ static u64 dso_map_addr(struct bench_dso *dso)
+ 	return 0x400000ULL + dso->ino * 8192ULL;
+ }
+ 
+-static u32 synthesize_attr(struct bench_data *data)
++static ssize_t synthesize_attr(struct bench_data *data)
+ {
+ 	union perf_event event;
+ 
+@@ -151,7 +151,7 @@ static u32 synthesize_attr(struct bench_data *data)
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_fork(struct bench_data *data)
++static ssize_t synthesize_fork(struct bench_data *data)
+ {
+ 	union perf_event event;
+ 
+@@ -169,8 +169,7 @@ static u32 synthesize_fork(struct bench_data *data)
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_mmap(struct bench_data *data, struct bench_dso *dso,
+-			   u64 timestamp)
++static ssize_t synthesize_mmap(struct bench_data *data, struct bench_dso *dso, u64 timestamp)
+ {
+ 	union perf_event event;
+ 	size_t len = offsetof(struct perf_record_mmap2, filename);
+@@ -198,23 +197,25 @@ static u32 synthesize_mmap(struct bench_data *data, struct bench_dso *dso,
+ 
+ 	if (len > sizeof(event.mmap2)) {
+ 		/* write mmap2 event first */
+-		writen(data->input_pipe[1], &event, len - bench_id_hdr_size);
++		if (writen(data->input_pipe[1], &event, len - bench_id_hdr_size) < 0)
++			return -1;
+ 		/* zero-fill sample id header */
+ 		memset(id_hdr_ptr, 0, bench_id_hdr_size);
+ 		/* put timestamp in the right position */
+ 		ts_idx = (bench_id_hdr_size / sizeof(u64)) - 2;
+ 		id_hdr_ptr[ts_idx] = timestamp;
+-		writen(data->input_pipe[1], id_hdr_ptr, bench_id_hdr_size);
+-	} else {
+-		ts_idx = (len / sizeof(u64)) - 2;
+-		id_hdr_ptr[ts_idx] = timestamp;
+-		writen(data->input_pipe[1], &event, len);
++		if (writen(data->input_pipe[1], id_hdr_ptr, bench_id_hdr_size) < 0)
++			return -1;
++
++		return len;
+ 	}
+-	return len;
++
++	ts_idx = (len / sizeof(u64)) - 2;
++	id_hdr_ptr[ts_idx] = timestamp;
++	return writen(data->input_pipe[1], &event, len);
+ }
+ 
+-static u32 synthesize_sample(struct bench_data *data, struct bench_dso *dso,
+-			     u64 timestamp)
++static ssize_t synthesize_sample(struct bench_data *data, struct bench_dso *dso, u64 timestamp)
+ {
+ 	union perf_event event;
+ 	struct perf_sample sample = {
+@@ -233,7 +234,7 @@ static u32 synthesize_sample(struct bench_data *data, struct bench_dso *dso,
+ 	return writen(data->input_pipe[1], &event, event.header.size);
+ }
+ 
+-static u32 synthesize_flush(struct bench_data *data)
++static ssize_t synthesize_flush(struct bench_data *data)
+ {
+ 	struct perf_event_header header = {
+ 		.size = sizeof(header),
+@@ -348,14 +349,16 @@ static int inject_build_id(struct bench_data *data, u64 *max_rss)
+ 	int status;
+ 	unsigned int i, k;
+ 	struct rusage rusage;
+-	u64 len = 0;
+ 
+ 	/* this makes the child run */
+ 	if (perf_header__write_pipe(data->input_pipe[1]) < 0)
+ 		return -1;
+ 
+-	len += synthesize_attr(data);
+-	len += synthesize_fork(data);
++	if (synthesize_attr(data) < 0)
++		return -1;
++
++	if (synthesize_fork(data) < 0)
++		return -1;
+ 
+ 	for (i = 0; i < nr_mmaps; i++) {
+ 		int idx = rand() % (nr_dsos - 1);
+@@ -363,13 +366,18 @@ static int inject_build_id(struct bench_data *data, u64 *max_rss)
+ 		u64 timestamp = rand() % 1000000;
+ 
+ 		pr_debug2("   [%d] injecting: %s\n", i+1, dso->name);
+-		len += synthesize_mmap(data, dso, timestamp);
++		if (synthesize_mmap(data, dso, timestamp) < 0)
++			return -1;
+ 
+-		for (k = 0; k < nr_samples; k++)
+-			len += synthesize_sample(data, dso, timestamp + k * 1000);
++		for (k = 0; k < nr_samples; k++) {
++			if (synthesize_sample(data, dso, timestamp + k * 1000) < 0)
++				return -1;
++		}
+ 
+-		if ((i + 1) % 10 == 0)
+-			len += synthesize_flush(data);
++		if ((i + 1) % 10 == 0) {
++			if (synthesize_flush(data) < 0)
++				return -1;
++		}
+ 	}
+ 
+ 	/* this makes the child finish */
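
The type change running through these hunks is the actual fix: writen()
returns a negative value on error, and storing it in a u32 turns -1 into
4294967295, so an "is it negative" check can never fire. A short
demonstration of that trap; failing_write() is an invented stand-in:

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

/* invented stand-in for a write helper that failed */
static ssize_t failing_write(void)
{
	return -1;
}

int main(void)
{
	uint32_t as_u32 = failing_write();	/* old return type */
	ssize_t as_ssize = failing_write();	/* fixed return type */

	/* the unsigned comparison is always false, hiding the error */
	printf("u32 copy: %u, (as_u32 < 0) is %d\n", as_u32, as_u32 < 0);
	printf("ssize_t copy: %zd, (as_ssize < 0) is %d\n", as_ssize, as_ssize < 0);
	return 0;
}
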
+diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c
+index 63d472b336de2..4fb5e90d7a57a 100644
+--- a/tools/perf/util/config.c
++++ b/tools/perf/util/config.c
+@@ -581,7 +581,10 @@ const char *perf_home_perfconfig(void)
+ 	static const char *config;
+ 	static bool failed;
+ 
+-	config = failed ? NULL : home_perfconfig();
++	if (failed || config)
++		return config;
++
++	config = home_perfconfig();
+ 	if (!config)
+ 		failed = true;
+ 
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index da19be7da284c..44e40bad0e336 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2149,6 +2149,7 @@ static int add_callchain_ip(struct thread *thread,
+ 
+ 	al.filtered = 0;
+ 	al.sym = NULL;
++	al.srcline = NULL;
+ 	if (!cpumode) {
+ 		thread__find_cpumode_addr_location(thread, ip, &al);
+ 	} else {
+diff --git a/tools/testing/selftests/net/altnames.sh b/tools/testing/selftests/net/altnames.sh
+index 4254ddc3f70b5..1ef9e4159bba8 100755
+--- a/tools/testing/selftests/net/altnames.sh
++++ b/tools/testing/selftests/net/altnames.sh
+@@ -45,7 +45,7 @@ altnames_test()
+ 	check_err $? "Got unexpected long alternative name from link show JSON"
+ 
+ 	ip link property del $DUMMY_DEV altname $SHORT_NAME
+-	check_err $? "Failed to add short alternative name"
++	check_err $? "Failed to delete short alternative name"
+ 
+ 	ip -j -p link show $SHORT_NAME &>/dev/null
+ 	check_fail $? "Unexpected success while trying to do link show with deleted short alternative name"
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index fd63ebfe9a2b7..910d8126af8f2 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -22,8 +22,8 @@ usage() {
+ 
+ cleanup()
+ {
+-	rm -f "$cin" "$cout"
+-	rm -f "$sin" "$sout"
++	rm -f "$cout" "$sout"
++	rm -f "$large" "$small"
+ 	rm -f "$capout"
+ 
+ 	local netns



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-26 14:11 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-26 14:11 UTC (permalink / raw
  To: gentoo-commits

commit:     225c81bb151672296321343fb081bac386ff4dee
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 26 14:11:30 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 26 14:11:30 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=225c81bb

Linux patch 5.14.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-5.14.8.patch | 6040 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6044 insertions(+)

diff --git a/0000_README b/0000_README
index 0c8fa67..dcc9f9a 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1006_linux-5.14.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.7
 
+Patch:  1007_linux-5.14.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.14.8.patch b/1007_linux-5.14.8.patch
new file mode 100644
index 0000000..15b9ec2
--- /dev/null
+++ b/1007_linux-5.14.8.patch
@@ -0,0 +1,6040 @@
+diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
+index 487ce4f41d770..a86e2c7c551ab 100644
+--- a/Documentation/driver-api/cxl/memory-devices.rst
++++ b/Documentation/driver-api/cxl/memory-devices.rst
+@@ -36,7 +36,7 @@ CXL Core
+ .. kernel-doc:: drivers/cxl/cxl.h
+    :internal:
+ 
+-.. kernel-doc:: drivers/cxl/core.c
++.. kernel-doc:: drivers/cxl/core/bus.c
+    :doc: cxl core
+ 
+ External Interfaces
+diff --git a/Makefile b/Makefile
+index efb603f06e711..d6b4737194b88 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index 7fa6828bb488a..587543c6c51cb 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -43,7 +43,7 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+ 	this_leaf->type = type;
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	unsigned int ctype, level, leaves, fw_level;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -78,7 +78,7 @@ static int __init_cache_level(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	unsigned int level, idx;
+ 	enum cache_type type;
+@@ -97,6 +97,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 	}
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 1fdb7bb7c1984..0ad4afc9359b5 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -319,7 +319,21 @@ static void __init fdt_enforce_memory_region(void)
+ 
+ void __init arm64_memblock_init(void)
+ {
+-	const s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
++	s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
++
++	/*
++	 * Corner case: 52-bit VA capable systems running KVM in nVHE mode may
++	 * be limited in their ability to support a linear map that exceeds 51
++	 * bits of VA space, depending on the placement of the ID map. Given
++	 * that the placement of the ID map may be randomized, let's simply
++	 * limit the kernel's linear map to 51 bits as well if we detect this
++	 * configuration.
++	 */
++	if (IS_ENABLED(CONFIG_KVM) && vabits_actual == 52 &&
++	    is_hyp_mode_available() && !is_kernel_in_hyp_mode()) {
++		pr_info("Capping linear region to 51 bits for KVM in nVHE mode on LVA capable hardware.\n");
++		linear_region_size = min_t(u64, linear_region_size, BIT(51));
++	}
+ 
+ 	/* Handle linux,usable-memory-range property */
+ 	fdt_enforce_memory_region();
+diff --git a/arch/mips/kernel/cacheinfo.c b/arch/mips/kernel/cacheinfo.c
+index 53d8ea7d36e6d..495dd058231d9 100644
+--- a/arch/mips/kernel/cacheinfo.c
++++ b/arch/mips/kernel/cacheinfo.c
+@@ -17,7 +17,7 @@ do {								\
+ 	leaf++;							\
+ } while (0)
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -74,7 +74,7 @@ static void fill_cpumask_cluster(int cpu, cpumask_t *cpu_map)
+ 			cpumask_set_cpu(cpu1, cpu_map);
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	struct cpuinfo_mips *c = &current_cpu_data;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -114,6 +114,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts b/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
+index baea7d204639a..b254c60589a1c 100644
+--- a/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
++++ b/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
+@@ -16,10 +16,14 @@
+ 
+ 	aliases {
+ 		ethernet0 = &emac1;
++		serial0 = &serial0;
++		serial1 = &serial1;
++		serial2 = &serial2;
++		serial3 = &serial3;
+ 	};
+ 
+ 	chosen {
+-		stdout-path = &serial0;
++		stdout-path = "serial0:115200n8";
+ 	};
+ 
+ 	cpus {
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index d867813570442..90deabfe63eaa 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -113,7 +113,7 @@ static void fill_cacheinfo(struct cacheinfo **this_leaf,
+ 	}
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 	struct device_node *np = of_cpu_device_node_get(cpu);
+@@ -155,7 +155,7 @@ static int __init_cache_level(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
+@@ -187,6 +187,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
+diff --git a/arch/s390/include/asm/stacktrace.h b/arch/s390/include/asm/stacktrace.h
+index 3d8a4b94c620b..dd00d98804ec2 100644
+--- a/arch/s390/include/asm/stacktrace.h
++++ b/arch/s390/include/asm/stacktrace.h
+@@ -34,16 +34,6 @@ static inline bool on_stack(struct stack_info *info,
+ 	return addr >= info->begin && addr + len <= info->end;
+ }
+ 
+-static __always_inline unsigned long get_stack_pointer(struct task_struct *task,
+-						       struct pt_regs *regs)
+-{
+-	if (regs)
+-		return (unsigned long) kernel_stack_pointer(regs);
+-	if (task == current)
+-		return current_stack_pointer();
+-	return (unsigned long) task->thread.ksp;
+-}
+-
+ /*
+  * Stack layout of a C stack frame.
+  */
+@@ -74,6 +64,16 @@ struct stack_frame {
+ 	((unsigned long)__builtin_frame_address(0) -			\
+ 	 offsetof(struct stack_frame, back_chain))
+ 
++static __always_inline unsigned long get_stack_pointer(struct task_struct *task,
++						       struct pt_regs *regs)
++{
++	if (regs)
++		return (unsigned long)kernel_stack_pointer(regs);
++	if (task == current)
++		return current_frame_address();
++	return (unsigned long)task->thread.ksp;
++}
++
+ /*
+  * To keep this simple mark register 2-6 as being changed (volatile)
+  * by the called function, even though register 6 is saved/nonvolatile.
+diff --git a/arch/s390/include/asm/unwind.h b/arch/s390/include/asm/unwind.h
+index de9006b0cfebb..5ebf534ef7533 100644
+--- a/arch/s390/include/asm/unwind.h
++++ b/arch/s390/include/asm/unwind.h
+@@ -55,10 +55,10 @@ static inline bool unwind_error(struct unwind_state *state)
+ 	return state->error;
+ }
+ 
+-static inline void unwind_start(struct unwind_state *state,
+-				struct task_struct *task,
+-				struct pt_regs *regs,
+-				unsigned long first_frame)
++static __always_inline void unwind_start(struct unwind_state *state,
++					 struct task_struct *task,
++					 struct pt_regs *regs,
++					 unsigned long first_frame)
+ {
+ 	task = task ?: current;
+ 	first_frame = first_frame ?: get_stack_pointer(task, regs);
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index b9716a7e326d0..4c9b967290ae0 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -140,10 +140,10 @@ _LPP_OFFSET	= __LC_LPP
+ 	TSTMSK	__LC_MCCK_CODE,(MCCK_CODE_STG_ERROR|MCCK_CODE_STG_KEY_ERROR)
+ 	jnz	\errlabel
+ 	TSTMSK	__LC_MCCK_CODE,MCCK_CODE_STG_DEGRAD
+-	jz	oklabel\@
++	jz	.Loklabel\@
+ 	TSTMSK	__LC_MCCK_CODE,MCCK_CODE_STG_FAIL_ADDR
+ 	jnz	\errlabel
+-oklabel\@:
++.Loklabel\@:
+ 	.endm
+ 
+ #if IS_ENABLED(CONFIG_KVM)
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index ee23908f1b960..6f0d2d4dea74a 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -50,6 +50,7 @@
+ #include <linux/compat.h>
+ #include <linux/start_kernel.h>
+ #include <linux/hugetlb.h>
++#include <linux/kmemleak.h>
+ 
+ #include <asm/boot_data.h>
+ #include <asm/ipl.h>
+@@ -312,9 +313,12 @@ void *restart_stack;
+ unsigned long stack_alloc(void)
+ {
+ #ifdef CONFIG_VMAP_STACK
+-	return (unsigned long)__vmalloc_node(THREAD_SIZE, THREAD_SIZE,
+-			THREADINFO_GFP, NUMA_NO_NODE,
+-			__builtin_return_address(0));
++	void *ret;
++
++	ret = __vmalloc_node(THREAD_SIZE, THREAD_SIZE, THREADINFO_GFP,
++			     NUMA_NO_NODE, __builtin_return_address(0));
++	kmemleak_not_leak(ret);
++	return (unsigned long)ret;
+ #else
+ 	return __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
+ #endif
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index 4412d6febadef..6bf7bd4479aee 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1139,7 +1139,7 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 		rc = os_connect_socket(pdata->socket_path);
+ 	} while (rc == -EINTR);
+ 	if (rc < 0)
+-		return rc;
++		goto error_free;
+ 	vu_dev->sock = rc;
+ 
+ 	spin_lock_init(&vu_dev->sock_lock);
+@@ -1160,6 +1160,8 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 
+ error_init:
+ 	os_close_file(vu_dev->sock);
++error_free:
++	kfree(vu_dev);
+ 	return rc;
+ }
+ 
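
The virtio_uml fix applies the usual kernel error-unwinding idiom: each
failure past an allocation jumps to a label that releases everything acquired
so far, so the early return no longer leaks vu_dev. A compilable userspace
sketch of the pattern, with fake_connect() as a stand-in for the call that
can fail:

#include <stdlib.h>

static int fake_connect(void) { return -1; }	/* stand-in, always fails */

static int probe(void)
{
	char *dev = malloc(64);
	int rc;

	if (!dev)
		return -1;

	rc = fake_connect();
	if (rc < 0)
		goto error_free;	/* was "return rc": leaked dev */

	/* further setup; later failures would jump to labels below */
	free(dev);			/* a real probe would keep dev */
	return 0;

error_free:
	free(dev);
	return rc;
}

int main(void) { return probe() ? 1 : 0; }
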
+diff --git a/arch/um/kernel/skas/clone.c b/arch/um/kernel/skas/clone.c
+index 5afac0fef24ea..ff5061f291674 100644
+--- a/arch/um/kernel/skas/clone.c
++++ b/arch/um/kernel/skas/clone.c
+@@ -24,8 +24,7 @@
+ void __attribute__ ((__section__ (".__syscall_stub")))
+ stub_clone_handler(void)
+ {
+-	int stack;
+-	struct stub_data *data = (void *) ((unsigned long)&stack & ~(UM_KERN_PAGE_SIZE - 1));
++	struct stub_data *data = get_stub_page();
+ 	long err;
+ 
+ 	err = stub_syscall2(__NR_clone, CLONE_PARENT | CLONE_FILES | SIGCHLD,
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index d66af2950e06e..b5e36bd0425b5 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -985,7 +985,7 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+ 	this_leaf->priv = base->nb;
+ }
+ 
+-static int __init_cache_level(unsigned int cpu)
++int init_cache_level(unsigned int cpu)
+ {
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+ 
+@@ -1014,7 +1014,7 @@ static void get_cache_id(int cpu, struct _cpuid4_info_regs *id4_regs)
+ 	id4_regs->id = c->apicid >> index_msb;
+ }
+ 
+-static int __populate_cache_leaves(unsigned int cpu)
++int populate_cache_leaves(unsigned int cpu)
+ {
+ 	unsigned int idx, ret;
+ 	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+@@ -1033,6 +1033,3 @@ static int __populate_cache_leaves(unsigned int cpu)
+ 
+ 	return 0;
+ }
+-
+-DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level)
+-DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves)
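
All four cacheinfo hunks in this patch (arm64, mips, riscv, x86) have the same
shape: the arch helpers lose their double-underscore prefix and the
DEFINE_SMP_CALL_CACHE_FUNCTION() wrappers are dropped, leaving plain
init_cache_level()/populate_cache_leaves() definitions, presumably so they
override the __weak defaults in the generic drivers/base/cacheinfo.c directly.
A three-file sketch of that weak/strong linkage pattern (compile all three
together; the strong definition wins at link time):

/* generic.c: weak fallback used when no override is linked in */
__attribute__((weak)) int init_cache_level(unsigned int cpu)
{
	(void)cpu;
	return -1;			/* "no cache info" */
}

/* arch.c: strong definition, replaces the weak one at link time */
int init_cache_level(unsigned int cpu)
{
	(void)cpu;
	return 0;
}

/* main.c */
#include <stdio.h>
int init_cache_level(unsigned int cpu);
int main(void)
{
	printf("init_cache_level(0) = %d\n", init_cache_level(0));
	return 0;
}
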
+diff --git a/arch/x86/um/shared/sysdep/stub_32.h b/arch/x86/um/shared/sysdep/stub_32.h
+index b95db9daf0e82..4c6c2be0c8997 100644
+--- a/arch/x86/um/shared/sysdep/stub_32.h
++++ b/arch/x86/um/shared/sysdep/stub_32.h
+@@ -101,4 +101,16 @@ static inline void remap_stack_and_trap(void)
+ 		"memory");
+ }
+ 
++static __always_inline void *get_stub_page(void)
++{
++	unsigned long ret;
++
++	asm volatile (
++		"movl %%esp,%0 ;"
++		"andl %1,%0"
++		: "=a" (ret)
++		: "g" (~(UM_KERN_PAGE_SIZE - 1)));
++
++	return (void *)ret;
++}
+ #endif
+diff --git a/arch/x86/um/shared/sysdep/stub_64.h b/arch/x86/um/shared/sysdep/stub_64.h
+index 6e2626b77a2e4..e9c4b2b388039 100644
+--- a/arch/x86/um/shared/sysdep/stub_64.h
++++ b/arch/x86/um/shared/sysdep/stub_64.h
+@@ -108,4 +108,16 @@ static inline void remap_stack_and_trap(void)
+ 		__syscall_clobber, "r10", "r8", "r9");
+ }
+ 
++static __always_inline void *get_stub_page(void)
++{
++	unsigned long ret;
++
++	asm volatile (
++		"movq %%rsp,%0 ;"
++		"andq %1,%0"
++		: "=a" (ret)
++		: "g" (~(UM_KERN_PAGE_SIZE - 1)));
++
++	return (void *)ret;
++}
+ #endif
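
get_stub_page() in both stub headers rounds the current stack pointer down to
its containing page by clearing the low page-offset bits, the same mask the
clone.c and stub_segv.c hunks nearby delete from their open-coded versions;
doing it in inline asm presumably avoids depending on where the compiler
places a local variable inside these tiny stubs. The arithmetic itself, as a
runnable userspace demonstration:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int anchor;	/* any object living on the current stack */
	unsigned long page = (unsigned long)sysconf(_SC_PAGESIZE);
	unsigned long base = (unsigned long)&anchor & ~(page - 1);

	/* base is the start of the page the stack currently occupies */
	printf("stack address %p -> page base %#lx\n",
	       (void *)&anchor, base);
	return 0;
}
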
+diff --git a/arch/x86/um/stub_segv.c b/arch/x86/um/stub_segv.c
+index 21836eaf17259..f7eefba034f96 100644
+--- a/arch/x86/um/stub_segv.c
++++ b/arch/x86/um/stub_segv.c
+@@ -11,9 +11,8 @@
+ void __attribute__ ((__section__ (".__syscall_stub")))
+ stub_segv_handler(int sig, siginfo_t *info, void *p)
+ {
+-	int stack;
++	struct faultinfo *f = get_stub_page();
+ 	ucontext_t *uc = p;
+-	struct faultinfo *f = (void *)(((unsigned long)&stack) & ~(UM_KERN_PAGE_SIZE - 1));
+ 
+ 	GET_FAULTINFO_FROM_MC(*f, &uc->uc_mcontext);
+ 	trap_myself();
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9d4fdc2be88a5..9c64f0025a562 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2135,6 +2135,18 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+ 	}
+ }
+ 
++/*
++ * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
++ * queues. This is important for md arrays to benefit from merging
++ * requests.
++ */
++static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
++{
++	if (plug->multiple_queues)
++		return BLK_MAX_REQUEST_COUNT * 4;
++	return BLK_MAX_REQUEST_COUNT;
++}
++
+ /**
+  * blk_mq_submit_bio - Create and send a request to block device.
+  * @bio: Bio pointer.
+@@ -2231,7 +2243,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
+ 		else
+ 			last = list_entry_rq(plug->mq_list.prev);
+ 
+-		if (request_count >= BLK_MAX_REQUEST_COUNT || (last &&
++		if (request_count >= blk_plug_max_rq_count(plug) || (last &&
+ 		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
+ 			blk_flush_plug_list(plug, false);
+ 			trace_block_plug(q);
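
The effect of blk_plug_max_rq_count(): a plug that spans multiple queues now
flushes after four times as many requests. Assuming BLK_MAX_REQUEST_COUNT is
still 16 in this tree, that is 64 plugged requests instead of 16, which gives
md arrays a larger window for merging. The selection logic, reduced to a
standalone sketch:

#include <stdio.h>
#include <stdbool.h>

#define BLK_MAX_REQUEST_COUNT 16	/* assumed value; see block/blk.h */

static unsigned short plug_max_rq_count(bool multiple_queues)
{
	return multiple_queues ? BLK_MAX_REQUEST_COUNT * 4
			       : BLK_MAX_REQUEST_COUNT;
}

int main(void)
{
	printf("single=%u multi=%u\n",
	       plug_max_rq_count(false), plug_max_rq_count(true));
	return 0;
}
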
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 55c49015e5333..7c4e7993ba970 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2458,6 +2458,7 @@ int blk_throtl_init(struct request_queue *q)
+ void blk_throtl_exit(struct request_queue *q)
+ {
+ 	BUG_ON(!q->td);
++	del_timer_sync(&q->td->service_queue.pending_timer);
+ 	throtl_shutdown_wq(q);
+ 	blkcg_deactivate_policy(q, &blkcg_policy_throtl);
+ 	free_percpu(q->td->latency_buckets[READ]);
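
The one-line blk-throttle change encodes a general teardown rule: stop
asynchronous work synchronously (here del_timer_sync() on the pending timer)
before freeing the state that work touches, or a late firing dereferences
freed memory. A userspace analogue, with a worker thread standing in for the
kernel timer:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct state {
	atomic_bool stop;
	int *data;
};

static void *worker(void *arg)		/* stands in for the timer */
{
	struct state *st = arg;

	while (!atomic_load(&st->stop))
		if (*st->data == -1)	/* periodic "work" on st->data */
			break;
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct state st = { .data = malloc(sizeof(int)) };

	if (!st.data)
		return 1;
	*st.data = 42;
	pthread_create(&t, NULL, worker, &st);

	atomic_store(&st.stop, 1);	/* like del_timer_sync(): stop... */
	pthread_join(t, NULL);		/* ...and wait until it is idle */
	free(st.data);			/* only now is freeing safe */
	return 0;
}
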
+diff --git a/block/genhd.c b/block/genhd.c
+index 298ee78c1bdac..9aba654044169 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -164,6 +164,7 @@ static struct blk_major_name {
+ 	void (*probe)(dev_t devt);
+ } *major_names[BLKDEV_MAJOR_HASH_SIZE];
+ static DEFINE_MUTEX(major_names_lock);
++static DEFINE_SPINLOCK(major_names_spinlock);
+ 
+ /* index in the above - for now: assume no multimajor ranges */
+ static inline int major_to_index(unsigned major)
+@@ -176,11 +177,11 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
+ {
+ 	struct blk_major_name *dp;
+ 
+-	mutex_lock(&major_names_lock);
++	spin_lock(&major_names_spinlock);
+ 	for (dp = major_names[major_to_index(offset)]; dp; dp = dp->next)
+ 		if (dp->major == offset)
+ 			seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
+-	mutex_unlock(&major_names_lock);
++	spin_unlock(&major_names_spinlock);
+ }
+ #endif /* CONFIG_PROC_FS */
+ 
+@@ -252,6 +253,7 @@ int __register_blkdev(unsigned int major, const char *name,
+ 	p->next = NULL;
+ 	index = major_to_index(major);
+ 
++	spin_lock(&major_names_spinlock);
+ 	for (n = &major_names[index]; *n; n = &(*n)->next) {
+ 		if ((*n)->major == major)
+ 			break;
+@@ -260,6 +262,7 @@ int __register_blkdev(unsigned int major, const char *name,
+ 		*n = p;
+ 	else
+ 		ret = -EBUSY;
++	spin_unlock(&major_names_spinlock);
+ 
+ 	if (ret < 0) {
+ 		printk("register_blkdev: cannot get major %u for %s\n",
+@@ -279,6 +282,7 @@ void unregister_blkdev(unsigned int major, const char *name)
+ 	int index = major_to_index(major);
+ 
+ 	mutex_lock(&major_names_lock);
++	spin_lock(&major_names_spinlock);
+ 	for (n = &major_names[index]; *n; n = &(*n)->next)
+ 		if ((*n)->major == major)
+ 			break;
+@@ -288,6 +292,7 @@ void unregister_blkdev(unsigned int major, const char *name)
+ 		p = *n;
+ 		*n = p->next;
+ 	}
++	spin_unlock(&major_names_spinlock);
+ 	mutex_unlock(&major_names_lock);
+ 	kfree(p);
+ }
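
The genhd change splits the locking by context: registration and
unregistration keep the mutex (they may sleep), while the list manipulation
and the blkdev_show() walk move under a new spinlock, so the /proc reader
takes a cheap non-sleeping lock. A compilable sketch of a spinlock-protected
list walk, with a POSIX spinlock as a stand-in for the kernel's:

#include <pthread.h>
#include <stdio.h>

struct name {
	int major;
	const char *name;
	struct name *next;
};

static pthread_spinlock_t lock;
static struct name loop = { 7, "loop", NULL };
static struct name sd = { 8, "sd", &loop };
static struct name *head = &sd;

static void show(void)
{
	struct name *dp;

	pthread_spin_lock(&lock);	/* short, non-sleeping section */
	for (dp = head; dp; dp = dp->next)
		printf("%3d %s\n", dp->major, dp->name);
	pthread_spin_unlock(&lock);
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	show();
	pthread_spin_destroy(&lock);
	return 0;
}
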
+diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
+index 3a308461246a8..bd92b549fd5a4 100644
+--- a/drivers/acpi/x86/s2idle.c
++++ b/drivers/acpi/x86/s2idle.c
+@@ -449,25 +449,30 @@ int acpi_s2idle_prepare_late(void)
+ 	if (pm_debug_messages_on)
+ 		lpi_check_constraints();
+ 
+-	if (lps0_dsm_func_mask_microsoft > 0) {
++	/* Screen off */
++	if (lps0_dsm_func_mask > 0)
++		acpi_sleep_run_lps0_dsm(acpi_s2idle_vendor_amd() ?
++					ACPI_LPS0_SCREEN_OFF_AMD :
++					ACPI_LPS0_SCREEN_OFF,
++					lps0_dsm_func_mask, lps0_dsm_guid);
++
++	if (lps0_dsm_func_mask_microsoft > 0)
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF,
+ 				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_ENTRY,
+-				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
++
++	/* LPS0 entry */
++	if (lps0_dsm_func_mask > 0)
++		acpi_sleep_run_lps0_dsm(acpi_s2idle_vendor_amd() ?
++					ACPI_LPS0_ENTRY_AMD :
++					ACPI_LPS0_ENTRY,
++					lps0_dsm_func_mask, lps0_dsm_guid);
++	if (lps0_dsm_func_mask_microsoft > 0) {
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY,
+ 				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+-	} else if (acpi_s2idle_vendor_amd()) {
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF_AMD,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY_AMD,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-	} else {
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
++		/* modern standby entry */
++		acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_ENTRY,
++				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+ 	}
+-
+ 	return 0;
+ }
+ 
+@@ -476,24 +481,30 @@ void acpi_s2idle_restore_early(void)
+ 	if (!lps0_device_handle || sleep_no_lps0)
+ 		return;
+ 
+-	if (lps0_dsm_func_mask_microsoft > 0) {
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT,
+-				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
++	/* Modern standby exit */
++	if (lps0_dsm_func_mask_microsoft > 0)
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_EXIT,
+ 				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON,
+-				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+-	} else if (acpi_s2idle_vendor_amd()) {
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT_AMD,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON_AMD,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-	} else {
++
++	/* LPS0 exit */
++	if (lps0_dsm_func_mask > 0)
++		acpi_sleep_run_lps0_dsm(acpi_s2idle_vendor_amd() ?
++					ACPI_LPS0_EXIT_AMD :
++					ACPI_LPS0_EXIT,
++					lps0_dsm_func_mask, lps0_dsm_guid);
++	if (lps0_dsm_func_mask_microsoft > 0)
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
++				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
++
++	/* Screen on */
++	if (lps0_dsm_func_mask_microsoft > 0)
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON,
+-				lps0_dsm_func_mask, lps0_dsm_guid);
+-	}
++				lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
++	if (lps0_dsm_func_mask > 0)
++		acpi_sleep_run_lps0_dsm(acpi_s2idle_vendor_amd() ?
++					ACPI_LPS0_SCREEN_ON_AMD :
++					ACPI_LPS0_SCREEN_ON,
++					lps0_dsm_func_mask, lps0_dsm_guid);
+ }
+ 
+ static const struct platform_s2idle_ops acpi_s2idle_ops_lps0 = {
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index d568772152c2d..cbea78e79f3df 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1642,7 +1642,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	}
+ 
+ 	dev->power.may_skip_resume = true;
+-	dev->power.must_resume = false;
++	dev->power.must_resume = !dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME);
+ 
+ 	dpm_watchdog_set(&wd, dev);
+ 	device_lock(dev);
+diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
+index c84be0028f635..26798da661bd4 100644
+--- a/drivers/block/n64cart.c
++++ b/drivers/block/n64cart.c
+@@ -129,8 +129,8 @@ static int __init n64cart_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	reg_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (!reg_base)
+-		return -EINVAL;
++	if (IS_ERR(reg_base))
++		return PTR_ERR(reg_base);
+ 
+ 	disk = blk_alloc_disk(NUMA_NO_NODE);
+ 	if (!disk)
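
The n64cart fix is about the API contract: devm_platform_ioremap_resource()
reports failure through an encoded error pointer, never NULL, so the old
!reg_base test could not trigger and IS_ERR()/PTR_ERR() are required. A
self-contained reimplementation of the encoding, simplified from the kernel's
include/linux/err.h, to show why the check works:

#include <stdio.h>

#define MAX_ERRNO 4095	/* top page of the address space is reserved */

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *ioremap_or_err(int fail)
{
	static int backing;
	return fail ? ERR_PTR(-22 /* -EINVAL */) : (void *)&backing;
}

int main(void)
{
	void *p = ioremap_or_err(1);

	if (IS_ERR(p))			/* a NULL check would miss this */
		printf("error %ld\n", PTR_ERR(p));
	return 0;
}
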
+diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
+index 32954059b37ba..d1aaabc940f3c 100644
+--- a/drivers/cxl/Makefile
++++ b/drivers/cxl/Makefile
+@@ -1,11 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+-obj-$(CONFIG_CXL_BUS) += cxl_core.o
++obj-$(CONFIG_CXL_BUS) += core/
+ obj-$(CONFIG_CXL_MEM) += cxl_pci.o
+ obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
+ obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o
+ 
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
+-cxl_core-y := core.o
+ cxl_pci-y := pci.o
+ cxl_acpi-y := acpi.o
+ cxl_pmem-y := pmem.o
+diff --git a/drivers/cxl/core.c b/drivers/cxl/core.c
+deleted file mode 100644
+index a2e4d54fc7bc4..0000000000000
+--- a/drivers/cxl/core.c
++++ /dev/null
+@@ -1,1067 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+-#include <linux/io-64-nonatomic-lo-hi.h>
+-#include <linux/device.h>
+-#include <linux/module.h>
+-#include <linux/pci.h>
+-#include <linux/slab.h>
+-#include <linux/idr.h>
+-#include "cxl.h"
+-#include "mem.h"
+-
+-/**
+- * DOC: cxl core
+- *
+- * The CXL core provides a sysfs hierarchy for control devices and a rendezvous
+- * point for cross-device interleave coordination through cxl ports.
+- */
+-
+-static DEFINE_IDA(cxl_port_ida);
+-
+-static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
+-			    char *buf)
+-{
+-	return sysfs_emit(buf, "%s\n", dev->type->name);
+-}
+-static DEVICE_ATTR_RO(devtype);
+-
+-static struct attribute *cxl_base_attributes[] = {
+-	&dev_attr_devtype.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group cxl_base_attribute_group = {
+-	.attrs = cxl_base_attributes,
+-};
+-
+-static ssize_t start_show(struct device *dev, struct device_attribute *attr,
+-			  char *buf)
+-{
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+-	return sysfs_emit(buf, "%#llx\n", cxld->range.start);
+-}
+-static DEVICE_ATTR_RO(start);
+-
+-static ssize_t size_show(struct device *dev, struct device_attribute *attr,
+-			char *buf)
+-{
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+-	return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range));
+-}
+-static DEVICE_ATTR_RO(size);
+-
+-#define CXL_DECODER_FLAG_ATTR(name, flag)                            \
+-static ssize_t name##_show(struct device *dev,                       \
+-			   struct device_attribute *attr, char *buf) \
+-{                                                                    \
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);              \
+-                                                                     \
+-	return sysfs_emit(buf, "%s\n",                               \
+-			  (cxld->flags & (flag)) ? "1" : "0");       \
+-}                                                                    \
+-static DEVICE_ATTR_RO(name)
+-
+-CXL_DECODER_FLAG_ATTR(cap_pmem, CXL_DECODER_F_PMEM);
+-CXL_DECODER_FLAG_ATTR(cap_ram, CXL_DECODER_F_RAM);
+-CXL_DECODER_FLAG_ATTR(cap_type2, CXL_DECODER_F_TYPE2);
+-CXL_DECODER_FLAG_ATTR(cap_type3, CXL_DECODER_F_TYPE3);
+-CXL_DECODER_FLAG_ATTR(locked, CXL_DECODER_F_LOCK);
+-
+-static ssize_t target_type_show(struct device *dev,
+-				struct device_attribute *attr, char *buf)
+-{
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+-	switch (cxld->target_type) {
+-	case CXL_DECODER_ACCELERATOR:
+-		return sysfs_emit(buf, "accelerator\n");
+-	case CXL_DECODER_EXPANDER:
+-		return sysfs_emit(buf, "expander\n");
+-	}
+-	return -ENXIO;
+-}
+-static DEVICE_ATTR_RO(target_type);
+-
+-static ssize_t target_list_show(struct device *dev,
+-			       struct device_attribute *attr, char *buf)
+-{
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-	ssize_t offset = 0;
+-	int i, rc = 0;
+-
+-	device_lock(dev);
+-	for (i = 0; i < cxld->interleave_ways; i++) {
+-		struct cxl_dport *dport = cxld->target[i];
+-		struct cxl_dport *next = NULL;
+-
+-		if (!dport)
+-			break;
+-
+-		if (i + 1 < cxld->interleave_ways)
+-			next = cxld->target[i + 1];
+-		rc = sysfs_emit_at(buf, offset, "%d%s", dport->port_id,
+-				   next ? "," : "");
+-		if (rc < 0)
+-			break;
+-		offset += rc;
+-	}
+-	device_unlock(dev);
+-
+-	if (rc < 0)
+-		return rc;
+-
+-	rc = sysfs_emit_at(buf, offset, "\n");
+-	if (rc < 0)
+-		return rc;
+-
+-	return offset + rc;
+-}
+-static DEVICE_ATTR_RO(target_list);
+-
+-static struct attribute *cxl_decoder_base_attrs[] = {
+-	&dev_attr_start.attr,
+-	&dev_attr_size.attr,
+-	&dev_attr_locked.attr,
+-	&dev_attr_target_list.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_base_attribute_group = {
+-	.attrs = cxl_decoder_base_attrs,
+-};
+-
+-static struct attribute *cxl_decoder_root_attrs[] = {
+-	&dev_attr_cap_pmem.attr,
+-	&dev_attr_cap_ram.attr,
+-	&dev_attr_cap_type2.attr,
+-	&dev_attr_cap_type3.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_root_attribute_group = {
+-	.attrs = cxl_decoder_root_attrs,
+-};
+-
+-static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
+-	&cxl_decoder_root_attribute_group,
+-	&cxl_decoder_base_attribute_group,
+-	&cxl_base_attribute_group,
+-	NULL,
+-};
+-
+-static struct attribute *cxl_decoder_switch_attrs[] = {
+-	&dev_attr_target_type.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_switch_attribute_group = {
+-	.attrs = cxl_decoder_switch_attrs,
+-};
+-
+-static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = {
+-	&cxl_decoder_switch_attribute_group,
+-	&cxl_decoder_base_attribute_group,
+-	&cxl_base_attribute_group,
+-	NULL,
+-};
+-
+-static void cxl_decoder_release(struct device *dev)
+-{
+-	struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-	struct cxl_port *port = to_cxl_port(dev->parent);
+-
+-	ida_free(&port->decoder_ida, cxld->id);
+-	kfree(cxld);
+-}
+-
+-static const struct device_type cxl_decoder_switch_type = {
+-	.name = "cxl_decoder_switch",
+-	.release = cxl_decoder_release,
+-	.groups = cxl_decoder_switch_attribute_groups,
+-};
+-
+-static const struct device_type cxl_decoder_root_type = {
+-	.name = "cxl_decoder_root",
+-	.release = cxl_decoder_release,
+-	.groups = cxl_decoder_root_attribute_groups,
+-};
+-
+-bool is_root_decoder(struct device *dev)
+-{
+-	return dev->type == &cxl_decoder_root_type;
+-}
+-EXPORT_SYMBOL_GPL(is_root_decoder);
+-
+-struct cxl_decoder *to_cxl_decoder(struct device *dev)
+-{
+-	if (dev_WARN_ONCE(dev, dev->type->release != cxl_decoder_release,
+-			  "not a cxl_decoder device\n"))
+-		return NULL;
+-	return container_of(dev, struct cxl_decoder, dev);
+-}
+-EXPORT_SYMBOL_GPL(to_cxl_decoder);
+-
+-static void cxl_dport_release(struct cxl_dport *dport)
+-{
+-	list_del(&dport->list);
+-	put_device(dport->dport);
+-	kfree(dport);
+-}
+-
+-static void cxl_port_release(struct device *dev)
+-{
+-	struct cxl_port *port = to_cxl_port(dev);
+-	struct cxl_dport *dport, *_d;
+-
+-	device_lock(dev);
+-	list_for_each_entry_safe(dport, _d, &port->dports, list)
+-		cxl_dport_release(dport);
+-	device_unlock(dev);
+-	ida_free(&cxl_port_ida, port->id);
+-	kfree(port);
+-}
+-
+-static const struct attribute_group *cxl_port_attribute_groups[] = {
+-	&cxl_base_attribute_group,
+-	NULL,
+-};
+-
+-static const struct device_type cxl_port_type = {
+-	.name = "cxl_port",
+-	.release = cxl_port_release,
+-	.groups = cxl_port_attribute_groups,
+-};
+-
+-struct cxl_port *to_cxl_port(struct device *dev)
+-{
+-	if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type,
+-			  "not a cxl_port device\n"))
+-		return NULL;
+-	return container_of(dev, struct cxl_port, dev);
+-}
+-
+-static void unregister_port(void *_port)
+-{
+-	struct cxl_port *port = _port;
+-	struct cxl_dport *dport;
+-
+-	device_lock(&port->dev);
+-	list_for_each_entry(dport, &port->dports, list) {
+-		char link_name[CXL_TARGET_STRLEN];
+-
+-		if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d",
+-			     dport->port_id) >= CXL_TARGET_STRLEN)
+-			continue;
+-		sysfs_remove_link(&port->dev.kobj, link_name);
+-	}
+-	device_unlock(&port->dev);
+-	device_unregister(&port->dev);
+-}
+-
+-static void cxl_unlink_uport(void *_port)
+-{
+-	struct cxl_port *port = _port;
+-
+-	sysfs_remove_link(&port->dev.kobj, "uport");
+-}
+-
+-static int devm_cxl_link_uport(struct device *host, struct cxl_port *port)
+-{
+-	int rc;
+-
+-	rc = sysfs_create_link(&port->dev.kobj, &port->uport->kobj, "uport");
+-	if (rc)
+-		return rc;
+-	return devm_add_action_or_reset(host, cxl_unlink_uport, port);
+-}
+-
+-static struct cxl_port *cxl_port_alloc(struct device *uport,
+-				       resource_size_t component_reg_phys,
+-				       struct cxl_port *parent_port)
+-{
+-	struct cxl_port *port;
+-	struct device *dev;
+-	int rc;
+-
+-	port = kzalloc(sizeof(*port), GFP_KERNEL);
+-	if (!port)
+-		return ERR_PTR(-ENOMEM);
+-
+-	rc = ida_alloc(&cxl_port_ida, GFP_KERNEL);
+-	if (rc < 0)
+-		goto err;
+-	port->id = rc;
+-
+-	/*
+-	 * The top-level cxl_port "cxl_root" does not have a cxl_port as
+-	 * its parent and it does not have any corresponding component
+-	 * registers as its decode is described by a fixed platform
+-	 * description.
+-	 */
+-	dev = &port->dev;
+-	if (parent_port)
+-		dev->parent = &parent_port->dev;
+-	else
+-		dev->parent = uport;
+-
+-	port->uport = uport;
+-	port->component_reg_phys = component_reg_phys;
+-	ida_init(&port->decoder_ida);
+-	INIT_LIST_HEAD(&port->dports);
+-
+-	device_initialize(dev);
+-	device_set_pm_not_required(dev);
+-	dev->bus = &cxl_bus_type;
+-	dev->type = &cxl_port_type;
+-
+-	return port;
+-
+-err:
+-	kfree(port);
+-	return ERR_PTR(rc);
+-}
+-
+-/**
+- * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy
+- * @host: host device for devm operations
+- * @uport: "physical" device implementing this upstream port
+- * @component_reg_phys: (optional) for configurable cxl_port instances
+- * @parent_port: next hop up in the CXL memory decode hierarchy
+- */
+-struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
+-				   resource_size_t component_reg_phys,
+-				   struct cxl_port *parent_port)
+-{
+-	struct cxl_port *port;
+-	struct device *dev;
+-	int rc;
+-
+-	port = cxl_port_alloc(uport, component_reg_phys, parent_port);
+-	if (IS_ERR(port))
+-		return port;
+-
+-	dev = &port->dev;
+-	if (parent_port)
+-		rc = dev_set_name(dev, "port%d", port->id);
+-	else
+-		rc = dev_set_name(dev, "root%d", port->id);
+-	if (rc)
+-		goto err;
+-
+-	rc = device_add(dev);
+-	if (rc)
+-		goto err;
+-
+-	rc = devm_add_action_or_reset(host, unregister_port, port);
+-	if (rc)
+-		return ERR_PTR(rc);
+-
+-	rc = devm_cxl_link_uport(host, port);
+-	if (rc)
+-		return ERR_PTR(rc);
+-
+-	return port;
+-
+-err:
+-	put_device(dev);
+-	return ERR_PTR(rc);
+-}
+-EXPORT_SYMBOL_GPL(devm_cxl_add_port);
+-
+-static struct cxl_dport *find_dport(struct cxl_port *port, int id)
+-{
+-	struct cxl_dport *dport;
+-
+-	device_lock_assert(&port->dev);
+-	list_for_each_entry (dport, &port->dports, list)
+-		if (dport->port_id == id)
+-			return dport;
+-	return NULL;
+-}
+-
+-static int add_dport(struct cxl_port *port, struct cxl_dport *new)
+-{
+-	struct cxl_dport *dup;
+-
+-	device_lock(&port->dev);
+-	dup = find_dport(port, new->port_id);
+-	if (dup)
+-		dev_err(&port->dev,
+-			"unable to add dport%d-%s non-unique port id (%s)\n",
+-			new->port_id, dev_name(new->dport),
+-			dev_name(dup->dport));
+-	else
+-		list_add_tail(&new->list, &port->dports);
+-	device_unlock(&port->dev);
+-
+-	return dup ? -EEXIST : 0;
+-}
+-
+-/**
+- * cxl_add_dport - append downstream port data to a cxl_port
+- * @port: the cxl_port that references this dport
+- * @dport_dev: firmware or PCI device representing the dport
+- * @port_id: identifier for this dport in a decoder's target list
+- * @component_reg_phys: optional location of CXL component registers
+- *
+- * Note that all allocations and links are undone by cxl_port deletion
+- * and release.
+- */
+-int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
+-		  resource_size_t component_reg_phys)
+-{
+-	char link_name[CXL_TARGET_STRLEN];
+-	struct cxl_dport *dport;
+-	int rc;
+-
+-	if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d", port_id) >=
+-	    CXL_TARGET_STRLEN)
+-		return -EINVAL;
+-
+-	dport = kzalloc(sizeof(*dport), GFP_KERNEL);
+-	if (!dport)
+-		return -ENOMEM;
+-
+-	INIT_LIST_HEAD(&dport->list);
+-	dport->dport = get_device(dport_dev);
+-	dport->port_id = port_id;
+-	dport->component_reg_phys = component_reg_phys;
+-	dport->port = port;
+-
+-	rc = add_dport(port, dport);
+-	if (rc)
+-		goto err;
+-
+-	rc = sysfs_create_link(&port->dev.kobj, &dport_dev->kobj, link_name);
+-	if (rc)
+-		goto err;
+-
+-	return 0;
+-err:
+-	cxl_dport_release(dport);
+-	return rc;
+-}
+-EXPORT_SYMBOL_GPL(cxl_add_dport);
+-
+-static struct cxl_decoder *
+-cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
+-		  resource_size_t len, int interleave_ways,
+-		  int interleave_granularity, enum cxl_decoder_type type,
+-		  unsigned long flags)
+-{
+-	struct cxl_decoder *cxld;
+-	struct device *dev;
+-	int rc = 0;
+-
+-	if (interleave_ways < 1)
+-		return ERR_PTR(-EINVAL);
+-
+-	device_lock(&port->dev);
+-	if (list_empty(&port->dports))
+-		rc = -EINVAL;
+-	device_unlock(&port->dev);
+-	if (rc)
+-		return ERR_PTR(rc);
+-
+-	cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
+-	if (!cxld)
+-		return ERR_PTR(-ENOMEM);
+-
+-	rc = ida_alloc(&port->decoder_ida, GFP_KERNEL);
+-	if (rc < 0)
+-		goto err;
+-
+-	*cxld = (struct cxl_decoder) {
+-		.id = rc,
+-		.range = {
+-			.start = base,
+-			.end = base + len - 1,
+-		},
+-		.flags = flags,
+-		.interleave_ways = interleave_ways,
+-		.interleave_granularity = interleave_granularity,
+-		.target_type = type,
+-	};
+-
+-	/* handle implied target_list */
+-	if (interleave_ways == 1)
+-		cxld->target[0] =
+-			list_first_entry(&port->dports, struct cxl_dport, list);
+-	dev = &cxld->dev;
+-	device_initialize(dev);
+-	device_set_pm_not_required(dev);
+-	dev->parent = &port->dev;
+-	dev->bus = &cxl_bus_type;
+-
+-	/* root ports do not have a cxl_port_type parent */
+-	if (port->dev.parent->type == &cxl_port_type)
+-		dev->type = &cxl_decoder_switch_type;
+-	else
+-		dev->type = &cxl_decoder_root_type;
+-
+-	return cxld;
+-err:
+-	kfree(cxld);
+-	return ERR_PTR(rc);
+-}
+-
+-static void unregister_dev(void *dev)
+-{
+-	device_unregister(dev);
+-}
+-
+-struct cxl_decoder *
+-devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
+-		     resource_size_t base, resource_size_t len,
+-		     int interleave_ways, int interleave_granularity,
+-		     enum cxl_decoder_type type, unsigned long flags)
+-{
+-	struct cxl_decoder *cxld;
+-	struct device *dev;
+-	int rc;
+-
+-	cxld = cxl_decoder_alloc(port, nr_targets, base, len, interleave_ways,
+-				 interleave_granularity, type, flags);
+-	if (IS_ERR(cxld))
+-		return cxld;
+-
+-	dev = &cxld->dev;
+-	rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id);
+-	if (rc)
+-		goto err;
+-
+-	rc = device_add(dev);
+-	if (rc)
+-		goto err;
+-
+-	rc = devm_add_action_or_reset(host, unregister_dev, dev);
+-	if (rc)
+-		return ERR_PTR(rc);
+-	return cxld;
+-
+-err:
+-	put_device(dev);
+-	return ERR_PTR(rc);
+-}
+-EXPORT_SYMBOL_GPL(devm_cxl_add_decoder);
+-
+-/**
+- * cxl_probe_component_regs() - Detect CXL Component register blocks
+- * @dev: Host device of the @base mapping
+- * @base: Mapping containing the HDM Decoder Capability Header
+- * @map: Map object describing the register block information found
+- *
+- * See CXL 2.0 8.2.4 Component Register Layout and Definition
+- * See CXL 2.0 8.2.5.5 CXL Device Register Interface
+- *
+- * Probe for component register information and return it in map object.
+- */
+-void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+-			      struct cxl_component_reg_map *map)
+-{
+-	int cap, cap_count;
+-	u64 cap_array;
+-
+-	*map = (struct cxl_component_reg_map) { 0 };
+-
+-	/*
+-	 * CXL.cache and CXL.mem registers are at offset 0x1000 as defined in
+-	 * CXL 2.0 8.2.4 Table 141.
+-	 */
+-	base += CXL_CM_OFFSET;
+-
+-	cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
+-
+-	if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
+-		dev_err(dev,
+-			"Couldn't locate the CXL.cache and CXL.mem capability array header./n");
+-		return;
+-	}
+-
+-	/* It's assumed that future versions will be backward compatible */
+-	cap_count = FIELD_GET(CXL_CM_CAP_HDR_ARRAY_SIZE_MASK, cap_array);
+-
+-	for (cap = 1; cap <= cap_count; cap++) {
+-		void __iomem *register_block;
+-		u32 hdr;
+-		int decoder_cnt;
+-		u16 cap_id, offset;
+-		u32 length;
+-
+-		hdr = readl(base + cap * 0x4);
+-
+-		cap_id = FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, hdr);
+-		offset = FIELD_GET(CXL_CM_CAP_PTR_MASK, hdr);
+-		register_block = base + offset;
+-
+-		switch (cap_id) {
+-		case CXL_CM_CAP_CAP_ID_HDM:
+-			dev_dbg(dev, "found HDM decoder capability (0x%x)\n",
+-				offset);
+-
+-			hdr = readl(register_block);
+-
+-			decoder_cnt = cxl_hdm_decoder_count(hdr);
+-			length = 0x20 * decoder_cnt + 0x10;
+-
+-			map->hdm_decoder.valid = true;
+-			map->hdm_decoder.offset = CXL_CM_OFFSET + offset;
+-			map->hdm_decoder.size = length;
+-			break;
+-		default:
+-			dev_dbg(dev, "Unknown CM cap ID: %d (0x%x)\n", cap_id,
+-				offset);
+-			break;
+-		}
+-	}
+-}
+-EXPORT_SYMBOL_GPL(cxl_probe_component_regs);
+-
+-static void cxl_nvdimm_bridge_release(struct device *dev)
+-{
+-	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
+-
+-	kfree(cxl_nvb);
+-}
+-
+-static const struct attribute_group *cxl_nvdimm_bridge_attribute_groups[] = {
+-	&cxl_base_attribute_group,
+-	NULL,
+-};
+-
+-static const struct device_type cxl_nvdimm_bridge_type = {
+-	.name = "cxl_nvdimm_bridge",
+-	.release = cxl_nvdimm_bridge_release,
+-	.groups = cxl_nvdimm_bridge_attribute_groups,
+-};
+-
+-struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
+-{
+-	if (dev_WARN_ONCE(dev, dev->type != &cxl_nvdimm_bridge_type,
+-			  "not a cxl_nvdimm_bridge device\n"))
+-		return NULL;
+-	return container_of(dev, struct cxl_nvdimm_bridge, dev);
+-}
+-EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
+-
+-static struct cxl_nvdimm_bridge *
+-cxl_nvdimm_bridge_alloc(struct cxl_port *port)
+-{
+-	struct cxl_nvdimm_bridge *cxl_nvb;
+-	struct device *dev;
+-
+-	cxl_nvb = kzalloc(sizeof(*cxl_nvb), GFP_KERNEL);
+-	if (!cxl_nvb)
+-		return ERR_PTR(-ENOMEM);
+-
+-	dev = &cxl_nvb->dev;
+-	cxl_nvb->port = port;
+-	cxl_nvb->state = CXL_NVB_NEW;
+-	device_initialize(dev);
+-	device_set_pm_not_required(dev);
+-	dev->parent = &port->dev;
+-	dev->bus = &cxl_bus_type;
+-	dev->type = &cxl_nvdimm_bridge_type;
+-
+-	return cxl_nvb;
+-}
+-
+-static void unregister_nvb(void *_cxl_nvb)
+-{
+-	struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
+-	bool flush;
+-
+-	/*
+-	 * If the bridge was ever activated then there might be in-flight state
+-	 * work to flush. Once the state has been changed to 'dead' then no new
+-	 * work can be queued by user-triggered bind.
+-	 */
+-	device_lock(&cxl_nvb->dev);
+-	flush = cxl_nvb->state != CXL_NVB_NEW;
+-	cxl_nvb->state = CXL_NVB_DEAD;
+-	device_unlock(&cxl_nvb->dev);
+-
+-	/*
+-	 * Even though the device core will trigger device_release_driver()
+-	 * before the unregister, it does not know about the fact that
+-	 * cxl_nvdimm_bridge_driver defers ->remove() work. So, do the driver
+-	 * release not and flush it before tearing down the nvdimm device
+-	 * hierarchy.
+-	 */
+-	device_release_driver(&cxl_nvb->dev);
+-	if (flush)
+-		flush_work(&cxl_nvb->state_work);
+-	device_unregister(&cxl_nvb->dev);
+-}
+-
+-struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
+-						     struct cxl_port *port)
+-{
+-	struct cxl_nvdimm_bridge *cxl_nvb;
+-	struct device *dev;
+-	int rc;
+-
+-	if (!IS_ENABLED(CONFIG_CXL_PMEM))
+-		return ERR_PTR(-ENXIO);
+-
+-	cxl_nvb = cxl_nvdimm_bridge_alloc(port);
+-	if (IS_ERR(cxl_nvb))
+-		return cxl_nvb;
+-
+-	dev = &cxl_nvb->dev;
+-	rc = dev_set_name(dev, "nvdimm-bridge");
+-	if (rc)
+-		goto err;
+-
+-	rc = device_add(dev);
+-	if (rc)
+-		goto err;
+-
+-	rc = devm_add_action_or_reset(host, unregister_nvb, cxl_nvb);
+-	if (rc)
+-		return ERR_PTR(rc);
+-
+-	return cxl_nvb;
+-
+-err:
+-	put_device(dev);
+-	return ERR_PTR(rc);
+-}
+-EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm_bridge);
+-
+-static void cxl_nvdimm_release(struct device *dev)
+-{
+-	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
+-
+-	kfree(cxl_nvd);
+-}
+-
+-static const struct attribute_group *cxl_nvdimm_attribute_groups[] = {
+-	&cxl_base_attribute_group,
+-	NULL,
+-};
+-
+-static const struct device_type cxl_nvdimm_type = {
+-	.name = "cxl_nvdimm",
+-	.release = cxl_nvdimm_release,
+-	.groups = cxl_nvdimm_attribute_groups,
+-};
+-
+-bool is_cxl_nvdimm(struct device *dev)
+-{
+-	return dev->type == &cxl_nvdimm_type;
+-}
+-EXPORT_SYMBOL_GPL(is_cxl_nvdimm);
+-
+-struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev)
+-{
+-	if (dev_WARN_ONCE(dev, !is_cxl_nvdimm(dev),
+-			  "not a cxl_nvdimm device\n"))
+-		return NULL;
+-	return container_of(dev, struct cxl_nvdimm, dev);
+-}
+-EXPORT_SYMBOL_GPL(to_cxl_nvdimm);
+-
+-static struct cxl_nvdimm *cxl_nvdimm_alloc(struct cxl_memdev *cxlmd)
+-{
+-	struct cxl_nvdimm *cxl_nvd;
+-	struct device *dev;
+-
+-	cxl_nvd = kzalloc(sizeof(*cxl_nvd), GFP_KERNEL);
+-	if (!cxl_nvd)
+-		return ERR_PTR(-ENOMEM);
+-
+-	dev = &cxl_nvd->dev;
+-	cxl_nvd->cxlmd = cxlmd;
+-	device_initialize(dev);
+-	device_set_pm_not_required(dev);
+-	dev->parent = &cxlmd->dev;
+-	dev->bus = &cxl_bus_type;
+-	dev->type = &cxl_nvdimm_type;
+-
+-	return cxl_nvd;
+-}
+-
+-int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd)
+-{
+-	struct cxl_nvdimm *cxl_nvd;
+-	struct device *dev;
+-	int rc;
+-
+-	cxl_nvd = cxl_nvdimm_alloc(cxlmd);
+-	if (IS_ERR(cxl_nvd))
+-		return PTR_ERR(cxl_nvd);
+-
+-	dev = &cxl_nvd->dev;
+-	rc = dev_set_name(dev, "pmem%d", cxlmd->id);
+-	if (rc)
+-		goto err;
+-
+-	rc = device_add(dev);
+-	if (rc)
+-		goto err;
+-
+-	dev_dbg(host, "%s: register %s\n", dev_name(dev->parent),
+-		dev_name(dev));
+-
+-	return devm_add_action_or_reset(host, unregister_dev, dev);
+-
+-err:
+-	put_device(dev);
+-	return rc;
+-}
+-EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm);
+-
+-/**
+- * cxl_probe_device_regs() - Detect CXL Device register blocks
+- * @dev: Host device of the @base mapping
+- * @base: Mapping of CXL 2.0 8.2.8 CXL Device Register Interface
+- * @map: Map object describing the register block information found
+- *
+- * Probe for device register information and return it in map object.
+- */
+-void cxl_probe_device_regs(struct device *dev, void __iomem *base,
+-			   struct cxl_device_reg_map *map)
+-{
+-	int cap, cap_count;
+-	u64 cap_array;
+-
+-	*map = (struct cxl_device_reg_map){ 0 };
+-
+-	cap_array = readq(base + CXLDEV_CAP_ARRAY_OFFSET);
+-	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
+-	    CXLDEV_CAP_ARRAY_CAP_ID)
+-		return;
+-
+-	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
+-
+-	for (cap = 1; cap <= cap_count; cap++) {
+-		u32 offset, length;
+-		u16 cap_id;
+-
+-		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK,
+-				   readl(base + cap * 0x10));
+-		offset = readl(base + cap * 0x10 + 0x4);
+-		length = readl(base + cap * 0x10 + 0x8);
+-
+-		switch (cap_id) {
+-		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
+-			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
+-
+-			map->status.valid = true;
+-			map->status.offset = offset;
+-			map->status.size = length;
+-			break;
+-		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
+-			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
+-			map->mbox.valid = true;
+-			map->mbox.offset = offset;
+-			map->mbox.size = length;
+-			break;
+-		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
+-			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
+-			break;
+-		case CXLDEV_CAP_CAP_ID_MEMDEV:
+-			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
+-			map->memdev.valid = true;
+-			map->memdev.offset = offset;
+-			map->memdev.size = length;
+-			break;
+-		default:
+-			if (cap_id >= 0x8000)
+-				dev_dbg(dev, "Vendor cap ID: %#x offset: %#x\n", cap_id, offset);
+-			else
+-				dev_dbg(dev, "Unknown cap ID: %#x offset: %#x\n", cap_id, offset);
+-			break;
+-		}
+-	}
+-}
+-EXPORT_SYMBOL_GPL(cxl_probe_device_regs);
+-
+-static void __iomem *devm_cxl_iomap_block(struct device *dev,
+-					  resource_size_t addr,
+-					  resource_size_t length)
+-{
+-	void __iomem *ret_val;
+-	struct resource *res;
+-
+-	res = devm_request_mem_region(dev, addr, length, dev_name(dev));
+-	if (!res) {
+-		resource_size_t end = addr + length - 1;
+-
+-		dev_err(dev, "Failed to request region %pa-%pa\n", &addr, &end);
+-		return NULL;
+-	}
+-
+-	ret_val = devm_ioremap(dev, addr, length);
+-	if (!ret_val)
+-		dev_err(dev, "Failed to map region %pr\n", res);
+-
+-	return ret_val;
+-}
+-
+-int cxl_map_component_regs(struct pci_dev *pdev,
+-			   struct cxl_component_regs *regs,
+-			   struct cxl_register_map *map)
+-{
+-	struct device *dev = &pdev->dev;
+-	resource_size_t phys_addr;
+-	resource_size_t length;
+-
+-	phys_addr = pci_resource_start(pdev, map->barno);
+-	phys_addr += map->block_offset;
+-
+-	phys_addr += map->component_map.hdm_decoder.offset;
+-	length = map->component_map.hdm_decoder.size;
+-	regs->hdm_decoder = devm_cxl_iomap_block(dev, phys_addr, length);
+-	if (!regs->hdm_decoder)
+-		return -ENOMEM;
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(cxl_map_component_regs);
+-
+-int cxl_map_device_regs(struct pci_dev *pdev,
+-			struct cxl_device_regs *regs,
+-			struct cxl_register_map *map)
+-{
+-	struct device *dev = &pdev->dev;
+-	resource_size_t phys_addr;
+-
+-	phys_addr = pci_resource_start(pdev, map->barno);
+-	phys_addr += map->block_offset;
+-
+-	if (map->device_map.status.valid) {
+-		resource_size_t addr;
+-		resource_size_t length;
+-
+-		addr = phys_addr + map->device_map.status.offset;
+-		length = map->device_map.status.size;
+-		regs->status = devm_cxl_iomap_block(dev, addr, length);
+-		if (!regs->status)
+-			return -ENOMEM;
+-	}
+-
+-	if (map->device_map.mbox.valid) {
+-		resource_size_t addr;
+-		resource_size_t length;
+-
+-		addr = phys_addr + map->device_map.mbox.offset;
+-		length = map->device_map.mbox.size;
+-		regs->mbox = devm_cxl_iomap_block(dev, addr, length);
+-		if (!regs->mbox)
+-			return -ENOMEM;
+-	}
+-
+-	if (map->device_map.memdev.valid) {
+-		resource_size_t addr;
+-		resource_size_t length;
+-
+-		addr = phys_addr + map->device_map.memdev.offset;
+-		length = map->device_map.memdev.size;
+-		regs->memdev = devm_cxl_iomap_block(dev, addr, length);
+-		if (!regs->memdev)
+-			return -ENOMEM;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(cxl_map_device_regs);
+-
+-/**
+- * __cxl_driver_register - register a driver for the cxl bus
+- * @cxl_drv: cxl driver structure to attach
+- * @owner: owning module/driver
+- * @modname: KBUILD_MODNAME for parent driver
+- */
+-int __cxl_driver_register(struct cxl_driver *cxl_drv, struct module *owner,
+-			  const char *modname)
+-{
+-	if (!cxl_drv->probe) {
+-		pr_debug("%s ->probe() must be specified\n", modname);
+-		return -EINVAL;
+-	}
+-
+-	if (!cxl_drv->name) {
+-		pr_debug("%s ->name must be specified\n", modname);
+-		return -EINVAL;
+-	}
+-
+-	if (!cxl_drv->id) {
+-		pr_debug("%s ->id must be specified\n", modname);
+-		return -EINVAL;
+-	}
+-
+-	cxl_drv->drv.bus = &cxl_bus_type;
+-	cxl_drv->drv.owner = owner;
+-	cxl_drv->drv.mod_name = modname;
+-	cxl_drv->drv.name = cxl_drv->name;
+-
+-	return driver_register(&cxl_drv->drv);
+-}
+-EXPORT_SYMBOL_GPL(__cxl_driver_register);
+-
+-void cxl_driver_unregister(struct cxl_driver *cxl_drv)
+-{
+-	driver_unregister(&cxl_drv->drv);
+-}
+-EXPORT_SYMBOL_GPL(cxl_driver_unregister);
+-
+-static int cxl_device_id(struct device *dev)
+-{
+-	if (dev->type == &cxl_nvdimm_bridge_type)
+-		return CXL_DEVICE_NVDIMM_BRIDGE;
+-	if (dev->type == &cxl_nvdimm_type)
+-		return CXL_DEVICE_NVDIMM;
+-	return 0;
+-}
+-
+-static int cxl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
+-{
+-	return add_uevent_var(env, "MODALIAS=" CXL_MODALIAS_FMT,
+-			      cxl_device_id(dev));
+-}
+-
+-static int cxl_bus_match(struct device *dev, struct device_driver *drv)
+-{
+-	return cxl_device_id(dev) == to_cxl_drv(drv)->id;
+-}
+-
+-static int cxl_bus_probe(struct device *dev)
+-{
+-	return to_cxl_drv(dev->driver)->probe(dev);
+-}
+-
+-static int cxl_bus_remove(struct device *dev)
+-{
+-	struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver);
+-
+-	if (cxl_drv->remove)
+-		cxl_drv->remove(dev);
+-	return 0;
+-}
+-
+-struct bus_type cxl_bus_type = {
+-	.name = "cxl",
+-	.uevent = cxl_bus_uevent,
+-	.match = cxl_bus_match,
+-	.probe = cxl_bus_probe,
+-	.remove = cxl_bus_remove,
+-};
+-EXPORT_SYMBOL_GPL(cxl_bus_type);
+-
+-static __init int cxl_core_init(void)
+-{
+-	return bus_register(&cxl_bus_type);
+-}
+-
+-static void cxl_core_exit(void)
+-{
+-	bus_unregister(&cxl_bus_type);
+-}
+-
+-module_init(cxl_core_init);
+-module_exit(cxl_core_exit);
+-MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
+new file mode 100644
+index 0000000000000..ad137f96e5c84
+--- /dev/null
++++ b/drivers/cxl/core/Makefile
+@@ -0,0 +1,5 @@
++# SPDX-License-Identifier: GPL-2.0
++obj-$(CONFIG_CXL_BUS) += cxl_core.o
++
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL -I$(srctree)/drivers/cxl
++cxl_core-y := bus.o
+diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
+new file mode 100644
+index 0000000000000..0815eec239443
+--- /dev/null
++++ b/drivers/cxl/core/bus.c
+@@ -0,0 +1,1067 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
++#include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/device.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/slab.h>
++#include <linux/idr.h>
++#include <cxlmem.h>
++#include <cxl.h>
++
++/**
++ * DOC: cxl core
++ *
++ * The CXL core provides a sysfs hierarchy for control devices and a rendezvous
++ * point for cross-device interleave coordination through cxl ports.
++ */
++
++static DEFINE_IDA(cxl_port_ida);
++
++static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
++			    char *buf)
++{
++	return sysfs_emit(buf, "%s\n", dev->type->name);
++}
++static DEVICE_ATTR_RO(devtype);
++
++static struct attribute *cxl_base_attributes[] = {
++	&dev_attr_devtype.attr,
++	NULL,
++};
++
++static struct attribute_group cxl_base_attribute_group = {
++	.attrs = cxl_base_attributes,
++};
++
++static ssize_t start_show(struct device *dev, struct device_attribute *attr,
++			  char *buf)
++{
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++	return sysfs_emit(buf, "%#llx\n", cxld->range.start);
++}
++static DEVICE_ATTR_RO(start);
++
++static ssize_t size_show(struct device *dev, struct device_attribute *attr,
++			char *buf)
++{
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++	return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range));
++}
++static DEVICE_ATTR_RO(size);
++
++#define CXL_DECODER_FLAG_ATTR(name, flag)                            \
++static ssize_t name##_show(struct device *dev,                       \
++			   struct device_attribute *attr, char *buf) \
++{                                                                    \
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);              \
++                                                                     \
++	return sysfs_emit(buf, "%s\n",                               \
++			  (cxld->flags & (flag)) ? "1" : "0");       \
++}                                                                    \
++static DEVICE_ATTR_RO(name)
++
++CXL_DECODER_FLAG_ATTR(cap_pmem, CXL_DECODER_F_PMEM);
++CXL_DECODER_FLAG_ATTR(cap_ram, CXL_DECODER_F_RAM);
++CXL_DECODER_FLAG_ATTR(cap_type2, CXL_DECODER_F_TYPE2);
++CXL_DECODER_FLAG_ATTR(cap_type3, CXL_DECODER_F_TYPE3);
++CXL_DECODER_FLAG_ATTR(locked, CXL_DECODER_F_LOCK);
++
++static ssize_t target_type_show(struct device *dev,
++				struct device_attribute *attr, char *buf)
++{
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++	switch (cxld->target_type) {
++	case CXL_DECODER_ACCELERATOR:
++		return sysfs_emit(buf, "accelerator\n");
++	case CXL_DECODER_EXPANDER:
++		return sysfs_emit(buf, "expander\n");
++	}
++	return -ENXIO;
++}
++static DEVICE_ATTR_RO(target_type);
++
++static ssize_t target_list_show(struct device *dev,
++			       struct device_attribute *attr, char *buf)
++{
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);
++	ssize_t offset = 0;
++	int i, rc = 0;
++
++	device_lock(dev);
++	for (i = 0; i < cxld->interleave_ways; i++) {
++		struct cxl_dport *dport = cxld->target[i];
++		struct cxl_dport *next = NULL;
++
++		if (!dport)
++			break;
++
++		if (i + 1 < cxld->interleave_ways)
++			next = cxld->target[i + 1];
++		rc = sysfs_emit_at(buf, offset, "%d%s", dport->port_id,
++				   next ? "," : "");
++		if (rc < 0)
++			break;
++		offset += rc;
++	}
++	device_unlock(dev);
++
++	if (rc < 0)
++		return rc;
++
++	rc = sysfs_emit_at(buf, offset, "\n");
++	if (rc < 0)
++		return rc;
++
++	return offset + rc;
++}
++static DEVICE_ATTR_RO(target_list);
++
++static struct attribute *cxl_decoder_base_attrs[] = {
++	&dev_attr_start.attr,
++	&dev_attr_size.attr,
++	&dev_attr_locked.attr,
++	&dev_attr_target_list.attr,
++	NULL,
++};
++
++static struct attribute_group cxl_decoder_base_attribute_group = {
++	.attrs = cxl_decoder_base_attrs,
++};
++
++static struct attribute *cxl_decoder_root_attrs[] = {
++	&dev_attr_cap_pmem.attr,
++	&dev_attr_cap_ram.attr,
++	&dev_attr_cap_type2.attr,
++	&dev_attr_cap_type3.attr,
++	NULL,
++};
++
++static struct attribute_group cxl_decoder_root_attribute_group = {
++	.attrs = cxl_decoder_root_attrs,
++};
++
++static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
++	&cxl_decoder_root_attribute_group,
++	&cxl_decoder_base_attribute_group,
++	&cxl_base_attribute_group,
++	NULL,
++};
++
++static struct attribute *cxl_decoder_switch_attrs[] = {
++	&dev_attr_target_type.attr,
++	NULL,
++};
++
++static struct attribute_group cxl_decoder_switch_attribute_group = {
++	.attrs = cxl_decoder_switch_attrs,
++};
++
++static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = {
++	&cxl_decoder_switch_attribute_group,
++	&cxl_decoder_base_attribute_group,
++	&cxl_base_attribute_group,
++	NULL,
++};
++
++static void cxl_decoder_release(struct device *dev)
++{
++	struct cxl_decoder *cxld = to_cxl_decoder(dev);
++	struct cxl_port *port = to_cxl_port(dev->parent);
++
++	ida_free(&port->decoder_ida, cxld->id);
++	kfree(cxld);
++}
++
++static const struct device_type cxl_decoder_switch_type = {
++	.name = "cxl_decoder_switch",
++	.release = cxl_decoder_release,
++	.groups = cxl_decoder_switch_attribute_groups,
++};
++
++static const struct device_type cxl_decoder_root_type = {
++	.name = "cxl_decoder_root",
++	.release = cxl_decoder_release,
++	.groups = cxl_decoder_root_attribute_groups,
++};
++
++bool is_root_decoder(struct device *dev)
++{
++	return dev->type == &cxl_decoder_root_type;
++}
++EXPORT_SYMBOL_GPL(is_root_decoder);
++
++struct cxl_decoder *to_cxl_decoder(struct device *dev)
++{
++	if (dev_WARN_ONCE(dev, dev->type->release != cxl_decoder_release,
++			  "not a cxl_decoder device\n"))
++		return NULL;
++	return container_of(dev, struct cxl_decoder, dev);
++}
++EXPORT_SYMBOL_GPL(to_cxl_decoder);
++
++static void cxl_dport_release(struct cxl_dport *dport)
++{
++	list_del(&dport->list);
++	put_device(dport->dport);
++	kfree(dport);
++}
++
++static void cxl_port_release(struct device *dev)
++{
++	struct cxl_port *port = to_cxl_port(dev);
++	struct cxl_dport *dport, *_d;
++
++	device_lock(dev);
++	list_for_each_entry_safe(dport, _d, &port->dports, list)
++		cxl_dport_release(dport);
++	device_unlock(dev);
++	ida_free(&cxl_port_ida, port->id);
++	kfree(port);
++}
++
++static const struct attribute_group *cxl_port_attribute_groups[] = {
++	&cxl_base_attribute_group,
++	NULL,
++};
++
++static const struct device_type cxl_port_type = {
++	.name = "cxl_port",
++	.release = cxl_port_release,
++	.groups = cxl_port_attribute_groups,
++};
++
++struct cxl_port *to_cxl_port(struct device *dev)
++{
++	if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type,
++			  "not a cxl_port device\n"))
++		return NULL;
++	return container_of(dev, struct cxl_port, dev);
++}
++
++static void unregister_port(void *_port)
++{
++	struct cxl_port *port = _port;
++	struct cxl_dport *dport;
++
++	device_lock(&port->dev);
++	list_for_each_entry(dport, &port->dports, list) {
++		char link_name[CXL_TARGET_STRLEN];
++
++		if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d",
++			     dport->port_id) >= CXL_TARGET_STRLEN)
++			continue;
++		sysfs_remove_link(&port->dev.kobj, link_name);
++	}
++	device_unlock(&port->dev);
++	device_unregister(&port->dev);
++}
++
++static void cxl_unlink_uport(void *_port)
++{
++	struct cxl_port *port = _port;
++
++	sysfs_remove_link(&port->dev.kobj, "uport");
++}
++
++static int devm_cxl_link_uport(struct device *host, struct cxl_port *port)
++{
++	int rc;
++
++	rc = sysfs_create_link(&port->dev.kobj, &port->uport->kobj, "uport");
++	if (rc)
++		return rc;
++	return devm_add_action_or_reset(host, cxl_unlink_uport, port);
++}
++
++static struct cxl_port *cxl_port_alloc(struct device *uport,
++				       resource_size_t component_reg_phys,
++				       struct cxl_port *parent_port)
++{
++	struct cxl_port *port;
++	struct device *dev;
++	int rc;
++
++	port = kzalloc(sizeof(*port), GFP_KERNEL);
++	if (!port)
++		return ERR_PTR(-ENOMEM);
++
++	rc = ida_alloc(&cxl_port_ida, GFP_KERNEL);
++	if (rc < 0)
++		goto err;
++	port->id = rc;
++
++	/*
++	 * The top-level cxl_port "cxl_root" does not have a cxl_port as
++	 * its parent and it does not have any corresponding component
++	 * registers as its decode is described by a fixed platform
++	 * description.
++	 */
++	dev = &port->dev;
++	if (parent_port)
++		dev->parent = &parent_port->dev;
++	else
++		dev->parent = uport;
++
++	port->uport = uport;
++	port->component_reg_phys = component_reg_phys;
++	ida_init(&port->decoder_ida);
++	INIT_LIST_HEAD(&port->dports);
++
++	device_initialize(dev);
++	device_set_pm_not_required(dev);
++	dev->bus = &cxl_bus_type;
++	dev->type = &cxl_port_type;
++
++	return port;
++
++err:
++	kfree(port);
++	return ERR_PTR(rc);
++}
++
++/**
++ * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy
++ * @host: host device for devm operations
++ * @uport: "physical" device implementing this upstream port
++ * @component_reg_phys: (optional) for configurable cxl_port instances
++ * @parent_port: next hop up in the CXL memory decode hierarchy
++ */
++struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
++				   resource_size_t component_reg_phys,
++				   struct cxl_port *parent_port)
++{
++	struct cxl_port *port;
++	struct device *dev;
++	int rc;
++
++	port = cxl_port_alloc(uport, component_reg_phys, parent_port);
++	if (IS_ERR(port))
++		return port;
++
++	dev = &port->dev;
++	if (parent_port)
++		rc = dev_set_name(dev, "port%d", port->id);
++	else
++		rc = dev_set_name(dev, "root%d", port->id);
++	if (rc)
++		goto err;
++
++	rc = device_add(dev);
++	if (rc)
++		goto err;
++
++	rc = devm_add_action_or_reset(host, unregister_port, port);
++	if (rc)
++		return ERR_PTR(rc);
++
++	rc = devm_cxl_link_uport(host, port);
++	if (rc)
++		return ERR_PTR(rc);
++
++	return port;
++
++err:
++	put_device(dev);
++	return ERR_PTR(rc);
++}
++EXPORT_SYMBOL_GPL(devm_cxl_add_port);
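++
++/*
++ * Illustrative sketch (not part of this patch): how a hypothetical host
++ * driver might build a two-level port hierarchy with devm_cxl_add_port().
++ * The "host" and "uport" devices and the register offset are assumed
++ * stand-ins, and CXL_RESOURCE_NONE is assumed available from cxl.h.
++ */
++#if 0 /* usage sketch only */
++static int example_enumerate_ports(struct device *host, struct device *uport,
++				   resource_size_t component_reg_phys)
++{
++	struct cxl_port *root, *port;
++
++	/* the root has no parent port and no component registers */
++	root = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL);
++	if (IS_ERR(root))
++		return PTR_ERR(root);
++
++	/* a downstream switch / host-bridge port hangs off the root */
++	port = devm_cxl_add_port(host, uport, component_reg_phys, root);
++	return PTR_ERR_OR_ZERO(port);
++}
++#endif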
++
++static struct cxl_dport *find_dport(struct cxl_port *port, int id)
++{
++	struct cxl_dport *dport;
++
++	device_lock_assert(&port->dev);
++	list_for_each_entry (dport, &port->dports, list)
++		if (dport->port_id == id)
++			return dport;
++	return NULL;
++}
++
++static int add_dport(struct cxl_port *port, struct cxl_dport *new)
++{
++	struct cxl_dport *dup;
++
++	device_lock(&port->dev);
++	dup = find_dport(port, new->port_id);
++	if (dup)
++		dev_err(&port->dev,
++			"unable to add dport%d-%s non-unique port id (%s)\n",
++			new->port_id, dev_name(new->dport),
++			dev_name(dup->dport));
++	else
++		list_add_tail(&new->list, &port->dports);
++	device_unlock(&port->dev);
++
++	return dup ? -EEXIST : 0;
++}
++
++/**
++ * cxl_add_dport - append downstream port data to a cxl_port
++ * @port: the cxl_port that references this dport
++ * @dport_dev: firmware or PCI device representing the dport
++ * @port_id: identifier for this dport in a decoder's target list
++ * @component_reg_phys: optional location of CXL component registers
++ *
++ * Note that all allocations and links are undone by cxl_port deletion
++ * and release.
++ */
++int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
++		  resource_size_t component_reg_phys)
++{
++	char link_name[CXL_TARGET_STRLEN];
++	struct cxl_dport *dport;
++	int rc;
++
++	if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d", port_id) >=
++	    CXL_TARGET_STRLEN)
++		return -EINVAL;
++
++	dport = kzalloc(sizeof(*dport), GFP_KERNEL);
++	if (!dport)
++		return -ENOMEM;
++
++	INIT_LIST_HEAD(&dport->list);
++	dport->dport = get_device(dport_dev);
++	dport->port_id = port_id;
++	dport->component_reg_phys = component_reg_phys;
++	dport->port = port;
++
++	rc = add_dport(port, dport);
++	if (rc)
++		goto err;
++
++	rc = sysfs_create_link(&port->dev.kobj, &dport_dev->kobj, link_name);
++	if (rc)
++		goto err;
++
++	return 0;
++err:
++	cxl_dport_release(dport);
++	return rc;
++}
++EXPORT_SYMBOL_GPL(cxl_add_dport);
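++
++/*
++ * Illustrative sketch (not part of this patch): registering one dport per
++ * downstream PCI device. The choice of port_id is hypothetical; it only
++ * needs to match the ids later referenced by decoder target lists.
++ */
++#if 0 /* usage sketch only */
++static int example_add_dport(struct cxl_port *port, struct pci_dev *pdev,
++			     int port_id)
++{
++	return cxl_add_dport(port, &pdev->dev, port_id, CXL_RESOURCE_NONE);
++}
++#endif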
++
++static struct cxl_decoder *
++cxl_decoder_alloc(struct cxl_port *port, int nr_targets, resource_size_t base,
++		  resource_size_t len, int interleave_ways,
++		  int interleave_granularity, enum cxl_decoder_type type,
++		  unsigned long flags)
++{
++	struct cxl_decoder *cxld;
++	struct device *dev;
++	int rc = 0;
++
++	if (interleave_ways < 1)
++		return ERR_PTR(-EINVAL);
++
++	device_lock(&port->dev);
++	if (list_empty(&port->dports))
++		rc = -EINVAL;
++	device_unlock(&port->dev);
++	if (rc)
++		return ERR_PTR(rc);
++
++	cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
++	if (!cxld)
++		return ERR_PTR(-ENOMEM);
++
++	rc = ida_alloc(&port->decoder_ida, GFP_KERNEL);
++	if (rc < 0)
++		goto err;
++
++	*cxld = (struct cxl_decoder) {
++		.id = rc,
++		.range = {
++			.start = base,
++			.end = base + len - 1,
++		},
++		.flags = flags,
++		.interleave_ways = interleave_ways,
++		.interleave_granularity = interleave_granularity,
++		.target_type = type,
++	};
++
++	/* handle implied target_list */
++	if (interleave_ways == 1)
++		cxld->target[0] =
++			list_first_entry(&port->dports, struct cxl_dport, list);
++	dev = &cxld->dev;
++	device_initialize(dev);
++	device_set_pm_not_required(dev);
++	dev->parent = &port->dev;
++	dev->bus = &cxl_bus_type;
++
++	/* root ports do not have a cxl_port_type parent */
++	if (port->dev.parent->type == &cxl_port_type)
++		dev->type = &cxl_decoder_switch_type;
++	else
++		dev->type = &cxl_decoder_root_type;
++
++	return cxld;
++err:
++	kfree(cxld);
++	return ERR_PTR(rc);
++}
++
++static void unregister_dev(void *dev)
++{
++	device_unregister(dev);
++}
++
++struct cxl_decoder *
++devm_cxl_add_decoder(struct device *host, struct cxl_port *port, int nr_targets,
++		     resource_size_t base, resource_size_t len,
++		     int interleave_ways, int interleave_granularity,
++		     enum cxl_decoder_type type, unsigned long flags)
++{
++	struct cxl_decoder *cxld;
++	struct device *dev;
++	int rc;
++
++	cxld = cxl_decoder_alloc(port, nr_targets, base, len, interleave_ways,
++				 interleave_granularity, type, flags);
++	if (IS_ERR(cxld))
++		return cxld;
++
++	dev = &cxld->dev;
++	rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id);
++	if (rc)
++		goto err;
++
++	rc = device_add(dev);
++	if (rc)
++		goto err;
++
++	rc = devm_add_action_or_reset(host, unregister_dev, dev);
++	if (rc)
++		return ERR_PTR(rc);
++	return cxld;
++
++err:
++	put_device(dev);
++	return ERR_PTR(rc);
++}
++EXPORT_SYMBOL_GPL(devm_cxl_add_decoder);
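++
++/*
++ * Illustrative sketch (not part of this patch): publishing a single-way,
++ * pmem-capable expander decoder. The numeric interleave values are
++ * placeholders; with one way the single target is implied from the
++ * port's first dport, per cxl_decoder_alloc() above.
++ */
++#if 0 /* usage sketch only */
++static int example_add_decoder(struct device *host, struct cxl_port *port,
++			       resource_size_t base, resource_size_t size)
++{
++	struct cxl_decoder *cxld;
++
++	cxld = devm_cxl_add_decoder(host, port, 1, base, size,
++				    1 /* ways */, 256 /* granularity */,
++				    CXL_DECODER_EXPANDER, CXL_DECODER_F_PMEM);
++	return PTR_ERR_OR_ZERO(cxld);
++}
++#endif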
++
++/**
++ * cxl_probe_component_regs() - Detect CXL Component register blocks
++ * @dev: Host device of the @base mapping
++ * @base: Mapping containing the HDM Decoder Capability Header
++ * @map: Map object describing the register block information found
++ *
++ * See CXL 2.0 8.2.4 Component Register Layout and Definition
++ * See CXL 2.0 8.2.5.5 CXL Device Register Interface
++ *
++ * Probe for component register information and return it in the map object.
++ */
++void cxl_probe_component_regs(struct device *dev, void __iomem *base,
++			      struct cxl_component_reg_map *map)
++{
++	int cap, cap_count;
++	u64 cap_array;
++
++	*map = (struct cxl_component_reg_map) { 0 };
++
++	/*
++	 * CXL.cache and CXL.mem registers are at offset 0x1000 as defined in
++	 * CXL 2.0 8.2.4 Table 141.
++	 */
++	base += CXL_CM_OFFSET;
++
++	cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
++
++	if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
++		dev_err(dev,
++			"Couldn't locate the CXL.cache and CXL.mem capability array header.\n");
++		return;
++	}
++
++	/* It's assumed that future versions will be backward compatible */
++	cap_count = FIELD_GET(CXL_CM_CAP_HDR_ARRAY_SIZE_MASK, cap_array);
++
++	for (cap = 1; cap <= cap_count; cap++) {
++		void __iomem *register_block;
++		u32 hdr;
++		int decoder_cnt;
++		u16 cap_id, offset;
++		u32 length;
++
++		hdr = readl(base + cap * 0x4);
++
++		cap_id = FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, hdr);
++		offset = FIELD_GET(CXL_CM_CAP_PTR_MASK, hdr);
++		register_block = base + offset;
++
++		switch (cap_id) {
++		case CXL_CM_CAP_CAP_ID_HDM:
++			dev_dbg(dev, "found HDM decoder capability (0x%x)\n",
++				offset);
++
++			hdr = readl(register_block);
++
++			decoder_cnt = cxl_hdm_decoder_count(hdr);
++			length = 0x20 * decoder_cnt + 0x10;
++
++			map->hdm_decoder.valid = true;
++			map->hdm_decoder.offset = CXL_CM_OFFSET + offset;
++			map->hdm_decoder.size = length;
++			break;
++		default:
++			dev_dbg(dev, "Unknown CM cap ID: %d (0x%x)\n", cap_id,
++				offset);
++			break;
++		}
++	}
++}
++EXPORT_SYMBOL_GPL(cxl_probe_component_regs);
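++
++/*
++ * Illustrative sketch (not part of this patch): consuming the probe
++ * results. @base is assumed to be an ioremap() of the component register
++ * block; only the HDM decoder sub-block is reported by this routine.
++ */
++#if 0 /* usage sketch only */
++static int example_probe_component(struct pci_dev *pdev, void __iomem *base)
++{
++	struct cxl_component_reg_map map;
++
++	cxl_probe_component_regs(&pdev->dev, base, &map);
++	return map.hdm_decoder.valid ? 0 : -ENXIO;
++}
++#endif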
++
++static void cxl_nvdimm_bridge_release(struct device *dev)
++{
++	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);
++
++	kfree(cxl_nvb);
++}
++
++static const struct attribute_group *cxl_nvdimm_bridge_attribute_groups[] = {
++	&cxl_base_attribute_group,
++	NULL,
++};
++
++static const struct device_type cxl_nvdimm_bridge_type = {
++	.name = "cxl_nvdimm_bridge",
++	.release = cxl_nvdimm_bridge_release,
++	.groups = cxl_nvdimm_bridge_attribute_groups,
++};
++
++struct cxl_nvdimm_bridge *to_cxl_nvdimm_bridge(struct device *dev)
++{
++	if (dev_WARN_ONCE(dev, dev->type != &cxl_nvdimm_bridge_type,
++			  "not a cxl_nvdimm_bridge device\n"))
++		return NULL;
++	return container_of(dev, struct cxl_nvdimm_bridge, dev);
++}
++EXPORT_SYMBOL_GPL(to_cxl_nvdimm_bridge);
++
++static struct cxl_nvdimm_bridge *
++cxl_nvdimm_bridge_alloc(struct cxl_port *port)
++{
++	struct cxl_nvdimm_bridge *cxl_nvb;
++	struct device *dev;
++
++	cxl_nvb = kzalloc(sizeof(*cxl_nvb), GFP_KERNEL);
++	if (!cxl_nvb)
++		return ERR_PTR(-ENOMEM);
++
++	dev = &cxl_nvb->dev;
++	cxl_nvb->port = port;
++	cxl_nvb->state = CXL_NVB_NEW;
++	device_initialize(dev);
++	device_set_pm_not_required(dev);
++	dev->parent = &port->dev;
++	dev->bus = &cxl_bus_type;
++	dev->type = &cxl_nvdimm_bridge_type;
++
++	return cxl_nvb;
++}
++
++static void unregister_nvb(void *_cxl_nvb)
++{
++	struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb;
++	bool flush;
++
++	/*
++	 * If the bridge was ever activated then there might be in-flight state
++	 * work to flush. Once the state has been changed to 'dead' then no new
++	 * work can be queued by user-triggered bind.
++	 */
++	device_lock(&cxl_nvb->dev);
++	flush = cxl_nvb->state != CXL_NVB_NEW;
++	cxl_nvb->state = CXL_NVB_DEAD;
++	device_unlock(&cxl_nvb->dev);
++
++	/*
++	 * Even though the device core will trigger device_release_driver()
++	 * before the unregister, it does not know about the fact that
++	 * cxl_nvdimm_bridge_driver defers ->remove() work. So, do the driver
++	 * release now and flush it before tearing down the nvdimm device
++	 * hierarchy.
++	 */
++	device_release_driver(&cxl_nvb->dev);
++	if (flush)
++		flush_work(&cxl_nvb->state_work);
++	device_unregister(&cxl_nvb->dev);
++}
++
++struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,
++						     struct cxl_port *port)
++{
++	struct cxl_nvdimm_bridge *cxl_nvb;
++	struct device *dev;
++	int rc;
++
++	if (!IS_ENABLED(CONFIG_CXL_PMEM))
++		return ERR_PTR(-ENXIO);
++
++	cxl_nvb = cxl_nvdimm_bridge_alloc(port);
++	if (IS_ERR(cxl_nvb))
++		return cxl_nvb;
++
++	dev = &cxl_nvb->dev;
++	rc = dev_set_name(dev, "nvdimm-bridge");
++	if (rc)
++		goto err;
++
++	rc = device_add(dev);
++	if (rc)
++		goto err;
++
++	rc = devm_add_action_or_reset(host, unregister_nvb, cxl_nvb);
++	if (rc)
++		return ERR_PTR(rc);
++
++	return cxl_nvb;
++
++err:
++	put_device(dev);
++	return ERR_PTR(rc);
++}
++EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm_bridge);
++
++static void cxl_nvdimm_release(struct device *dev)
++{
++	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
++
++	kfree(cxl_nvd);
++}
++
++static const struct attribute_group *cxl_nvdimm_attribute_groups[] = {
++	&cxl_base_attribute_group,
++	NULL,
++};
++
++static const struct device_type cxl_nvdimm_type = {
++	.name = "cxl_nvdimm",
++	.release = cxl_nvdimm_release,
++	.groups = cxl_nvdimm_attribute_groups,
++};
++
++bool is_cxl_nvdimm(struct device *dev)
++{
++	return dev->type == &cxl_nvdimm_type;
++}
++EXPORT_SYMBOL_GPL(is_cxl_nvdimm);
++
++struct cxl_nvdimm *to_cxl_nvdimm(struct device *dev)
++{
++	if (dev_WARN_ONCE(dev, !is_cxl_nvdimm(dev),
++			  "not a cxl_nvdimm device\n"))
++		return NULL;
++	return container_of(dev, struct cxl_nvdimm, dev);
++}
++EXPORT_SYMBOL_GPL(to_cxl_nvdimm);
++
++static struct cxl_nvdimm *cxl_nvdimm_alloc(struct cxl_memdev *cxlmd)
++{
++	struct cxl_nvdimm *cxl_nvd;
++	struct device *dev;
++
++	cxl_nvd = kzalloc(sizeof(*cxl_nvd), GFP_KERNEL);
++	if (!cxl_nvd)
++		return ERR_PTR(-ENOMEM);
++
++	dev = &cxl_nvd->dev;
++	cxl_nvd->cxlmd = cxlmd;
++	device_initialize(dev);
++	device_set_pm_not_required(dev);
++	dev->parent = &cxlmd->dev;
++	dev->bus = &cxl_bus_type;
++	dev->type = &cxl_nvdimm_type;
++
++	return cxl_nvd;
++}
++
++int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd)
++{
++	struct cxl_nvdimm *cxl_nvd;
++	struct device *dev;
++	int rc;
++
++	cxl_nvd = cxl_nvdimm_alloc(cxlmd);
++	if (IS_ERR(cxl_nvd))
++		return PTR_ERR(cxl_nvd);
++
++	dev = &cxl_nvd->dev;
++	rc = dev_set_name(dev, "pmem%d", cxlmd->id);
++	if (rc)
++		goto err;
++
++	rc = device_add(dev);
++	if (rc)
++		goto err;
++
++	dev_dbg(host, "%s: register %s\n", dev_name(dev->parent),
++		dev_name(dev));
++
++	return devm_add_action_or_reset(host, unregister_dev, dev);
++
++err:
++	put_device(dev);
++	return rc;
++}
++EXPORT_SYMBOL_GPL(devm_cxl_add_nvdimm);
++
++/**
++ * cxl_probe_device_regs() - Detect CXL Device register blocks
++ * @dev: Host device of the @base mapping
++ * @base: Mapping of CXL 2.0 8.2.8 CXL Device Register Interface
++ * @map: Map object describing the register block information found
++ *
++ * Probe for device register information and return it in the map object.
++ */
++void cxl_probe_device_regs(struct device *dev, void __iomem *base,
++			   struct cxl_device_reg_map *map)
++{
++	int cap, cap_count;
++	u64 cap_array;
++
++	*map = (struct cxl_device_reg_map){ 0 };
++
++	cap_array = readq(base + CXLDEV_CAP_ARRAY_OFFSET);
++	if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
++	    CXLDEV_CAP_ARRAY_CAP_ID)
++		return;
++
++	cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
++
++	for (cap = 1; cap <= cap_count; cap++) {
++		u32 offset, length;
++		u16 cap_id;
++
++		cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK,
++				   readl(base + cap * 0x10));
++		offset = readl(base + cap * 0x10 + 0x4);
++		length = readl(base + cap * 0x10 + 0x8);
++
++		switch (cap_id) {
++		case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
++			dev_dbg(dev, "found Status capability (0x%x)\n", offset);
++
++			map->status.valid = true;
++			map->status.offset = offset;
++			map->status.size = length;
++			break;
++		case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
++			dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
++			map->mbox.valid = true;
++			map->mbox.offset = offset;
++			map->mbox.size = length;
++			break;
++		case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
++			dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
++			break;
++		case CXLDEV_CAP_CAP_ID_MEMDEV:
++			dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
++			map->memdev.valid = true;
++			map->memdev.offset = offset;
++			map->memdev.size = length;
++			break;
++		default:
++			if (cap_id >= 0x8000)
++				dev_dbg(dev, "Vendor cap ID: %#x offset: %#x\n", cap_id, offset);
++			else
++				dev_dbg(dev, "Unknown cap ID: %#x offset: %#x\n", cap_id, offset);
++			break;
++		}
++	}
++}
++EXPORT_SYMBOL_GPL(cxl_probe_device_regs);
++
++static void __iomem *devm_cxl_iomap_block(struct device *dev,
++					  resource_size_t addr,
++					  resource_size_t length)
++{
++	void __iomem *ret_val;
++	struct resource *res;
++
++	res = devm_request_mem_region(dev, addr, length, dev_name(dev));
++	if (!res) {
++		resource_size_t end = addr + length - 1;
++
++		dev_err(dev, "Failed to request region %pa-%pa\n", &addr, &end);
++		return NULL;
++	}
++
++	ret_val = devm_ioremap(dev, addr, length);
++	if (!ret_val)
++		dev_err(dev, "Failed to map region %pr\n", res);
++
++	return ret_val;
++}
++
++int cxl_map_component_regs(struct pci_dev *pdev,
++			   struct cxl_component_regs *regs,
++			   struct cxl_register_map *map)
++{
++	struct device *dev = &pdev->dev;
++	resource_size_t phys_addr;
++	resource_size_t length;
++
++	phys_addr = pci_resource_start(pdev, map->barno);
++	phys_addr += map->block_offset;
++
++	phys_addr += map->component_map.hdm_decoder.offset;
++	length = map->component_map.hdm_decoder.size;
++	regs->hdm_decoder = devm_cxl_iomap_block(dev, phys_addr, length);
++	if (!regs->hdm_decoder)
++		return -ENOMEM;
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(cxl_map_component_regs);
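++
++/*
++ * Illustrative sketch (not part of this patch): pairing the probe and map
++ * steps. The cxl_register_map field names (barno, block_offset,
++ * component_map) and the regs.component member are assumed from the
++ * cxl.h definitions in this series.
++ */
++#if 0 /* usage sketch only */
++static int example_map_component(struct pci_dev *pdev, struct cxl_mem *cxlm,
++				 void __iomem *base, u8 bar, u64 offset)
++{
++	struct cxl_register_map map = {
++		.barno = bar,
++		.block_offset = offset,
++	};
++
++	cxl_probe_component_regs(&pdev->dev, base, &map.component_map);
++	return cxl_map_component_regs(pdev, &cxlm->regs.component, &map);
++}
++#endif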
++
++int cxl_map_device_regs(struct pci_dev *pdev,
++			struct cxl_device_regs *regs,
++			struct cxl_register_map *map)
++{
++	struct device *dev = &pdev->dev;
++	resource_size_t phys_addr;
++
++	phys_addr = pci_resource_start(pdev, map->barno);
++	phys_addr += map->block_offset;
++
++	if (map->device_map.status.valid) {
++		resource_size_t addr;
++		resource_size_t length;
++
++		addr = phys_addr + map->device_map.status.offset;
++		length = map->device_map.status.size;
++		regs->status = devm_cxl_iomap_block(dev, addr, length);
++		if (!regs->status)
++			return -ENOMEM;
++	}
++
++	if (map->device_map.mbox.valid) {
++		resource_size_t addr;
++		resource_size_t length;
++
++		addr = phys_addr + map->device_map.mbox.offset;
++		length = map->device_map.mbox.size;
++		regs->mbox = devm_cxl_iomap_block(dev, addr, length);
++		if (!regs->mbox)
++			return -ENOMEM;
++	}
++
++	if (map->device_map.memdev.valid) {
++		resource_size_t addr;
++		resource_size_t length;
++
++		addr = phys_addr + map->device_map.memdev.offset;
++		length = map->device_map.memdev.size;
++		regs->memdev = devm_cxl_iomap_block(dev, addr, length);
++		if (!regs->memdev)
++			return -ENOMEM;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(cxl_map_device_regs);
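++
++/*
++ * Illustrative sketch (not part of this patch): the equivalent flow for
++ * device registers. Treating the mailbox as mandatory is an assumption
++ * of this example, not of the API; the regs.device_regs member name is
++ * likewise assumed from cxl.h.
++ */
++#if 0 /* usage sketch only */
++static int example_map_device(struct pci_dev *pdev, struct cxl_mem *cxlm,
++			      void __iomem *base, struct cxl_register_map *map)
++{
++	cxl_probe_device_regs(&pdev->dev, base, &map->device_map);
++	if (!map->device_map.mbox.valid)
++		return -ENXIO;
++	return cxl_map_device_regs(pdev, &cxlm->regs.device_regs, map);
++}
++#endif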
++
++/**
++ * __cxl_driver_register - register a driver for the cxl bus
++ * @cxl_drv: cxl driver structure to attach
++ * @owner: owning module/driver
++ * @modname: KBUILD_MODNAME for parent driver
++ */
++int __cxl_driver_register(struct cxl_driver *cxl_drv, struct module *owner,
++			  const char *modname)
++{
++	if (!cxl_drv->probe) {
++		pr_debug("%s ->probe() must be specified\n", modname);
++		return -EINVAL;
++	}
++
++	if (!cxl_drv->name) {
++		pr_debug("%s ->name must be specified\n", modname);
++		return -EINVAL;
++	}
++
++	if (!cxl_drv->id) {
++		pr_debug("%s ->id must be specified\n", modname);
++		return -EINVAL;
++	}
++
++	cxl_drv->drv.bus = &cxl_bus_type;
++	cxl_drv->drv.owner = owner;
++	cxl_drv->drv.mod_name = modname;
++	cxl_drv->drv.name = cxl_drv->name;
++
++	return driver_register(&cxl_drv->drv);
++}
++EXPORT_SYMBOL_GPL(__cxl_driver_register);
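++
++/*
++ * Illustrative sketch (not part of this patch): a minimal cxl bus driver.
++ * cxl_driver_register() is assumed to be the cxl.h convenience wrapper
++ * that passes THIS_MODULE and KBUILD_MODNAME to __cxl_driver_register().
++ */
++#if 0 /* usage sketch only */
++static int example_probe(struct device *dev)
++{
++	return 0;
++}
++
++static struct cxl_driver example_driver = {
++	.name = "example",
++	.probe = example_probe,
++	.id = CXL_DEVICE_NVDIMM,
++};
++
++static int __init example_init(void)
++{
++	return cxl_driver_register(&example_driver);
++}
++
++static void __exit example_exit(void)
++{
++	cxl_driver_unregister(&example_driver);
++}
++#endif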
++
++void cxl_driver_unregister(struct cxl_driver *cxl_drv)
++{
++	driver_unregister(&cxl_drv->drv);
++}
++EXPORT_SYMBOL_GPL(cxl_driver_unregister);
++
++static int cxl_device_id(struct device *dev)
++{
++	if (dev->type == &cxl_nvdimm_bridge_type)
++		return CXL_DEVICE_NVDIMM_BRIDGE;
++	if (dev->type == &cxl_nvdimm_type)
++		return CXL_DEVICE_NVDIMM;
++	return 0;
++}
++
++static int cxl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
++{
++	return add_uevent_var(env, "MODALIAS=" CXL_MODALIAS_FMT,
++			      cxl_device_id(dev));
++}
++
++static int cxl_bus_match(struct device *dev, struct device_driver *drv)
++{
++	return cxl_device_id(dev) == to_cxl_drv(drv)->id;
++}
++
++static int cxl_bus_probe(struct device *dev)
++{
++	return to_cxl_drv(dev->driver)->probe(dev);
++}
++
++static int cxl_bus_remove(struct device *dev)
++{
++	struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver);
++
++	if (cxl_drv->remove)
++		cxl_drv->remove(dev);
++	return 0;
++}
++
++struct bus_type cxl_bus_type = {
++	.name = "cxl",
++	.uevent = cxl_bus_uevent,
++	.match = cxl_bus_match,
++	.probe = cxl_bus_probe,
++	.remove = cxl_bus_remove,
++};
++EXPORT_SYMBOL_GPL(cxl_bus_type);
++
++static __init int cxl_core_init(void)
++{
++	return bus_register(&cxl_bus_type);
++}
++
++static void cxl_core_exit(void)
++{
++	bus_unregister(&cxl_bus_type);
++}
++
++module_init(cxl_core_init);
++module_exit(cxl_core_exit);
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
+new file mode 100644
+index 0000000000000..0cd463de13423
+--- /dev/null
++++ b/drivers/cxl/cxlmem.h
+@@ -0,0 +1,96 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/* Copyright(c) 2020-2021 Intel Corporation. */
++#ifndef __CXL_MEM_H__
++#define __CXL_MEM_H__
++#include <linux/cdev.h>
++#include "cxl.h"
++
++/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
++#define CXLMDEV_STATUS_OFFSET 0x0
++#define   CXLMDEV_DEV_FATAL BIT(0)
++#define   CXLMDEV_FW_HALT BIT(1)
++#define   CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
++#define     CXLMDEV_MS_NOT_READY 0
++#define     CXLMDEV_MS_READY 1
++#define     CXLMDEV_MS_ERROR 2
++#define     CXLMDEV_MS_DISABLED 3
++#define CXLMDEV_READY(status)                                                  \
++	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) ==                \
++	 CXLMDEV_MS_READY)
++#define   CXLMDEV_MBOX_IF_READY BIT(4)
++#define   CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
++#define     CXLMDEV_RESET_NEEDED_NOT 0
++#define     CXLMDEV_RESET_NEEDED_COLD 1
++#define     CXLMDEV_RESET_NEEDED_WARM 2
++#define     CXLMDEV_RESET_NEEDED_HOT 3
++#define     CXLMDEV_RESET_NEEDED_CXL 4
++#define CXLMDEV_RESET_NEEDED(status)                                           \
++	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
++	 CXLMDEV_RESET_NEEDED_NOT)
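++
++/*
++ * Illustrative sketch (not part of this patch): decoding the status
++ * register with the helpers above. cxlm->regs.memdev is assumed to be
++ * the mapped Memory Device register block.
++ */
++#if 0 /* usage sketch only */
++static int example_check_ready(struct cxl_mem *cxlm)
++{
++	u64 md_status = readq(cxlm->regs.memdev + CXLMDEV_STATUS_OFFSET);
++
++	if (!CXLMDEV_READY(md_status))
++		return -EBUSY;
++	if (CXLMDEV_RESET_NEEDED(md_status))
++		dev_warn(&cxlm->pdev->dev, "device requires reset\n");
++	return 0;
++}
++#endif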
++
++/*
++ * An entire PCI topology full of devices should be enough for any
++ * config
++ */
++#define CXL_MEM_MAX_DEVS 65536
++
++/**
++ * struct cdevm_file_operations - devm coordinated cdev file operations
++ * @fops: file operations that are synchronized against @shutdown
++ * @shutdown: disconnect driver data
++ *
++ * @shutdown is invoked in the devres release path to disconnect any
++ * driver instance data from @dev. It assumes synchronization with any
++ * fops operation that requires driver data. After @shutdown an
++ * operation may only reference @device data.
++ */
++struct cdevm_file_operations {
++	struct file_operations fops;
++	void (*shutdown)(struct device *dev);
++};
++
++/**
++ * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
++ * @dev: driver core device object
++ * @cdev: char dev core object for ioctl operations
++ * @cxlm: pointer to the parent device driver data
++ * @id: id number of this memdev instance.
++ */
++struct cxl_memdev {
++	struct device dev;
++	struct cdev cdev;
++	struct cxl_mem *cxlm;
++	int id;
++};
++
++/**
++ * struct cxl_mem - A CXL memory device
++ * @pdev: The PCI device associated with this CXL device.
++ * @cxlmd: Logical memory device chardev / interface
++ * @regs: Parsed register blocks
++ * @payload_size: Size of space for payload
++ *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
++ * @lsa_size: Size of Label Storage Area
++ *                (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
++ * @mbox_mutex: Mutex to synchronize mailbox access.
++ * @firmware_version: Firmware version for the memory device.
++ * @enabled_cmds: Hardware commands found enabled in CEL.
++ * @pmem_range: Persistent memory capacity information.
++ * @ram_range: Volatile memory capacity information.
++ */
++struct cxl_mem {
++	struct pci_dev *pdev;
++	struct cxl_memdev *cxlmd;
++
++	struct cxl_regs regs;
++
++	size_t payload_size;
++	size_t lsa_size;
++	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
++	char firmware_version[0x10];
++	unsigned long *enabled_cmds;
++
++	struct range pmem_range;
++	struct range ram_range;
++};
++#endif /* __CXL_MEM_H__ */
+diff --git a/drivers/cxl/mem.h b/drivers/cxl/mem.h
+deleted file mode 100644
+index 8f02d02b26b45..0000000000000
+--- a/drivers/cxl/mem.h
++++ /dev/null
+@@ -1,81 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-/* Copyright(c) 2020-2021 Intel Corporation. */
+-#ifndef __CXL_MEM_H__
+-#define __CXL_MEM_H__
+-#include <linux/cdev.h>
+-#include "cxl.h"
+-
+-/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
+-#define CXLMDEV_STATUS_OFFSET 0x0
+-#define   CXLMDEV_DEV_FATAL BIT(0)
+-#define   CXLMDEV_FW_HALT BIT(1)
+-#define   CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
+-#define     CXLMDEV_MS_NOT_READY 0
+-#define     CXLMDEV_MS_READY 1
+-#define     CXLMDEV_MS_ERROR 2
+-#define     CXLMDEV_MS_DISABLED 3
+-#define CXLMDEV_READY(status)                                                  \
+-	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) ==                \
+-	 CXLMDEV_MS_READY)
+-#define   CXLMDEV_MBOX_IF_READY BIT(4)
+-#define   CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
+-#define     CXLMDEV_RESET_NEEDED_NOT 0
+-#define     CXLMDEV_RESET_NEEDED_COLD 1
+-#define     CXLMDEV_RESET_NEEDED_WARM 2
+-#define     CXLMDEV_RESET_NEEDED_HOT 3
+-#define     CXLMDEV_RESET_NEEDED_CXL 4
+-#define CXLMDEV_RESET_NEEDED(status)                                           \
+-	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=                       \
+-	 CXLMDEV_RESET_NEEDED_NOT)
+-
+-/*
+- * An entire PCI topology full of devices should be enough for any
+- * config
+- */
+-#define CXL_MEM_MAX_DEVS 65536
+-
+-/**
+- * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
+- * @dev: driver core device object
+- * @cdev: char dev core object for ioctl operations
+- * @cxlm: pointer to the parent device driver data
+- * @id: id number of this memdev instance.
+- */
+-struct cxl_memdev {
+-	struct device dev;
+-	struct cdev cdev;
+-	struct cxl_mem *cxlm;
+-	int id;
+-};
+-
+-/**
+- * struct cxl_mem - A CXL memory device
+- * @pdev: The PCI device associated with this CXL device.
+- * @cxlmd: Logical memory device chardev / interface
+- * @regs: Parsed register blocks
+- * @payload_size: Size of space for payload
+- *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
+- * @lsa_size: Size of Label Storage Area
+- *                (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
+- * @mbox_mutex: Mutex to synchronize mailbox access.
+- * @firmware_version: Firmware version for the memory device.
+- * @enabled_cmds: Hardware commands found enabled in CEL.
+- * @pmem_range: Persistent memory capacity information.
+- * @ram_range: Volatile memory capacity information.
+- */
+-struct cxl_mem {
+-	struct pci_dev *pdev;
+-	struct cxl_memdev *cxlmd;
+-
+-	struct cxl_regs regs;
+-
+-	size_t payload_size;
+-	size_t lsa_size;
+-	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
+-	char firmware_version[0x10];
+-	unsigned long *enabled_cmds;
+-
+-	struct range pmem_range;
+-	struct range ram_range;
+-};
+-#endif /* __CXL_MEM_H__ */
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 145ad4bc305fc..e809596049b66 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -12,9 +12,9 @@
+ #include <linux/pci.h>
+ #include <linux/io.h>
+ #include <linux/io-64-nonatomic-lo-hi.h>
++#include "cxlmem.h"
+ #include "pci.h"
+ #include "cxl.h"
+-#include "mem.h"
+ 
+ /**
+  * DOC: cxl pci
+@@ -806,13 +806,30 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
+-static const struct file_operations cxl_memdev_fops = {
+-	.owner = THIS_MODULE,
+-	.unlocked_ioctl = cxl_memdev_ioctl,
+-	.open = cxl_memdev_open,
+-	.release = cxl_memdev_release_file,
+-	.compat_ioctl = compat_ptr_ioctl,
+-	.llseek = noop_llseek,
++static struct cxl_memdev *to_cxl_memdev(struct device *dev)
++{
++	return container_of(dev, struct cxl_memdev, dev);
++}
++
++static void cxl_memdev_shutdown(struct device *dev)
++{
++	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
++
++	down_write(&cxl_memdev_rwsem);
++	cxlmd->cxlm = NULL;
++	up_write(&cxl_memdev_rwsem);
++}
++
++static const struct cdevm_file_operations cxl_memdev_fops = {
++	.fops = {
++		.owner = THIS_MODULE,
++		.unlocked_ioctl = cxl_memdev_ioctl,
++		.open = cxl_memdev_open,
++		.release = cxl_memdev_release_file,
++		.compat_ioctl = compat_ptr_ioctl,
++		.llseek = noop_llseek,
++	},
++	.shutdown = cxl_memdev_shutdown,
+ };
+ 
+ static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode)
+@@ -1161,11 +1178,6 @@ free_maps:
+ 	return ret;
+ }
+ 
+-static struct cxl_memdev *to_cxl_memdev(struct device *dev)
+-{
+-	return container_of(dev, struct cxl_memdev, dev);
+-}
+-
+ static void cxl_memdev_release(struct device *dev)
+ {
+ 	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+@@ -1281,24 +1293,22 @@ static const struct device_type cxl_memdev_type = {
+ 	.groups = cxl_memdev_attribute_groups,
+ };
+ 
+-static void cxl_memdev_shutdown(struct cxl_memdev *cxlmd)
+-{
+-	down_write(&cxl_memdev_rwsem);
+-	cxlmd->cxlm = NULL;
+-	up_write(&cxl_memdev_rwsem);
+-}
+-
+ static void cxl_memdev_unregister(void *_cxlmd)
+ {
+ 	struct cxl_memdev *cxlmd = _cxlmd;
+ 	struct device *dev = &cxlmd->dev;
++	struct cdev *cdev = &cxlmd->cdev;
++	const struct cdevm_file_operations *cdevm_fops;
++
++	cdevm_fops = container_of(cdev->ops, typeof(*cdevm_fops), fops);
++	cdevm_fops->shutdown(dev);
+ 
+ 	cdev_device_del(&cxlmd->cdev, dev);
+-	cxl_memdev_shutdown(cxlmd);
+ 	put_device(dev);
+ }
+ 
+-static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
++static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm,
++					   const struct file_operations *fops)
+ {
+ 	struct pci_dev *pdev = cxlm->pdev;
+ 	struct cxl_memdev *cxlmd;
+@@ -1324,7 +1334,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm)
+ 	device_set_pm_not_required(dev);
+ 
+ 	cdev = &cxlmd->cdev;
+-	cdev_init(cdev, &cxl_memdev_fops);
++	cdev_init(cdev, fops);
+ 	return cxlmd;
+ 
+ err:
+@@ -1332,15 +1342,16 @@ err:
+ 	return ERR_PTR(rc);
+ }
+ 
+-static struct cxl_memdev *devm_cxl_add_memdev(struct device *host,
+-					      struct cxl_mem *cxlm)
++static struct cxl_memdev *
++devm_cxl_add_memdev(struct device *host, struct cxl_mem *cxlm,
++		    const struct cdevm_file_operations *cdevm_fops)
+ {
+ 	struct cxl_memdev *cxlmd;
+ 	struct device *dev;
+ 	struct cdev *cdev;
+ 	int rc;
+ 
+-	cxlmd = cxl_memdev_alloc(cxlm);
++	cxlmd = cxl_memdev_alloc(cxlm, &cdevm_fops->fops);
+ 	if (IS_ERR(cxlmd))
+ 		return cxlmd;
+ 
+@@ -1370,7 +1381,7 @@ err:
+ 	 * The cdev was briefly live, shutdown any ioctl operations that
+ 	 * saw that state.
+ 	 */
+-	cxl_memdev_shutdown(cxlmd);
++	cdevm_fops->shutdown(dev);
+ 	put_device(dev);
+ 	return ERR_PTR(rc);
+ }
+@@ -1611,7 +1622,7 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (rc)
+ 		return rc;
+ 
+-	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm);
++	cxlmd = devm_cxl_add_memdev(&pdev->dev, cxlm, &cxl_memdev_fops);
+ 	if (IS_ERR(cxlmd))
+ 		return PTR_ERR(cxlmd);
+ 
+diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
+index 0088e41dd2f32..9652c3ee41e7f 100644
+--- a/drivers/cxl/pmem.c
++++ b/drivers/cxl/pmem.c
+@@ -6,7 +6,7 @@
+ #include <linux/ndctl.h>
+ #include <linux/async.h>
+ #include <linux/slab.h>
+-#include "mem.h"
++#include "cxlmem.h"
+ #include "cxl.h"
+ 
+ /*
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index 4e16c71c24b71..6eb4d13f426ee 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -42,6 +42,7 @@ config UDMABUF
+ config DMABUF_MOVE_NOTIFY
+ 	bool "Move notify between drivers (EXPERIMENTAL)"
+ 	default n
++	depends on DMA_SHARED_BUFFER
+ 	help
+ 	  Don't pin buffers if the dynamic DMA-buf interface is available on
+ 	  both the exporter as well as the importer. This fixes a security
+@@ -52,6 +53,7 @@ config DMABUF_MOVE_NOTIFY
+ 
+ config DMABUF_DEBUG
+ 	bool "DMA-BUF debug checks"
++	depends on DMA_SHARED_BUFFER
+ 	default y if DMA_API_DEBUG
+ 	help
+ 	  This option enables additional checks for DMA-BUF importers and
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 39b5b46e880f2..4f70cf57471a9 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -279,7 +279,7 @@ config INTEL_IDMA64
+ 
+ config INTEL_IDXD
+ 	tristate "Intel Data Accelerators support"
+-	depends on PCI && X86_64
++	depends on PCI && X86_64 && !UML
+ 	depends on PCI_MSI
+ 	depends on SBITMAP
+ 	select DMA_ENGINE
+@@ -315,7 +315,7 @@ config INTEL_IDXD_PERFMON
+ 
+ config INTEL_IOATDMA
+ 	tristate "Intel I/OAT DMA support"
+-	depends on PCI && X86_64
++	depends on PCI && X86_64 && !UML
+ 	select DMA_ENGINE
+ 	select DMA_ENGINE_RAID
+ 	select DCA
+diff --git a/drivers/dma/acpi-dma.c b/drivers/dma/acpi-dma.c
+index 235f1396f9686..52768dc8ce124 100644
+--- a/drivers/dma/acpi-dma.c
++++ b/drivers/dma/acpi-dma.c
+@@ -70,10 +70,14 @@ static int acpi_dma_parse_resource_group(const struct acpi_csrt_group *grp,
+ 
+ 	si = (const struct acpi_csrt_shared_info *)&grp[1];
+ 
+-	/* Match device by MMIO and IRQ */
++	/* Match device by MMIO */
+ 	if (si->mmio_base_low != lower_32_bits(mem) ||
+-	    si->mmio_base_high != upper_32_bits(mem) ||
+-	    si->gsi_interrupt != irq)
++	    si->mmio_base_high != upper_32_bits(mem))
++		return 0;
++
++	/* Match device by Linux vIRQ */
++	ret = acpi_register_gsi(NULL, si->gsi_interrupt, si->interrupt_mode, si->interrupt_polarity);
++	if (ret != irq)
+ 		return 0;
+ 
+ 	dev_dbg(&adev->dev, "matches with %.4s%04X (rev %u)\n",
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 420b93fe5febc..9c6760ae5aef2 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -15,6 +15,8 @@
+ 
+ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ 			  u32 *status);
++static void idxd_device_wqs_clear_state(struct idxd_device *idxd);
++static void idxd_wq_disable_cleanup(struct idxd_wq *wq);
+ 
+ /* Interrupt control bits */
+ void idxd_mask_msix_vector(struct idxd_device *idxd, int vec_id)
+@@ -234,7 +236,7 @@ int idxd_wq_enable(struct idxd_wq *wq)
+ 	return 0;
+ }
+ 
+-int idxd_wq_disable(struct idxd_wq *wq)
++int idxd_wq_disable(struct idxd_wq *wq, bool reset_config)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+ 	struct device *dev = &idxd->pdev->dev;
+@@ -255,6 +257,8 @@ int idxd_wq_disable(struct idxd_wq *wq)
+ 		return -ENXIO;
+ 	}
+ 
++	if (reset_config)
++		idxd_wq_disable_cleanup(wq);
+ 	wq->state = IDXD_WQ_DISABLED;
+ 	dev_dbg(dev, "WQ %d disabled\n", wq->id);
+ 	return 0;
+@@ -289,6 +293,7 @@ void idxd_wq_reset(struct idxd_wq *wq)
+ 
+ 	operand = BIT(wq->id % 16) | ((wq->id / 16) << 16);
+ 	idxd_cmd_exec(idxd, IDXD_CMD_RESET_WQ, operand, NULL);
++	idxd_wq_disable_cleanup(wq);
+ 	wq->state = IDXD_WQ_DISABLED;
+ }
+ 
+@@ -337,7 +342,7 @@ int idxd_wq_set_pasid(struct idxd_wq *wq, int pasid)
+ 	unsigned int offset;
+ 	unsigned long flags;
+ 
+-	rc = idxd_wq_disable(wq);
++	rc = idxd_wq_disable(wq, false);
+ 	if (rc < 0)
+ 		return rc;
+ 
+@@ -364,7 +369,7 @@ int idxd_wq_disable_pasid(struct idxd_wq *wq)
+ 	unsigned int offset;
+ 	unsigned long flags;
+ 
+-	rc = idxd_wq_disable(wq);
++	rc = idxd_wq_disable(wq, false);
+ 	if (rc < 0)
+ 		return rc;
+ 
+@@ -383,11 +388,11 @@ int idxd_wq_disable_pasid(struct idxd_wq *wq)
+ 	return 0;
+ }
+ 
+-void idxd_wq_disable_cleanup(struct idxd_wq *wq)
++static void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
+ 
+-	lockdep_assert_held(&idxd->dev_lock);
++	lockdep_assert_held(&wq->wq_lock);
+ 	memset(wq->wqcfg, 0, idxd->wqcfg_size);
+ 	wq->type = IDXD_WQT_NONE;
+ 	wq->size = 0;
+@@ -396,6 +401,7 @@ void idxd_wq_disable_cleanup(struct idxd_wq *wq)
+ 	wq->priority = 0;
+ 	wq->ats_dis = 0;
+ 	clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
++	clear_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
+ 	memset(wq->name, 0, WQ_NAME_SIZE);
+ }
+ 
+@@ -481,6 +487,7 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ 	union idxd_command_reg cmd;
+ 	DECLARE_COMPLETION_ONSTACK(done);
+ 	unsigned long flags;
++	u32 stat;
+ 
+ 	if (idxd_device_is_halted(idxd)) {
+ 		dev_warn(&idxd->pdev->dev, "Device is HALTED!\n");
+@@ -513,11 +520,11 @@ static void idxd_cmd_exec(struct idxd_device *idxd, int cmd_code, u32 operand,
+ 	 */
+ 	spin_unlock_irqrestore(&idxd->cmd_lock, flags);
+ 	wait_for_completion(&done);
++	stat = ioread32(idxd->reg_base + IDXD_CMDSTS_OFFSET);
+ 	spin_lock_irqsave(&idxd->cmd_lock, flags);
+-	if (status) {
+-		*status = ioread32(idxd->reg_base + IDXD_CMDSTS_OFFSET);
+-		idxd->cmd_status = *status & GENMASK(7, 0);
+-	}
++	if (status)
++		*status = stat;
++	idxd->cmd_status = stat & GENMASK(7, 0);
+ 
+ 	__clear_bit(IDXD_FLAG_CMD_RUNNING, &idxd->flags);
+ 	/* Wake up other pending commands */
+@@ -548,22 +555,6 @@ int idxd_device_enable(struct idxd_device *idxd)
+ 	return 0;
+ }
+ 
+-void idxd_device_wqs_clear_state(struct idxd_device *idxd)
+-{
+-	int i;
+-
+-	lockdep_assert_held(&idxd->dev_lock);
+-
+-	for (i = 0; i < idxd->max_wqs; i++) {
+-		struct idxd_wq *wq = idxd->wqs[i];
+-
+-		if (wq->state == IDXD_WQ_ENABLED) {
+-			idxd_wq_disable_cleanup(wq);
+-			wq->state = IDXD_WQ_DISABLED;
+-		}
+-	}
+-}
+-
+ int idxd_device_disable(struct idxd_device *idxd)
+ {
+ 	struct device *dev = &idxd->pdev->dev;
+@@ -585,7 +576,7 @@ int idxd_device_disable(struct idxd_device *idxd)
+ 	}
+ 
+ 	spin_lock_irqsave(&idxd->dev_lock, flags);
+-	idxd_device_wqs_clear_state(idxd);
++	idxd_device_clear_state(idxd);
+ 	idxd->state = IDXD_DEV_CONF_READY;
+ 	spin_unlock_irqrestore(&idxd->dev_lock, flags);
+ 	return 0;
+@@ -597,7 +588,7 @@ void idxd_device_reset(struct idxd_device *idxd)
+ 
+ 	idxd_cmd_exec(idxd, IDXD_CMD_RESET_DEVICE, 0, NULL);
+ 	spin_lock_irqsave(&idxd->dev_lock, flags);
+-	idxd_device_wqs_clear_state(idxd);
++	idxd_device_clear_state(idxd);
+ 	idxd->state = IDXD_DEV_CONF_READY;
+ 	spin_unlock_irqrestore(&idxd->dev_lock, flags);
+ }
+@@ -685,6 +676,59 @@ int idxd_device_release_int_handle(struct idxd_device *idxd, int handle,
+ }
+ 
+ /* Device configuration bits */
++static void idxd_engines_clear_state(struct idxd_device *idxd)
++{
++	struct idxd_engine *engine;
++	int i;
++
++	lockdep_assert_held(&idxd->dev_lock);
++	for (i = 0; i < idxd->max_engines; i++) {
++		engine = idxd->engines[i];
++		engine->group = NULL;
++	}
++}
++
++static void idxd_groups_clear_state(struct idxd_device *idxd)
++{
++	struct idxd_group *group;
++	int i;
++
++	lockdep_assert_held(&idxd->dev_lock);
++	for (i = 0; i < idxd->max_groups; i++) {
++		group = idxd->groups[i];
++		memset(&group->grpcfg, 0, sizeof(group->grpcfg));
++		group->num_engines = 0;
++		group->num_wqs = 0;
++		group->use_token_limit = false;
++		group->tokens_allowed = 0;
++		group->tokens_reserved = 0;
++		group->tc_a = -1;
++		group->tc_b = -1;
++	}
++}
++
++static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
++{
++	int i;
++
++	lockdep_assert_held(&idxd->dev_lock);
++	for (i = 0; i < idxd->max_wqs; i++) {
++		struct idxd_wq *wq = idxd->wqs[i];
++
++		if (wq->state == IDXD_WQ_ENABLED) {
++			idxd_wq_disable_cleanup(wq);
++			wq->state = IDXD_WQ_DISABLED;
++		}
++	}
++}
++
++void idxd_device_clear_state(struct idxd_device *idxd)
++{
++	idxd_groups_clear_state(idxd);
++	idxd_engines_clear_state(idxd);
++	idxd_device_wqs_clear_state(idxd);
++}
++
+ void idxd_msix_perm_setup(struct idxd_device *idxd)
+ {
+ 	union msix_perm mperm;
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index fc708be7ad9a2..0f27374eae4b3 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -428,9 +428,8 @@ int idxd_device_init_reset(struct idxd_device *idxd);
+ int idxd_device_enable(struct idxd_device *idxd);
+ int idxd_device_disable(struct idxd_device *idxd);
+ void idxd_device_reset(struct idxd_device *idxd);
+-void idxd_device_cleanup(struct idxd_device *idxd);
++void idxd_device_clear_state(struct idxd_device *idxd);
+ int idxd_device_config(struct idxd_device *idxd);
+-void idxd_device_wqs_clear_state(struct idxd_device *idxd);
+ void idxd_device_drain_pasid(struct idxd_device *idxd, int pasid);
+ int idxd_device_load_config(struct idxd_device *idxd);
+ int idxd_device_request_int_handle(struct idxd_device *idxd, int idx, int *handle,
+@@ -443,12 +442,11 @@ void idxd_wqs_unmap_portal(struct idxd_device *idxd);
+ int idxd_wq_alloc_resources(struct idxd_wq *wq);
+ void idxd_wq_free_resources(struct idxd_wq *wq);
+ int idxd_wq_enable(struct idxd_wq *wq);
+-int idxd_wq_disable(struct idxd_wq *wq);
++int idxd_wq_disable(struct idxd_wq *wq, bool reset_config);
+ void idxd_wq_drain(struct idxd_wq *wq);
+ void idxd_wq_reset(struct idxd_wq *wq);
+ int idxd_wq_map_portal(struct idxd_wq *wq);
+ void idxd_wq_unmap_portal(struct idxd_wq *wq);
+-void idxd_wq_disable_cleanup(struct idxd_wq *wq);
+ int idxd_wq_set_pasid(struct idxd_wq *wq, int pasid);
+ int idxd_wq_disable_pasid(struct idxd_wq *wq);
+ void idxd_wq_quiesce(struct idxd_wq *wq);
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 4e3a7198c0caf..ba839d3569cdf 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -59,7 +59,7 @@ static void idxd_device_reinit(struct work_struct *work)
+ 	return;
+ 
+  out:
+-	idxd_device_wqs_clear_state(idxd);
++	idxd_device_clear_state(idxd);
+ }
+ 
+ static void idxd_device_fault_work(struct work_struct *work)
+@@ -192,7 +192,7 @@ static int process_misc_interrupts(struct idxd_device *idxd, u32 cause)
+ 			spin_lock_bh(&idxd->dev_lock);
+ 			idxd_wqs_quiesce(idxd);
+ 			idxd_wqs_unmap_portal(idxd);
+-			idxd_device_wqs_clear_state(idxd);
++			idxd_device_clear_state(idxd);
+ 			dev_err(&idxd->pdev->dev,
+ 				"idxd halted, need %s.\n",
+ 				gensts.reset_type == IDXD_DEVICE_RESET_FLR ?
+@@ -269,7 +269,11 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
+ 		u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
+ 
+ 		if (status) {
+-			if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
++			/*
++			 * Check against the original status as ABORT is software defined
++			 * and 0xff, which DSA_COMP_STATUS_MASK can mask out.
++			 */
++			if (unlikely(desc->completion->status == IDXD_COMP_DESC_ABORT)) {
+ 				complete_desc(desc, IDXD_COMPLETE_ABORT);
+ 				(*processed)++;
+ 				continue;
+@@ -333,7 +337,11 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
+ 	list_for_each_entry(desc, &flist, list) {
+ 		u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
+ 
+-		if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
++		/*
++		 * Check against the original status as ABORT is software defined
++		 * and 0xff, which DSA_COMP_STATUS_MASK can mask out.
++		 */
++		if (unlikely(desc->completion->status == IDXD_COMP_DESC_ABORT)) {
+ 			complete_desc(desc, IDXD_COMPLETE_ABORT);
+ 			continue;
+ 		}
+diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
+index 36c9c1a89b7e7..196d6cf119656 100644
+--- a/drivers/dma/idxd/submit.c
++++ b/drivers/dma/idxd/submit.c
+@@ -67,7 +67,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
+ 		if (signal_pending_state(TASK_INTERRUPTIBLE, current))
+ 			break;
+ 		idx = sbitmap_queue_get(sbq, &cpu);
+-		if (idx > 0)
++		if (idx >= 0)
+ 			break;
+ 		schedule();
+ 	}
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index bb4df63906a72..528cde54724b4 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -129,7 +129,7 @@ static int enable_wq(struct idxd_wq *wq)
+ 	rc = idxd_wq_map_portal(wq);
+ 	if (rc < 0) {
+ 		dev_warn(dev, "wq portal mapping failed: %d\n", rc);
+-		rc = idxd_wq_disable(wq);
++		rc = idxd_wq_disable(wq, false);
+ 		if (rc < 0)
+ 			dev_warn(dev, "IDXD wq disable failed\n");
+ 		mutex_unlock(&wq->wq_lock);
+@@ -262,8 +262,6 @@ static void disable_wq(struct idxd_wq *wq)
+ 
+ static int idxd_config_bus_remove(struct device *dev)
+ {
+-	int rc;
+-
+ 	dev_dbg(dev, "%s called for %s\n", __func__, dev_name(dev));
+ 
+ 	/* disable workqueue here */
+@@ -288,22 +286,12 @@ static int idxd_config_bus_remove(struct device *dev)
+ 		}
+ 
+ 		idxd_unregister_dma_device(idxd);
+-		rc = idxd_device_disable(idxd);
+-		if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) {
+-			for (i = 0; i < idxd->max_wqs; i++) {
+-				struct idxd_wq *wq = idxd->wqs[i];
+-
+-				mutex_lock(&wq->wq_lock);
+-				idxd_wq_disable_cleanup(wq);
+-				mutex_unlock(&wq->wq_lock);
+-			}
+-		}
++		idxd_device_disable(idxd);
++		if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++			idxd_device_reset(idxd);
+ 		module_put(THIS_MODULE);
+-		if (rc < 0)
+-			dev_warn(dev, "Device disable failed\n");
+-		else
+-			dev_info(dev, "Device %s disabled\n", dev_name(dev));
+ 
++		dev_info(dev, "Device %s disabled\n", dev_name(dev));
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 0ef5ca81ba4d0..4357d2395e6b7 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -1265,6 +1265,7 @@ static const struct of_device_id sprd_dma_match[] = {
+ 	{ .compatible = "sprd,sc9860-dma", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, sprd_dma_match);
+ 
+ static int __maybe_unused sprd_dma_runtime_suspend(struct device *dev)
+ {
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 4b9530a7bf652..434b1ff22e318 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -3077,7 +3077,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
+ 		xdev->ext_addr = false;
+ 
+ 	/* Set the dma mask bits */
+-	dma_set_mask(xdev->dev, DMA_BIT_MASK(addr_width));
++	dma_set_mask_and_coherent(xdev->dev, DMA_BIT_MASK(addr_width));
+ 
+ 	/* Initialize the DMA engine */
+ 	xdev->common.dev = &pdev->dev;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+index 8f53837d4d3ee..97178b307ed6f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+@@ -468,14 +468,18 @@ bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *ade
+ 	return (fw_cap & ATOM_FIRMWARE_CAP_DYNAMIC_BOOT_CFG_ENABLE) ? true : false;
+ }
+ 
+-/*
+- * Helper function to query RAS EEPROM address
+- *
+- * @adev: amdgpu_device pointer
++/**
++ * amdgpu_atomfirmware_ras_rom_addr -- Get the RAS EEPROM addr from VBIOS
++ * adev: amdgpu_device pointer
++ * i2c_address: pointer to u8; if not NULL, will contain
++ *    the RAS EEPROM address if the function returns true
+  *
+- * Return true if vbios supports ras rom address reporting
++ * Return true if VBIOS supports RAS EEPROM address reporting,
++ * else return false. If true and @i2c_address is not NULL,
++ * will contain the RAS ROM address.
+  */
+-bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_address)
++bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev,
++				      u8 *i2c_address)
+ {
+ 	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+ 	int index;
+@@ -483,27 +487,39 @@ bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_a
+ 	union firmware_info *firmware_info;
+ 	u8 frev, crev;
+ 
+-	if (i2c_address == NULL)
+-		return false;
+-
+-	*i2c_address = 0;
+-
+ 	index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+-			firmwareinfo);
++					    firmwareinfo);
+ 
+ 	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context,
+-				index, &size, &frev, &crev, &data_offset)) {
++					  index, &size, &frev, &crev,
++					  &data_offset)) {
+ 		/* support firmware_info 3.4 + */
+ 		if ((frev == 3 && crev >=4) || (frev > 3)) {
+ 			firmware_info = (union firmware_info *)
+ 				(mode_info->atom_context->bios + data_offset);
+-			*i2c_address = firmware_info->v34.ras_rom_i2c_slave_addr;
++			/* The ras_rom_i2c_slave_addr should ideally
++			 * be a 19-bit EEPROM address, which would be
++			 * used as is by the driver; see top of
++			 * amdgpu_eeprom.c.
++			 *
++			 * When this is the case, 0 is of course a
++			 * valid RAS EEPROM address, in which case,
++			 * we'll drop the first "if (firm...)" and only
++			 * leave the check for the pointer.
++			 *
++			 * The reason this works right now is because
++			 * ras_rom_i2c_slave_addr contains the EEPROM
++			 * device type qualifier 1010b in the top 4
++			 * bits.
++			 */
++			if (firmware_info->v34.ras_rom_i2c_slave_addr) {
++				if (i2c_address)
++					*i2c_address = firmware_info->v34.ras_rom_i2c_slave_addr;
++				return true;
++			}
+ 		}
+ 	}
+ 
+-	if (*i2c_address != 0)
+-		return true;
+-
+ 	return false;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
+index d94c5419ec25c..5a6857c44bb66 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.c
+@@ -59,6 +59,7 @@ void amdgpu_show_fdinfo(struct seq_file *m, struct file *f)
+ 	uint64_t vram_mem = 0, gtt_mem = 0, cpu_mem = 0;
+ 	struct drm_file *file = f->private_data;
+ 	struct amdgpu_device *adev = drm_to_adev(file->minor->dev);
++	struct amdgpu_bo *root;
+ 	int ret;
+ 
+ 	ret = amdgpu_file_to_fpriv(f, &fpriv);
+@@ -69,13 +70,19 @@ void amdgpu_show_fdinfo(struct seq_file *m, struct file *f)
+ 	dev = PCI_SLOT(adev->pdev->devfn);
+ 	fn = PCI_FUNC(adev->pdev->devfn);
+ 
+-	ret = amdgpu_bo_reserve(fpriv->vm.root.bo, false);
++	root = amdgpu_bo_ref(fpriv->vm.root.bo);
++	if (!root)
++		return;
++
++	ret = amdgpu_bo_reserve(root, false);
+ 	if (ret) {
+ 		DRM_ERROR("Fail to reserve bo\n");
+ 		return;
+ 	}
+ 	amdgpu_vm_get_memory(&fpriv->vm, &vram_mem, &gtt_mem, &cpu_mem);
+-	amdgpu_bo_unreserve(fpriv->vm.root.bo);
++	amdgpu_bo_unreserve(root);
++	amdgpu_bo_unref(&root);
++
+ 	seq_printf(m, "pdev:\t%04x:%02x:%02x.%d\npasid:\t%u\n", domain, bus,
+ 			dev, fn, fpriv->vm.pasid);
+ 	seq_printf(m, "vram mem:\t%llu kB\n", vram_mem/1024UL);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+index dc7823d23ba89..dd38796ba30ad 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+@@ -510,8 +510,12 @@ static struct stream_encoder *dcn303_stream_encoder_create(enum engine_id eng_id
+ 	vpg = dcn303_vpg_create(ctx, vpg_inst);
+ 	afmt = dcn303_afmt_create(ctx, afmt_inst);
+ 
+-	if (!enc1 || !vpg || !afmt)
++	if (!enc1 || !vpg || !afmt) {
++		kfree(enc1);
++		kfree(vpg);
++		kfree(afmt);
+ 		return NULL;
++	}
+ 
+ 	dcn30_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios, eng_id, vpg, afmt, &stream_enc_regs[eng_id],
+ 			&se_shift, &se_mask);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 0541bfc81c1b4..1d76cf7cd85d5 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -27,6 +27,9 @@
+ #include <linux/pci.h>
+ #include <linux/slab.h>
+ #include <asm/div64.h>
++#if IS_ENABLED(CONFIG_X86_64)
++#include <asm/intel-family.h>
++#endif
+ #include <drm/amdgpu_drm.h>
+ #include "ppatomctrl.h"
+ #include "atombios.h"
+@@ -1733,6 +1736,17 @@ static int smu7_disable_dpm_tasks(struct pp_hwmgr *hwmgr)
+ 	return result;
+ }
+ 
++static bool intel_core_rkl_chk(void)
++{
++#if IS_ENABLED(CONFIG_X86_64)
++	struct cpuinfo_x86 *c = &cpu_data(0);
++
++	return (c->x86 == 6 && c->x86_model == INTEL_FAM6_ROCKETLAKE);
++#else
++	return false;
++#endif
++}
++
+ static void smu7_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ {
+ 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+@@ -1758,7 +1772,8 @@ static void smu7_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 
+ 	data->mclk_dpm_key_disabled = hwmgr->feature_mask & PP_MCLK_DPM_MASK ? false : true;
+ 	data->sclk_dpm_key_disabled = hwmgr->feature_mask & PP_SCLK_DPM_MASK ? false : true;
+-	data->pcie_dpm_key_disabled = hwmgr->feature_mask & PP_PCIE_DPM_MASK ? false : true;
++	data->pcie_dpm_key_disabled =
++		intel_core_rkl_chk() || !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
+ 	/* need to set voltage control types before EVV patching */
+ 	data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
+ 	data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
+index b0ece71aefdee..ce774579c89d1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.c
+@@ -57,7 +57,7 @@ nvkm_control_mthd_pstate_info(struct nvkm_control *ctrl, void *data, u32 size)
+ 		args->v0.count = 0;
+ 		args->v0.ustate_ac = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
+ 		args->v0.ustate_dc = NVIF_CONTROL_PSTATE_INFO_V0_USTATE_DISABLE;
+-		args->v0.pwrsrc = -ENOSYS;
++		args->v0.pwrsrc = -ENODEV;
+ 		args->v0.pstate = NVIF_CONTROL_PSTATE_INFO_V0_PSTATE_UNKNOWN;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 32202385073a2..b47a5053eb854 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -1157,9 +1157,9 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
+ 	}
+ 
+ 	if (bo->deleted) {
+-		ttm_bo_cleanup_refs(bo, false, false, locked);
++		ret = ttm_bo_cleanup_refs(bo, false, false, locked);
+ 		ttm_bo_put(bo);
+-		return 0;
++		return ret == -EBUSY ? -ENOSPC : ret;
+ 	}
+ 
+ 	ttm_bo_del_from_lru(bo);
+@@ -1213,7 +1213,7 @@ out:
+ 	if (locked)
+ 		dma_resv_unlock(bo->base.resv);
+ 	ttm_bo_put(bo);
+-	return ret;
++	return ret == -EBUSY ? -ENOSPC : ret;
+ }
+ 
+ void ttm_bo_tt_destroy(struct ttm_buffer_object *bo)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index bf4d9f6658ff9..c320891c8763c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -2004,6 +2004,7 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+ 	caps->gid_table_len[0] = HNS_ROCE_V2_GID_INDEX_NUM;
+ 
+ 	if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) {
++		caps->flags |= HNS_ROCE_CAP_FLAG_STASH;
+ 		caps->max_sq_inline = HNS_ROCE_V3_MAX_SQ_INLINE;
+ 	} else {
+ 		caps->max_sq_inline = HNS_ROCE_V2_MAX_SQ_INLINE;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 19713cdd7b789..061dbee55cac1 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -995,7 +995,7 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd,
+ static void *mlx5_ib_alloc_xlt(size_t *nents, size_t ent_size, gfp_t gfp_mask)
+ {
+ 	const size_t xlt_chunk_align =
+-		MLX5_UMR_MTT_ALIGNMENT / sizeof(ent_size);
++		MLX5_UMR_MTT_ALIGNMENT / ent_size;
+ 	size_t size;
+ 	void *res = NULL;
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 46280e6e1535b..5c21f1ee50983 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -298,6 +298,22 @@ int amd_iommu_get_num_iommus(void)
+ 	return amd_iommus_present;
+ }
+ 
++#ifdef CONFIG_IRQ_REMAP
++static bool check_feature_on_all_iommus(u64 mask)
++{
++	bool ret = false;
++	struct amd_iommu *iommu;
++
++	for_each_iommu(iommu) {
++		ret = iommu_feature(iommu, mask);
++		if (!ret)
++			return false;
++	}
++
++	return true;
++}
++#endif
++
+ /*
+  * For IVHD type 0x11/0x40, EFR is also available via IVHD.
+  * Default to IVHD EFR since it is available sooner
+@@ -854,13 +870,6 @@ static int iommu_init_ga(struct amd_iommu *iommu)
+ 	int ret = 0;
+ 
+ #ifdef CONFIG_IRQ_REMAP
+-	/* Note: We have already checked GASup from IVRS table.
+-	 *       Now, we need to make sure that GAMSup is set.
+-	 */
+-	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) &&
+-	    !iommu_feature(iommu, FEATURE_GAM_VAPIC))
+-		amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;
+-
+ 	ret = iommu_init_ga_log(iommu);
+ #endif /* CONFIG_IRQ_REMAP */
+ 
+@@ -2477,6 +2486,14 @@ static void early_enable_iommus(void)
+ 	}
+ 
+ #ifdef CONFIG_IRQ_REMAP
++	/*
++	 * Note: We have already checked GASup from IVRS table.
++	 *       Now, we need to make sure that GAMSup is set.
++	 */
++	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) &&
++	    !check_feature_on_all_iommus(FEATURE_GAM_VAPIC))
++		amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;
++
+ 	if (AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir))
+ 		amd_iommu_irq_ops.capability |= (1 << IRQ_POSTING_CAP);
+ #endif
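The point of the rework above is timing: a feature bit that gates a global mode has to be tested after every IOMMU has been probed, not per-unit during init, and guest vAPIC mode only survives if all units advertise it. A small sketch of that all-or-nothing rule, with an array standing in for the for_each_iommu() list (names and masks are made up):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool feature_on_all(const uint64_t *efr, size_t n, uint64_t mask)
  {
  	for (size_t i = 0; i < n; i++)
  		if (!(efr[i] & mask))
  			return false;	/* one unit lacking it vetoes the mode */
  	return true;
  }

  int main(void)
  {
  	uint64_t efr[] = { 0x3, 0x3, 0x1 };	/* third unit lacks bit 1 */

  	printf("vAPIC mode: %s\n",
  	       feature_on_all(efr, 3, 0x2) ? "kept" : "fall back to legacy GA");
  	return 0;
  }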
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 4b9b3f35ba0ea..d575082567ca7 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -516,9 +516,6 @@ static void load_pasid(struct mm_struct *mm, u32 pasid)
+ {
+ 	mutex_lock(&mm->context.lock);
+ 
+-	/* Synchronize with READ_ONCE in update_pasid(). */
+-	smp_store_release(&mm->pasid, pasid);
+-
+ 	/* Update PASID MSR on all CPUs running the mm's tasks. */
+ 	on_each_cpu_mask(mm_cpumask(mm), _load_pasid, NULL, true);
+ 
+@@ -796,7 +793,19 @@ prq_retry:
+ 		goto prq_retry;
+ 	}
+ 
++	/*
++	 * A work item in the IO page fault workqueue may try to lock
++	 * pasid_mutex now. Holding pasid_mutex while waiting in
++	 * iopf_queue_flush_dev() for all work items to finish may deadlock.
++	 *
++	 * It's unnecessary to hold pasid_mutex in iopf_queue_flush_dev().
++	 * Unlock it to allow the work items to be handled while waiting
++	 * for them to finish.
++	 */
++	lockdep_assert_held(&pasid_mutex);
++	mutex_unlock(&pasid_mutex);
+ 	iopf_queue_flush_dev(dev);
++	mutex_lock(&pasid_mutex);
+ 
+ 	/*
+ 	 * Perform steps described in VT-d spec CH7.10 to drain page
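The locking pattern in the hunk above is the interesting part: a mutex that the flushed workers themselves may take has to be dropped for the duration of the wait and retaken afterwards, with lockdep_assert_held() documenting the contract the caller must obey. A compilable pthreads sketch of the same shape (the function names are stand-ins, not the kernel API):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t pasid_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* stand-in for iopf_queue_flush_dev(): waits for fault workers,
   * which themselves take pasid_mutex while handling a fault */
  static void flush_pending_faults(void)
  {
  }

  static void drain_faults(void)
  {
  	/* caller holds pasid_mutex here, like the lockdep assert checks */
  	pthread_mutex_unlock(&pasid_mutex);
  	flush_pending_faults();		/* workers can take the mutex now */
  	pthread_mutex_lock(&pasid_mutex);
  }

  int main(void)
  {
  	pthread_mutex_lock(&pasid_mutex);
  	drain_faults();
  	pthread_mutex_unlock(&pasid_mutex);
  	puts("drained without deadlock");
  	return 0;
  }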
+diff --git a/drivers/misc/habanalabs/common/command_buffer.c b/drivers/misc/habanalabs/common/command_buffer.c
+index 719168c980a45..402ac2395fc82 100644
+--- a/drivers/misc/habanalabs/common/command_buffer.c
++++ b/drivers/misc/habanalabs/common/command_buffer.c
+@@ -314,8 +314,6 @@ int hl_cb_create(struct hl_device *hdev, struct hl_cb_mgr *mgr,
+ 
+ 	spin_lock(&mgr->cb_lock);
+ 	rc = idr_alloc(&mgr->cb_handles, cb, 1, 0, GFP_ATOMIC);
+-	if (rc < 0)
+-		rc = idr_alloc(&mgr->cb_handles, cb, 1, 0, GFP_KERNEL);
+ 	spin_unlock(&mgr->cb_lock);
+ 
+ 	if (rc < 0) {
+diff --git a/drivers/misc/habanalabs/common/debugfs.c b/drivers/misc/habanalabs/common/debugfs.c
+index 703d79fb6f3f5..379529bffc700 100644
+--- a/drivers/misc/habanalabs/common/debugfs.c
++++ b/drivers/misc/habanalabs/common/debugfs.c
+@@ -349,7 +349,7 @@ static int mmu_show(struct seq_file *s, void *data)
+ 		return 0;
+ 	}
+ 
+-	phys_addr = hops_info.hop_info[hops_info.used_hops - 1].hop_pte_val;
++	hl_mmu_va_to_pa(ctx, virt_addr, &phys_addr);
+ 
+ 	if (hops_info.scrambled_vaddr &&
+ 		(dev_entry->mmu_addr != hops_info.scrambled_vaddr))
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index ff4cbde289c0b..4e9b677460bad 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -23,6 +23,8 @@ enum hl_device_status hl_device_status(struct hl_device *hdev)
+ 		status = HL_DEVICE_STATUS_NEEDS_RESET;
+ 	else if (hdev->disabled)
+ 		status = HL_DEVICE_STATUS_MALFUNCTION;
++	else if (!hdev->init_done)
++		status = HL_DEVICE_STATUS_IN_DEVICE_CREATION;
+ 	else
+ 		status = HL_DEVICE_STATUS_OPERATIONAL;
+ 
+@@ -44,6 +46,7 @@ bool hl_device_operational(struct hl_device *hdev,
+ 	case HL_DEVICE_STATUS_NEEDS_RESET:
+ 		return false;
+ 	case HL_DEVICE_STATUS_OPERATIONAL:
++	case HL_DEVICE_STATUS_IN_DEVICE_CREATION:
+ 	default:
+ 		return true;
+ 	}
+diff --git a/drivers/misc/habanalabs/common/habanalabs.h b/drivers/misc/habanalabs/common/habanalabs.h
+index 6b3cdd7e068a3..61db72ecec0e0 100644
+--- a/drivers/misc/habanalabs/common/habanalabs.h
++++ b/drivers/misc/habanalabs/common/habanalabs.h
+@@ -1798,7 +1798,7 @@ struct hl_dbg_device_entry {
+ 
+ #define HL_STR_MAX	32
+ 
+-#define HL_DEV_STS_MAX (HL_DEVICE_STATUS_NEEDS_RESET + 1)
++#define HL_DEV_STS_MAX (HL_DEVICE_STATUS_LAST + 1)
+ 
+ /* Theoretical limit only. A single host can only contain up to 4 or 8 PCIe
+  * x16 cards. In extreme cases, there are hosts that can accommodate 16 cards.
+diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
+index 4194cda2d04c3..536451a9a16c9 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
+@@ -318,12 +318,16 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
+ 		hdev->asic_prop.fw_security_enabled = false;
+ 
+ 	/* Assign status description string */
+-	strncpy(hdev->status[HL_DEVICE_STATUS_MALFUNCTION],
+-					"disabled", HL_STR_MAX);
++	strncpy(hdev->status[HL_DEVICE_STATUS_OPERATIONAL],
++					"operational", HL_STR_MAX);
+ 	strncpy(hdev->status[HL_DEVICE_STATUS_IN_RESET],
+ 					"in reset", HL_STR_MAX);
++	strncpy(hdev->status[HL_DEVICE_STATUS_MALFUNCTION],
++					"disabled", HL_STR_MAX);
+ 	strncpy(hdev->status[HL_DEVICE_STATUS_NEEDS_RESET],
+ 					"needs reset", HL_STR_MAX);
++	strncpy(hdev->status[HL_DEVICE_STATUS_IN_DEVICE_CREATION],
++					"in device creation", HL_STR_MAX);
+ 
+ 	hdev->major = hl_major;
+ 	hdev->reset_on_lockup = reset_on_lockup;
+diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
+index af339ce1ab4f2..fcadde594a580 100644
+--- a/drivers/misc/habanalabs/common/memory.c
++++ b/drivers/misc/habanalabs/common/memory.c
+@@ -124,7 +124,7 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args,
+ 
+ 	spin_lock(&vm->idr_lock);
+ 	handle = idr_alloc(&vm->phys_pg_pack_handles, phys_pg_pack, 1, 0,
+-				GFP_KERNEL);
++				GFP_ATOMIC);
+ 	spin_unlock(&vm->idr_lock);
+ 
+ 	if (handle < 0) {
+diff --git a/drivers/misc/habanalabs/common/mmu/mmu_v1.c b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+index c5e93ff325866..0f536f79dd9c9 100644
+--- a/drivers/misc/habanalabs/common/mmu/mmu_v1.c
++++ b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+@@ -470,13 +470,13 @@ static void hl_mmu_v1_fini(struct hl_device *hdev)
+ 	if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.hr.mmu_shadow_hop0)) {
+ 		kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0);
+ 		gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool);
+-	}
+ 
+-	/* Make sure that if we arrive here again without init was called we
+-	 * won't cause kernel panic. This can happen for example if we fail
+-	 * during hard reset code at certain points
+-	 */
+-	hdev->mmu_priv.dr.mmu_shadow_hop0 = NULL;
++		/* Make sure that if we arrive here again without init having
++		 * been called we won't cause a kernel panic. This can happen,
++		 * for example, if we fail at certain points in the hard reset code
++		 */
++		hdev->mmu_priv.dr.mmu_shadow_hop0 = NULL;
++	}
+ }
+ 
+ /**
+diff --git a/drivers/misc/habanalabs/common/sysfs.c b/drivers/misc/habanalabs/common/sysfs.c
+index db72df282ef8d..34f9f2779962a 100644
+--- a/drivers/misc/habanalabs/common/sysfs.c
++++ b/drivers/misc/habanalabs/common/sysfs.c
+@@ -9,8 +9,7 @@
+ 
+ #include <linux/pci.h>
+ 
+-long hl_get_frequency(struct hl_device *hdev, u32 pll_index,
+-								bool curr)
++long hl_get_frequency(struct hl_device *hdev, u32 pll_index, bool curr)
+ {
+ 	struct cpucp_packet pkt;
+ 	u32 used_pll_idx;
+@@ -44,8 +43,7 @@ long hl_get_frequency(struct hl_device *hdev, u32 pll_index,
+ 	return (long) result;
+ }
+ 
+-void hl_set_frequency(struct hl_device *hdev, u32 pll_index,
+-								u64 freq)
++void hl_set_frequency(struct hl_device *hdev, u32 pll_index, u64 freq)
+ {
+ 	struct cpucp_packet pkt;
+ 	u32 used_pll_idx;
+@@ -285,16 +283,12 @@ static ssize_t status_show(struct device *dev, struct device_attribute *attr,
+ 				char *buf)
+ {
+ 	struct hl_device *hdev = dev_get_drvdata(dev);
+-	char *str;
++	char str[HL_STR_MAX];
+ 
+-	if (atomic_read(&hdev->in_reset))
+-		str = "In reset";
+-	else if (hdev->disabled)
+-		str = "Malfunction";
+-	else if (hdev->needs_reset)
+-		str = "Needs Reset";
+-	else
+-		str = "Operational";
++	strscpy(str, hdev->status[hl_device_status(hdev)], HL_STR_MAX);
++
++	/* use uppercase for backward compatibility */
++	str[0] = 'A' + (str[0] - 'a');
+ 
+ 	return sprintf(buf, "%s\n", str);
+ }
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index aa8a0ca5aca24..409f05c962f24 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -7809,6 +7809,12 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ 	u8 cause;
+ 	bool reset_required;
+ 
++	if (event_type >= GAUDI_EVENT_SIZE) {
++		dev_err(hdev->dev, "Event type %u exceeds maximum of %u",
++				event_type, GAUDI_EVENT_SIZE - 1);
++		return;
++	}
++
+ 	gaudi->events_stat[event_type]++;
+ 	gaudi->events_stat_aggregate[event_type]++;
+ 
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index 755e08cf2ecc8..bfb22f96c1a33 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -4797,6 +4797,12 @@ void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
+ 				>> EQ_CTL_EVENT_TYPE_SHIFT);
+ 	struct goya_device *goya = hdev->asic_specific;
+ 
++	if (event_type >= GOYA_ASYNC_EVENT_ID_SIZE) {
++		dev_err(hdev->dev, "Event type %u exceeds maximum of %u",
++				event_type, GOYA_ASYNC_EVENT_ID_SIZE - 1);
++		return;
++	}
++
+ 	goya->events_stat[event_type]++;
+ 	goya->events_stat_aggregate[event_type]++;
+ 
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 2735551271889..fa88bf9cba4d0 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1067,7 +1067,8 @@ static ssize_t nvmet_subsys_attr_serial_show(struct config_item *item,
+ {
+ 	struct nvmet_subsys *subsys = to_subsys(item);
+ 
+-	return snprintf(page, PAGE_SIZE, "%s\n", subsys->serial);
++	return snprintf(page, PAGE_SIZE, "%*s\n",
++			NVMET_SN_MAX_SIZE, subsys->serial);
+ }
+ 
+ static ssize_t
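The one-liner above pads the serial to its full NVMET_SN_MAX_SIZE field instead of printing a variable-length string, which keeps the attribute output a fixed width. A quick illustration of the "%*s" conversion it relies on (the width and serial below are invented):

  #include <stdio.h>

  int main(void)
  {
  	const char *serial = "S3X9NX0K";

  	/* "%*s" reads the minimum field width from the argument list
  	 * and left-pads the string with spaces up to that width */
  	printf("[%*s]\n", 20, serial);	/* 12 spaces, then the serial */
  	return 0;
  }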
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 6c028632f425f..0b9c2fb843e79 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1434,6 +1434,9 @@ static int of_fwnode_add_links(struct fwnode_handle *fwnode)
+ 	struct property *p;
+ 	struct device_node *con_np = to_of_node(fwnode);
+ 
++	if (IS_ENABLED(CONFIG_X86))
++		return 0;
++
+ 	if (!con_np)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 889d7ce282ebb..952a92504df69 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -156,15 +156,6 @@ static inline struct dino_device *DINO_DEV(struct pci_hba_data *hba)
+ 	return container_of(hba, struct dino_device, hba);
+ }
+ 
+-/* Check if PCI device is behind a Card-mode Dino. */
+-static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
+-{
+-	struct dino_device *dino_dev;
+-
+-	dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
+-	return is_card_dino(&dino_dev->hba.dev->id);
+-}
+-
+ /*
+  * Dino Configuration Space Accessor Functions
+  */
+@@ -447,6 +438,15 @@ static void quirk_cirrus_cardbus(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_6832, quirk_cirrus_cardbus );
+ 
+ #ifdef CONFIG_TULIP
++/* Check if PCI device is behind a Card-mode Dino. */
++static int pci_dev_is_behind_card_dino(struct pci_dev *dev)
++{
++	struct dino_device *dino_dev;
++
++	dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge));
++	return is_card_dino(&dino_dev->hba.dev->id);
++}
++
+ static void pci_fixup_tulip(struct pci_dev *dev)
+ {
+ 	if (!pci_dev_is_behind_card_dino(dev))
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index fdbf051586970..0e4a46af82288 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -218,6 +218,8 @@
+ 
+ #define MSI_IRQ_NUM			32
+ 
++#define CFG_RD_CRS_VAL			0xffff0001
++
+ struct advk_pcie {
+ 	struct platform_device *pdev;
+ 	void __iomem *base;
+@@ -587,7 +589,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+-static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
+ {
+ 	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
+@@ -629,9 +631,30 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
+ 		strcomp_status = "UR";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
++		if (allow_crs && val) {
++			/* PCIe r4.0, sec 2.3.2, says:
++			 * If CRS Software Visibility is enabled:
++			 * For a Configuration Read Request that includes both
++			 * bytes of the Vendor ID field of a device Function's
++			 * Configuration Space Header, the Root Complex must
++			 * complete the Request to the host by returning a
++			 * read-data value of 0001h for the Vendor ID field and
++			 * all '1's for any additional bytes included in the
++			 * request.
++			 *
++			 * So CRS in this case is not an error status.
++			 */
++			*val = CFG_RD_CRS_VAL;
++			strcomp_status = NULL;
++			break;
++		}
+ 		/* PCIe r4.0, sec 2.3.2, says:
+ 		 * If CRS Software Visibility is not enabled, the Root Complex
+ 		 * must re-issue the Configuration Request as a new Request.
++		 * If CRS Software Visibility is enabled: For a Configuration
++		 * Write Request or for any other Configuration Read Request,
++		 * the Root Complex must re-issue the Configuration Request as
++		 * a new Request.
+ 		 * A Root Complex implementation may choose to limit the number
+ 		 * of Configuration Request/CRS Completion Status loops before
+ 		 * determining that something is wrong with the target of the
+@@ -700,6 +723,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTCTL: {
+ 		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ 		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
++		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
+@@ -781,6 +805,7 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ {
+ 	struct pci_bridge_emul *bridge = &pcie->bridge;
++	int ret;
+ 
+ 	bridge->conf.vendor =
+ 		cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
+@@ -804,7 +829,15 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	return pci_bridge_emul_init(bridge, 0);
++	/* PCIe config space can be initialized after pci_bridge_emul_init() */
++	ret = pci_bridge_emul_init(bridge, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Indicates support for Completion Retry Status */
++	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
++
++	return 0;
+ }
+ 
+ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+@@ -856,6 +889,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 			     int where, int size, u32 *val)
+ {
+ 	struct advk_pcie *pcie = bus->sysdata;
++	bool allow_crs;
+ 	u32 reg;
+ 	int ret;
+ 
+@@ -868,7 +902,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return pci_bridge_emul_conf_read(&pcie->bridge, where,
+ 						 size, val);
+ 
++	/*
++	 * Completion Retry Status can be returned only when reading all
++	 * 4 bytes of the PCI_VENDOR_ID and PCI_DEVICE_ID registers at once
++	 * and the CRSSVE flag on the Root Bridge is enabled.
++	 */
++	allow_crs = (where == PCI_VENDOR_ID) && (size == 4) &&
++		    (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
++		     PCI_EXP_RTCTL_CRSSVE);
++
+ 	if (advk_pcie_pio_is_running(pcie)) {
++		/*
++		 * If possible, return Completion Retry Status so the caller
++		 * tries to issue the request again instead of failing.
++		 */
++		if (allow_crs) {
++			*val = CFG_RD_CRS_VAL;
++			return PCIBIOS_SUCCESSFUL;
++		}
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+@@ -896,12 +947,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 
+ 	ret = advk_pcie_wait_pio(pcie);
+ 	if (ret < 0) {
++		/*
++		 * If possible, return Completion Retry Status so the caller
++		 * tries to issue the request again instead of failing.
++		 */
++		if (allow_crs) {
++			*val = CFG_RD_CRS_VAL;
++			return PCIBIOS_SUCCESSFUL;
++		}
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+ 
+ 	/* Check PIO status and get the read result */
+-	ret = advk_pcie_check_pio_status(pcie, val);
++	ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
+ 	if (ret < 0) {
+ 		*val = 0xffffffff;
+ 		return PCIBIOS_SET_FAILED;
+@@ -970,7 +1029,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	ret = advk_pcie_check_pio_status(pcie, NULL);
++	ret = advk_pcie_check_pio_status(pcie, false, NULL);
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
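Worth restating what the aardvark driver now does on the read side: when CRS Software Visibility is enabled and the read covers both Vendor ID bytes, a CRS completion is translated into the 0xffff0001 value (Vendor ID 0x0001, all ones elsewhere) instead of an error, so the caller retries. An illustrative model of that decision, with the constant mirroring the patch and the rest a sketch:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define CFG_RD_CRS_VAL	0xffff0001u	/* Vendor ID 0x0001, rest all ones */
  #define PCI_VENDOR_ID	0x00

  static bool crs_visible_read(bool crssve, int where, int size, uint32_t *val)
  {
  	bool allow_crs = crssve && where == PCI_VENDOR_ID && size == 4;

  	if (!allow_crs)
  		return false;		/* caller must retry or fail the read */

  	*val = CFG_RD_CRS_VAL;
  	return true;			/* report success with the CRS value */
  }

  int main(void)
  {
  	uint32_t val;

  	if (crs_visible_read(true, PCI_VENDOR_ID, 4, &val))
  		printf("CRS read completed as 0x%08x\n", val);
  	return 0;
  }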
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index b31883022a8e6..49bbd37ee318a 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -54,7 +54,7 @@ struct pci_bridge_emul_pcie_conf {
+ 	__le16 slotctl;
+ 	__le16 slotsta;
+ 	__le16 rootctl;
+-	__le16 rsvd;
++	__le16 rootcap;
+ 	__le32 rootsta;
+ 	__le32 devcap2;
+ 	__le16 devctl2;
+diff --git a/drivers/platform/chrome/Makefile b/drivers/platform/chrome/Makefile
+index 41baccba033f7..f901d2e43166c 100644
+--- a/drivers/platform/chrome/Makefile
++++ b/drivers/platform/chrome/Makefile
+@@ -20,7 +20,7 @@ obj-$(CONFIG_CROS_EC_CHARDEV)		+= cros_ec_chardev.o
+ obj-$(CONFIG_CROS_EC_LIGHTBAR)		+= cros_ec_lightbar.o
+ obj-$(CONFIG_CROS_EC_VBC)		+= cros_ec_vbc.o
+ obj-$(CONFIG_CROS_EC_DEBUGFS)		+= cros_ec_debugfs.o
+-cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o
++cros-ec-sensorhub-objs			:= cros_ec_sensorhub.o cros_ec_sensorhub_ring.o cros_ec_trace.o
+ obj-$(CONFIG_CROS_EC_SENSORHUB)		+= cros-ec-sensorhub.o
+ obj-$(CONFIG_CROS_EC_SYSFS)		+= cros_ec_sysfs.o
+ obj-$(CONFIG_CROS_USBPD_LOGGER)		+= cros_usbpd_logger.o
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 8921f24e83bac..98e37080f7609 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -17,6 +17,8 @@
+ #include <linux/sort.h>
+ #include <linux/slab.h>
+ 
++#include "cros_ec_trace.h"
++
+ /* Precision of fixed point for the m values from the filter */
+ #define M_PRECISION BIT(23)
+ 
+@@ -291,6 +293,7 @@ cros_ec_sensor_ring_ts_filter_update(struct cros_ec_sensors_ts_filter_state
+ 		state->median_m = 0;
+ 		state->median_error = 0;
+ 	}
++	trace_cros_ec_sensorhub_filter(state, dx, dy);
+ }
+ 
+ /**
+@@ -427,6 +430,11 @@ cros_ec_sensor_ring_process_event(struct cros_ec_sensorhub *sensorhub,
+ 			if (new_timestamp - *current_timestamp > 0)
+ 				*current_timestamp = new_timestamp;
+ 		}
++		trace_cros_ec_sensorhub_timestamp(in->timestamp,
++						  fifo_info->timestamp,
++						  fifo_timestamp,
++						  *current_timestamp,
++						  now);
+ 	}
+ 
+ 	if (in->flags & MOTIONSENSE_SENSOR_FLAG_ODR) {
+@@ -460,6 +468,12 @@ cros_ec_sensor_ring_process_event(struct cros_ec_sensorhub *sensorhub,
+ 
+ 	/* Regular sample */
+ 	out->sensor_id = in->sensor_num;
++	trace_cros_ec_sensorhub_data(in->sensor_num,
++				     fifo_info->timestamp,
++				     fifo_timestamp,
++				     *current_timestamp,
++				     now);
++
+ 	if (*current_timestamp - now > 0) {
+ 		/*
+ 		 * This fix is needed to overcome the timestamp filter putting
+diff --git a/drivers/platform/chrome/cros_ec_trace.h b/drivers/platform/chrome/cros_ec_trace.h
+index f744b21bc655f..7e7cfc98657a4 100644
+--- a/drivers/platform/chrome/cros_ec_trace.h
++++ b/drivers/platform/chrome/cros_ec_trace.h
+@@ -15,6 +15,7 @@
+ #include <linux/types.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
++#include <linux/platform_data/cros_ec_sensorhub.h>
+ 
+ #include <linux/tracepoint.h>
+ 
+@@ -70,6 +71,99 @@ TRACE_EVENT(cros_ec_request_done,
+ 		  __entry->retval)
+ );
+ 
++TRACE_EVENT(cros_ec_sensorhub_timestamp,
++	    TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
++		current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sample_timestamp)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sample_timestamp = ec_sample_timestamp;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sample_timestamp,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_data,
++	    TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++		     s64 current_timestamp, s64 current_time),
++	TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
++	TP_STRUCT__entry(
++		__field(u32, ec_sensor_num)
++		__field(u32, ec_fifo_timestamp)
++		__field(s64, fifo_timestamp)
++		__field(s64, current_timestamp)
++		__field(s64, current_time)
++		__field(s64, delta)
++	),
++	TP_fast_assign(
++		__entry->ec_sensor_num = ec_sensor_num;
++		__entry->ec_fifo_timestamp = ec_fifo_timestamp;
++		__entry->fifo_timestamp = fifo_timestamp;
++		__entry->current_timestamp = current_timestamp;
++		__entry->current_time = current_time;
++		__entry->delta = current_timestamp - current_time;
++	),
++	TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++		  __entry->ec_sensor_num,
++		__entry->ec_fifo_timestamp,
++		__entry->fifo_timestamp,
++		__entry->current_timestamp,
++		__entry->current_time,
++		__entry->delta
++	)
++);
++
++TRACE_EVENT(cros_ec_sensorhub_filter,
++	    TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
++	TP_ARGS(state, dx, dy),
++	TP_STRUCT__entry(
++		__field(s64, dx)
++		__field(s64, dy)
++		__field(s64, median_m)
++		__field(s64, median_error)
++		__field(s64, history_len)
++		__field(s64, x)
++		__field(s64, y)
++	),
++	TP_fast_assign(
++		__entry->dx = dx;
++		__entry->dy = dy;
++		__entry->median_m = state->median_m;
++		__entry->median_error = state->median_error;
++		__entry->history_len = state->history_len;
++		__entry->x = state->x_offset;
++		__entry->y = state->y_offset;
++	),
++	TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
++		  __entry->dx,
++		__entry->dy,
++		__entry->median_m,
++		__entry->median_error,
++		__entry->history_len,
++		__entry->x,
++		__entry->y
++	)
++);
++
+ 
+ #endif /* _CROS_EC_TRACE_H_ */
+ 
+diff --git a/drivers/pwm/pwm-ab8500.c b/drivers/pwm/pwm-ab8500.c
+index e2a26d9da25b3..281f74a1c50bd 100644
+--- a/drivers/pwm/pwm-ab8500.c
++++ b/drivers/pwm/pwm-ab8500.c
+@@ -22,14 +22,21 @@
+ 
+ struct ab8500_pwm_chip {
+ 	struct pwm_chip chip;
++	unsigned int hwid;
+ };
+ 
++static struct ab8500_pwm_chip *ab8500_pwm_from_chip(struct pwm_chip *chip)
++{
++	return container_of(chip, struct ab8500_pwm_chip, chip);
++}
++
+ static int ab8500_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			    const struct pwm_state *state)
+ {
+ 	int ret;
+ 	u8 reg;
+ 	unsigned int higher_val, lower_val;
++	struct ab8500_pwm_chip *ab8500 = ab8500_pwm_from_chip(chip);
+ 
+ 	if (state->polarity != PWM_POLARITY_NORMAL)
+ 		return -EINVAL;
+@@ -37,7 +44,7 @@ static int ab8500_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (!state->enabled) {
+ 		ret = abx500_mask_and_set_register_interruptible(chip->dev,
+ 					AB8500_MISC, AB8500_PWM_OUT_CTRL7_REG,
+-					1 << (chip->base - 1), 0);
++					1 << ab8500->hwid, 0);
+ 
+ 		if (ret < 0)
+ 			dev_err(chip->dev, "%s: Failed to disable PWM, Error %d\n",
+@@ -56,7 +63,7 @@ static int ab8500_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	 */
+ 	higher_val = ((state->duty_cycle & 0x0300) >> 8);
+ 
+-	reg = AB8500_PWM_OUT_CTRL1_REG + ((chip->base - 1) * 2);
++	reg = AB8500_PWM_OUT_CTRL1_REG + (ab8500->hwid * 2);
+ 
+ 	ret = abx500_set_register_interruptible(chip->dev, AB8500_MISC,
+ 			reg, (u8)lower_val);
+@@ -70,7 +77,7 @@ static int ab8500_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	ret = abx500_mask_and_set_register_interruptible(chip->dev,
+ 				AB8500_MISC, AB8500_PWM_OUT_CTRL7_REG,
+-				1 << (chip->base - 1), 1 << (chip->base - 1));
++				1 << ab8500->hwid, 1 << ab8500->hwid);
+ 	if (ret < 0)
+ 		dev_err(chip->dev, "%s: Failed to enable PWM, Error %d\n",
+ 							pwm->label, ret);
+@@ -88,6 +95,9 @@ static int ab8500_pwm_probe(struct platform_device *pdev)
+ 	struct ab8500_pwm_chip *ab8500;
+ 	int err;
+ 
++	if (pdev->id < 1 || pdev->id > 31)
++		return dev_err_probe(&pdev->dev, -EINVAL, "Invalid device id %d\n", pdev->id);
++
+ 	/*
+ 	 * Nothing to be done in probe, this is required to get the
+ 	 * device which is required for ab8500 read and write
+@@ -99,6 +109,7 @@ static int ab8500_pwm_probe(struct platform_device *pdev)
+ 	ab8500->chip.dev = &pdev->dev;
+ 	ab8500->chip.ops = &ab8500_pwm_ops;
+ 	ab8500->chip.npwm = 1;
++	ab8500->hwid = pdev->id - 1;
+ 
+ 	err = pwmchip_add(&ab8500->chip);
+ 	if (err < 0)
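The driver above stops deriving register offsets from chip->base, which depends on probe order, and instead carries a hwid in its wrapper struct, recovered from the embedded pwm_chip via container_of(). A self-contained userspace rendering of that idiom with trimmed stand-in types:

  #include <stddef.h>
  #include <stdio.h>

  #define container_of(ptr, type, member) \
  	((type *)((char *)(ptr) - offsetof(type, member)))

  struct pwm_chip {
  	int npwm;
  };

  struct ab8500_pwm_chip {
  	struct pwm_chip chip;	/* embedded, so container_of works */
  	unsigned int hwid;
  };

  static struct ab8500_pwm_chip *ab8500_pwm_from_chip(struct pwm_chip *chip)
  {
  	return container_of(chip, struct ab8500_pwm_chip, chip);
  }

  int main(void)
  {
  	struct ab8500_pwm_chip ab8500 = { .chip = { .npwm = 1 }, .hwid = 2 };

  	printf("hwid recovered: %u\n",
  	       ab8500_pwm_from_chip(&ab8500.chip)->hwid);
  	return 0;
  }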
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 11b16ecc4f967..18d8e34d0d08a 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -326,23 +326,7 @@ err_pm_disable:
+ static int img_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct img_pwm_chip *pwm_chip = platform_get_drvdata(pdev);
+-	u32 val;
+-	unsigned int i;
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(&pdev->dev);
+-	if (ret < 0) {
+-		pm_runtime_put(&pdev->dev);
+-		return ret;
+-	}
+-
+-	for (i = 0; i < pwm_chip->chip.npwm; i++) {
+-		val = img_pwm_readl(pwm_chip, PWM_CTRL_CFG);
+-		val &= ~BIT(i);
+-		img_pwm_writel(pwm_chip, PWM_CTRL_CFG, val);
+-	}
+ 
+-	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+ 		img_pwm_runtime_suspend(&pdev->dev);
+diff --git a/drivers/pwm/pwm-lpc32xx.c b/drivers/pwm/pwm-lpc32xx.c
+index 2834a0f001d3a..719e8e9136569 100644
+--- a/drivers/pwm/pwm-lpc32xx.c
++++ b/drivers/pwm/pwm-lpc32xx.c
+@@ -117,17 +117,17 @@ static int lpc32xx_pwm_probe(struct platform_device *pdev)
+ 	lpc32xx->chip.ops = &lpc32xx_pwm_ops;
+ 	lpc32xx->chip.npwm = 1;
+ 
++	/* If PWM is disabled, configure the output to the default value */
++	val = readl(lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++	val &= ~PWM_PIN_LEVEL;
++	writel(val, lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
++
+ 	ret = pwmchip_add(&lpc32xx->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to add PWM chip, error %d\n", ret);
+ 		return ret;
+ 	}
+ 
+-	/* When PWM is disable, configure the output to the default value */
+-	val = readl(lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
+-	val &= ~PWM_PIN_LEVEL;
+-	writel(val, lpc32xx->base + (lpc32xx->chip.pwms[0].hwpwm << 2));
+-
+ 	platform_set_drvdata(pdev, lpc32xx);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/pwm-mxs.c b/drivers/pwm/pwm-mxs.c
+index a22180803bd7d..558dc1de8f5d5 100644
+--- a/drivers/pwm/pwm-mxs.c
++++ b/drivers/pwm/pwm-mxs.c
+@@ -145,6 +145,11 @@ static int mxs_pwm_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	/* FIXME: Only do this if the PWM isn't already running */
++	ret = stmp_reset_block(mxs->base);
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "failed to reset PWM\n");
++
+ 	ret = pwmchip_add(&mxs->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to add pwm chip %d\n", ret);
+@@ -153,15 +158,7 @@ static int mxs_pwm_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, mxs);
+ 
+-	ret = stmp_reset_block(mxs->base);
+-	if (ret)
+-		goto pwm_remove;
+-
+ 	return 0;
+-
+-pwm_remove:
+-	pwmchip_remove(&mxs->chip);
+-	return ret;
+ }
+ 
+ static int mxs_pwm_remove(struct platform_device *pdev)
+diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c
+index cbe900877724f..8fcef29948d77 100644
+--- a/drivers/pwm/pwm-rockchip.c
++++ b/drivers/pwm/pwm-rockchip.c
+@@ -384,20 +384,6 @@ static int rockchip_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct rockchip_pwm_chip *pc = platform_get_drvdata(pdev);
+ 
+-	/*
+-	 * Disable the PWM clk before unpreparing it if the PWM device is still
+-	 * running. This should only happen when the last PWM user left it
+-	 * enabled, or when nobody requested a PWM that was previously enabled
+-	 * by the bootloader.
+-	 *
+-	 * FIXME: Maybe the core should disable all PWM devices in
+-	 * pwmchip_remove(). In this case we'd only have to call
+-	 * clk_unprepare() after pwmchip_remove().
+-	 *
+-	 */
+-	if (pwm_is_enabled(pc->chip.pwms))
+-		clk_disable(pc->clk);
+-
+ 	clk_unprepare(pc->pclk);
+ 	clk_unprepare(pc->clk);
+ 
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 93dd03618465b..e4a10aac354d6 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -222,8 +222,6 @@ static int stm32_pwm_lp_remove(struct platform_device *pdev)
+ {
+ 	struct stm32_pwm_lp *priv = platform_get_drvdata(pdev);
+ 
+-	pwm_disable(&priv->chip.pwms[0]);
+-
+ 	return pwmchip_remove(&priv->chip);
+ }
+ 
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 12153d5801ce1..f7bf87097a9fb 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -624,6 +624,7 @@ config RTC_DRV_FM3130
+ 
+ config RTC_DRV_RX8010
+ 	tristate "Epson RX8010SJ"
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for the Epson RX8010SJ RTC
+ 	  chip.
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index db26edeccea6e..b6698656fc014 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -4265,7 +4265,7 @@ static void TranslateRxSignalStuff819xUsb(struct sk_buff *skb,
+ 	bpacket_match_bssid = (type != IEEE80211_FTYPE_CTL) &&
+ 			       (ether_addr_equal(priv->ieee80211->current_network.bssid,  (fc & IEEE80211_FCTL_TODS) ? hdr->addr1 : (fc & IEEE80211_FCTL_FROMDS) ? hdr->addr2 : hdr->addr3))
+ 			       && (!pstats->bHwError) && (!pstats->bCRC) && (!pstats->bICV);
+-	bpacket_toself =  bpacket_match_bssid &
++	bpacket_toself =  bpacket_match_bssid &&
+ 			  (ether_addr_equal(praddr, priv->ieee80211->dev->dev_addr));
+ 
+ 	if (WLAN_FC_GET_FRAMETYPE(fc) == IEEE80211_STYPE_BEACON)
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+index f95000df89422..965558516cbdc 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+@@ -349,16 +349,16 @@ static int wpa_set_auth_algs(struct net_device *dev, u32 value)
+ 	struct adapter *padapter = rtw_netdev_priv(dev);
+ 	int ret = 0;
+ 
+-	if ((value & WLAN_AUTH_SHARED_KEY) && (value & WLAN_AUTH_OPEN)) {
++	if ((value & IW_AUTH_ALG_SHARED_KEY) && (value & IW_AUTH_ALG_OPEN_SYSTEM)) {
+ 		padapter->securitypriv.ndisencryptstatus = Ndis802_11Encryption1Enabled;
+ 		padapter->securitypriv.ndisauthtype = Ndis802_11AuthModeAutoSwitch;
+ 		padapter->securitypriv.dot11AuthAlgrthm = dot11AuthAlgrthm_Auto;
+-	} else if (value & WLAN_AUTH_SHARED_KEY)	{
++	} else if (value & IW_AUTH_ALG_SHARED_KEY)	{
+ 		padapter->securitypriv.ndisencryptstatus = Ndis802_11Encryption1Enabled;
+ 
+ 		padapter->securitypriv.ndisauthtype = Ndis802_11AuthModeShared;
+ 		padapter->securitypriv.dot11AuthAlgrthm = dot11AuthAlgrthm_Shared;
+-	} else if (value & WLAN_AUTH_OPEN) {
++	} else if (value & IW_AUTH_ALG_OPEN_SYSTEM) {
+ 		/* padapter->securitypriv.ndisencryptstatus = Ndis802_11EncryptionDisabled; */
+ 		if (padapter->securitypriv.ndisauthtype < Ndis802_11AuthModeWPAPSK) {
+ 			padapter->securitypriv.ndisauthtype = Ndis802_11AuthModeOpen;
+diff --git a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
+index 232fd0b333251..8494cc04aa210 100644
+--- a/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
++++ b/drivers/thermal/qcom/qcom-spmi-adc-tm5.c
+@@ -359,6 +359,12 @@ static int adc_tm5_register_tzd(struct adc_tm5_chip *adc_tm)
+ 							   &adc_tm->channels[i],
+ 							   &adc_tm5_ops);
+ 		if (IS_ERR(tzd)) {
++			if (PTR_ERR(tzd) == -ENODEV) {
++				dev_warn(adc_tm->dev, "thermal sensor on channel %d is not used\n",
++					 adc_tm->channels[i].channel);
++				continue;
++			}
++
+ 			dev_err(adc_tm->dev, "Error registering TZ zone for channel %d: %ld\n",
+ 				adc_tm->channels[i].channel, PTR_ERR(tzd));
+ 			return PTR_ERR(tzd);
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index fdf16aa34eb47..702696cf58b67 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -84,7 +84,7 @@ struct rcar_gen3_thermal_tsc {
+ 	struct thermal_zone_device *zone;
+ 	struct equation_coefs coef;
+ 	int tj_t;
+-	int id; /* thermal channel id */
++	unsigned int id; /* thermal channel id */
+ };
+ 
+ struct rcar_gen3_thermal_priv {
+@@ -310,7 +310,8 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ 	const int *ths_tj_1 = of_device_get_match_data(dev);
+ 	struct resource *res;
+ 	struct thermal_zone_device *zone;
+-	int ret, i;
++	unsigned int i;
++	int ret;
+ 
+ 	/* default values if FUSEs are missing */
+ 	/* TODO: Read values from hardware on supported platforms */
+@@ -376,7 +377,7 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ 		if (ret < 0)
+ 			goto error_unregister;
+ 
+-		dev_info(dev, "TSC%d: Loaded %d trip points\n", i, ret);
++		dev_info(dev, "TSC%u: Loaded %d trip points\n", i, ret);
+ 	}
+ 
+ 	priv->num_tscs = i;
+diff --git a/drivers/thermal/samsung/exynos_tmu.c b/drivers/thermal/samsung/exynos_tmu.c
+index e9a90bc23b11d..f4ab4c5b4b626 100644
+--- a/drivers/thermal/samsung/exynos_tmu.c
++++ b/drivers/thermal/samsung/exynos_tmu.c
+@@ -1073,6 +1073,7 @@ static int exynos_tmu_probe(struct platform_device *pdev)
+ 		data->sclk = devm_clk_get(&pdev->dev, "tmu_sclk");
+ 		if (IS_ERR(data->sclk)) {
+ 			dev_err(&pdev->dev, "Failed to get sclk\n");
++			ret = PTR_ERR(data->sclk);
+ 			goto err_clk;
+ 		} else {
+ 			ret = clk_prepare_enable(data->sclk);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index ef981d3b7bb49..cb72393f92d3a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -2059,7 +2059,7 @@ static void restore_cur(struct vc_data *vc)
+ 
+ enum { ESnormal, ESesc, ESsquare, ESgetpars, ESfunckey,
+ 	EShash, ESsetG0, ESsetG1, ESpercent, EScsiignore, ESnonstd,
+-	ESpalette, ESosc };
++	ESpalette, ESosc, ESapc, ESpm, ESdcs };
+ 
+ /* console_lock is held (except via vc_init()) */
+ static void reset_terminal(struct vc_data *vc, int do_clear)
+@@ -2133,20 +2133,28 @@ static void vc_setGx(struct vc_data *vc, unsigned int which, int c)
+ 		vc->vc_translate = set_translate(*charset, vc);
+ }
+ 
++/* is this state an ANSI control string? */
++static bool ansi_control_string(unsigned int state)
++{
++	if (state == ESosc || state == ESapc || state == ESpm || state == ESdcs)
++		return true;
++	return false;
++}
++
+ /* console_lock is held */
+ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ {
+ 	/*
+ 	 *  Control characters can be used in the _middle_
+-	 *  of an escape sequence.
++	 *  of an escape sequence, aside from ANSI control strings.
+ 	 */
+-	if (vc->vc_state == ESosc && c>=8 && c<=13) /* ... except for OSC */
++	if (ansi_control_string(vc->vc_state) && c >= 8 && c <= 13)
+ 		return;
+ 	switch (c) {
+ 	case 0:
+ 		return;
+ 	case 7:
+-		if (vc->vc_state == ESosc)
++		if (ansi_control_string(vc->vc_state))
+ 			vc->vc_state = ESnormal;
+ 		else if (vc->vc_bell_duration)
+ 			kd_mksound(vc->vc_bell_pitch, vc->vc_bell_duration);
+@@ -2207,6 +2215,12 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		case ']':
+ 			vc->vc_state = ESnonstd;
+ 			return;
++		case '_':
++			vc->vc_state = ESapc;
++			return;
++		case '^':
++			vc->vc_state = ESpm;
++			return;
+ 		case '%':
+ 			vc->vc_state = ESpercent;
+ 			return;
+@@ -2224,6 +2238,9 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 			if (vc->state.x < VC_TABSTOPS_COUNT)
+ 				set_bit(vc->state.x, vc->vc_tab_stop);
+ 			return;
++		case 'P':
++			vc->vc_state = ESdcs;
++			return;
+ 		case 'Z':
+ 			respond_ID(tty);
+ 			return;
+@@ -2520,8 +2537,14 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		vc_setGx(vc, 1, c);
+ 		vc->vc_state = ESnormal;
+ 		return;
++	case ESapc:
++		return;
+ 	case ESosc:
+ 		return;
++	case ESpm:
++		return;
++	case ESdcs:
++		return;
+ 	default:
+ 		vc->vc_state = ESnormal;
+ 	}
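Conceptually the vt change just gives the parser three more swallow-until-terminated states next to ESosc. A cut-down, compilable model of that state machine (only the states and the BEL terminator needed for the demo; the real parser also special-cases ESC and most control characters mid-sequence):

  #include <stdbool.h>
  #include <stdio.h>

  enum state { ESnormal, ESesc, ESosc, ESapc, ESpm, ESdcs };

  static bool ansi_control_string(enum state s)
  {
  	return s == ESosc || s == ESapc || s == ESpm || s == ESdcs;
  }

  static enum state step(enum state s, int c)
  {
  	if (ansi_control_string(s))
  		return c == 7 ? ESnormal : s;	/* BEL ends the string */

  	if (s == ESesc) {
  		switch (c) {
  		case ']': return ESosc;
  		case '_': return ESapc;
  		case '^': return ESpm;
  		case 'P': return ESdcs;
  		default:  return ESnormal;
  		}
  	}

  	return c == 27 ? ESesc : ESnormal;	/* 27 is ESC */
  }

  int main(void)
  {
  	enum state s = ESnormal;
  	const char *p;

  	/* an APC string whose payload must be swallowed, then an 'X' */
  	for (p = "\033_payload\007X"; *p; p++)
  		s = step(s, (unsigned char)*p);

  	printf("final state: %s\n", s == ESnormal ? "ESnormal" : "other");
  	return 0;
  }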
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 0ba98e08a0290..50e12989e84a1 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3205,6 +3205,8 @@ static long btrfs_ioctl_rm_dev_v2(struct file *file, void __user *arg)
+ 	struct inode *inode = file_inode(file);
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct btrfs_ioctl_vol_args_v2 *vol_args;
++	struct block_device *bdev = NULL;
++	fmode_t mode;
+ 	int ret;
+ 	bool cancel = false;
+ 
+@@ -3237,9 +3239,9 @@ static long btrfs_ioctl_rm_dev_v2(struct file *file, void __user *arg)
+ 	/* Exclusive operation is now claimed */
+ 
+ 	if (vol_args->flags & BTRFS_DEVICE_SPEC_BY_ID)
+-		ret = btrfs_rm_device(fs_info, NULL, vol_args->devid);
++		ret = btrfs_rm_device(fs_info, NULL, vol_args->devid, &bdev, &mode);
+ 	else
+-		ret = btrfs_rm_device(fs_info, vol_args->name, 0);
++		ret = btrfs_rm_device(fs_info, vol_args->name, 0, &bdev, &mode);
+ 
+ 	btrfs_exclop_finish(fs_info);
+ 
+@@ -3255,6 +3257,8 @@ out:
+ 	kfree(vol_args);
+ err_drop:
+ 	mnt_drop_write_file(file);
++	if (bdev)
++		blkdev_put(bdev, mode);
+ 	return ret;
+ }
+ 
+@@ -3263,6 +3267,8 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
+ 	struct inode *inode = file_inode(file);
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	struct btrfs_ioctl_vol_args *vol_args;
++	struct block_device *bdev = NULL;
++	fmode_t mode;
+ 	int ret;
+ 	bool cancel;
+ 
+@@ -3284,7 +3290,7 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
+ 	ret = exclop_start_or_cancel_reloc(fs_info, BTRFS_EXCLOP_DEV_REMOVE,
+ 					   cancel);
+ 	if (ret == 0) {
+-		ret = btrfs_rm_device(fs_info, vol_args->name, 0);
++		ret = btrfs_rm_device(fs_info, vol_args->name, 0, &bdev, &mode);
+ 		if (!ret)
+ 			btrfs_info(fs_info, "disk deleted %s", vol_args->name);
+ 		btrfs_exclop_finish(fs_info);
+@@ -3293,7 +3299,8 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
+ 	kfree(vol_args);
+ out_drop_write:
+ 	mnt_drop_write_file(file);
+-
++	if (bdev)
++		blkdev_put(bdev, mode);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 10dd2d210b0f4..682416d4edefa 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -570,6 +570,8 @@ static int btrfs_free_stale_devices(const char *path,
+ 	struct btrfs_device *device, *tmp_device;
+ 	int ret = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	if (path)
+ 		ret = -ENOENT;
+ 
+@@ -1000,11 +1002,12 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig)
+ 	struct btrfs_device *orig_dev;
+ 	int ret = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	fs_devices = alloc_fs_devices(orig->fsid, NULL);
+ 	if (IS_ERR(fs_devices))
+ 		return fs_devices;
+ 
+-	mutex_lock(&orig->device_list_mutex);
+ 	fs_devices->total_devices = orig->total_devices;
+ 
+ 	list_for_each_entry(orig_dev, &orig->devices, dev_list) {
+@@ -1036,10 +1039,8 @@ static struct btrfs_fs_devices *clone_fs_devices(struct btrfs_fs_devices *orig)
+ 		device->fs_devices = fs_devices;
+ 		fs_devices->num_devices++;
+ 	}
+-	mutex_unlock(&orig->device_list_mutex);
+ 	return fs_devices;
+ error:
+-	mutex_unlock(&orig->device_list_mutex);
+ 	free_fs_devices(fs_devices);
+ 	return ERR_PTR(ret);
+ }
+@@ -1928,15 +1929,17 @@ out:
+  * Function to update ctime/mtime for a given device path.
+  * Mainly used for ctime/mtime based probe like libblkid.
+  */
+-static void update_dev_time(const char *path_name)
++static void update_dev_time(struct block_device *bdev)
+ {
+-	struct file *filp;
++	struct inode *inode = bdev->bd_inode;
++	struct timespec64 now;
+ 
+-	filp = filp_open(path_name, O_RDWR, 0);
+-	if (IS_ERR(filp))
++	/* Shouldn't happen but just in case. */
++	if (!inode)
+ 		return;
+-	file_update_time(filp);
+-	filp_close(filp, NULL);
++
++	now = current_time(inode);
++	generic_update_time(inode, &now, S_MTIME | S_CTIME);
+ }
+ 
+ static int btrfs_rm_dev_item(struct btrfs_device *device)
+@@ -2116,11 +2119,11 @@ void btrfs_scratch_superblocks(struct btrfs_fs_info *fs_info,
+ 	btrfs_kobject_uevent(bdev, KOBJ_CHANGE);
+ 
+ 	/* Update ctime/mtime for device path for libblkid */
+-	update_dev_time(device_path);
++	update_dev_time(bdev);
+ }
+ 
+ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+-		    u64 devid)
++		    u64 devid, struct block_device **bdev, fmode_t *mode)
+ {
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *cur_devices;
+@@ -2234,15 +2237,26 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+ 
+ 	/*
+-	 * at this point, the device is zero sized and detached from
+-	 * the devices list.  All that's left is to zero out the old
+-	 * supers and free the device.
++	 * At this point, the device is zero sized and detached from the
++	 * devices list.  All that's left is to zero out the old supers and
++	 * free the device.
++	 *
++	 * We cannot call btrfs_close_bdev() here because we're holding the sb
++	 * write lock, and blkdev_put() will pull in the ->open_mutex on the
++	 * block device and it's dependencies.  Instead just flush the device
++	 * and let the caller do the final blkdev_put.
+ 	 */
+-	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
++	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ 		btrfs_scratch_superblocks(fs_info, device->bdev,
+ 					  device->name->str);
++		if (device->bdev) {
++			sync_blockdev(device->bdev);
++			invalidate_bdev(device->bdev);
++		}
++	}
+ 
+-	btrfs_close_bdev(device);
++	*bdev = device->bdev;
++	*mode = device->mode;
+ 	synchronize_rcu();
+ 	btrfs_free_device(device);
+ 
+@@ -2769,7 +2783,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 	btrfs_forget_devices(device_path);
+ 
+ 	/* Update ctime/mtime for blkid or udev */
+-	update_dev_time(device_path);
++	update_dev_time(bdev);
+ 
+ 	return ret;
+ 
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 55a8ba244716b..f77f869dfd2cf 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -472,7 +472,8 @@ struct btrfs_device *btrfs_alloc_device(struct btrfs_fs_info *fs_info,
+ 					const u8 *uuid);
+ void btrfs_free_device(struct btrfs_device *device);
+ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+-		    const char *device_path, u64 devid);
++		    const char *device_path, u64 devid,
++		    struct block_device **bdev, fmode_t *mode);
+ void __exit btrfs_cleanup_fs_uuids(void);
+ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len);
+ int btrfs_grow_device(struct btrfs_trans_handle *trans,
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ba562efdf07b8..3296a93be907c 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1859,6 +1859,8 @@ static u64 __mark_caps_flushing(struct inode *inode,
+  * try to invalidate mapping pages without blocking.
+  */
+ static int try_nonblocking_invalidate(struct inode *inode)
++	__releases(ci->i_ceph_lock)
++	__acquires(ci->i_ceph_lock)
+ {
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	u32 invalidating_gen = ci->i_rdcache_gen;
+@@ -3117,7 +3119,16 @@ void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 				break;
+ 			}
+ 		}
+-		BUG_ON(!found);
++
++		if (!found) {
++			/*
++			 * The capsnap should already be removed when removing
++			 * auth cap in the case of a forced unmount.
++			 */
++			WARN_ON_ONCE(ci->i_auth_cap);
++			goto unlock;
++		}
++
+ 		capsnap->dirty_pages -= nr;
+ 		if (capsnap->dirty_pages == 0) {
+ 			complete_capsnap = true;
+@@ -3139,6 +3150,7 @@ void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 		     complete_capsnap ? " (complete capsnap)" : "");
+ 	}
+ 
++unlock:
+ 	spin_unlock(&ci->i_ceph_lock);
+ 
+ 	if (last) {
+@@ -3609,6 +3621,43 @@ out:
+ 		iput(inode);
+ }
+ 
++void __ceph_remove_capsnap(struct inode *inode, struct ceph_cap_snap *capsnap,
++			   bool *wake_ci, bool *wake_mdsc)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++	struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
++	bool ret;
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	dout("removing capsnap %p, inode %p ci %p\n", capsnap, inode, ci);
++
++	list_del_init(&capsnap->ci_item);
++	ret = __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
++	if (wake_ci)
++		*wake_ci = ret;
++
++	spin_lock(&mdsc->cap_dirty_lock);
++	if (list_empty(&ci->i_cap_flush_list))
++		list_del_init(&ci->i_flushing_item);
++
++	ret = __detach_cap_flush_from_mdsc(mdsc, &capsnap->cap_flush);
++	if (wake_mdsc)
++		*wake_mdsc = ret;
++	spin_unlock(&mdsc->cap_dirty_lock);
++}
++
++void ceph_remove_capsnap(struct inode *inode, struct ceph_cap_snap *capsnap,
++			 bool *wake_ci, bool *wake_mdsc)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	WARN_ON_ONCE(capsnap->dirty_pages || capsnap->writing);
++	__ceph_remove_capsnap(inode, capsnap, wake_ci, wake_mdsc);
++}
++
+ /*
+  * Handle FLUSHSNAP_ACK.  MDS has flushed snap data to disk and we can
+  * throw away our cap_snap.
+@@ -3646,23 +3695,10 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid,
+ 			     capsnap, capsnap->follows);
+ 		}
+ 	}
+-	if (flushed) {
+-		WARN_ON(capsnap->dirty_pages || capsnap->writing);
+-		dout(" removing %p cap_snap %p follows %lld\n",
+-		     inode, capsnap, follows);
+-		list_del(&capsnap->ci_item);
+-		wake_ci |= __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);
+-
+-		spin_lock(&mdsc->cap_dirty_lock);
+-
+-		if (list_empty(&ci->i_cap_flush_list))
+-			list_del_init(&ci->i_flushing_item);
+-
+-		wake_mdsc |= __detach_cap_flush_from_mdsc(mdsc,
+-							  &capsnap->cap_flush);
+-		spin_unlock(&mdsc->cap_dirty_lock);
+-	}
++	if (flushed)
++		ceph_remove_capsnap(inode, capsnap, &wake_ci, &wake_mdsc);
+ 	spin_unlock(&ci->i_ceph_lock);
++
+ 	if (flushed) {
+ 		ceph_put_snap_context(capsnap->context);
+ 		ceph_put_cap_snap(capsnap);
+@@ -4137,8 +4173,9 @@ void ceph_handle_caps(struct ceph_mds_session *session,
+ done:
+ 	mutex_unlock(&session->s_mutex);
+ done_unlocked:
+-	ceph_put_string(extra_info.pool_ns);
+ 	iput(inode);
++out:
++	ceph_put_string(extra_info.pool_ns);
+ 	return;
+ 
+ flush_cap_releases:
+@@ -4153,7 +4190,7 @@ flush_cap_releases:
+ bad:
+ 	pr_err("ceph_handle_caps: corrupt message\n");
+ 	ceph_msg_dump(msg);
+-	return;
++	goto out;
+ }
+ 
+ /*
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index d1755ac1d964a..3daebfaec8c6d 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1722,32 +1722,26 @@ retry_snap:
+ 		goto out;
+ 	}
+ 
+-	err = file_remove_privs(file);
+-	if (err)
++	down_read(&osdc->lock);
++	map_flags = osdc->osdmap->flags;
++	pool_flags = ceph_pg_pool_flags(osdc->osdmap, ci->i_layout.pool_id);
++	up_read(&osdc->lock);
++	if ((map_flags & CEPH_OSDMAP_FULL) ||
++	    (pool_flags & CEPH_POOL_FLAG_FULL)) {
++		err = -ENOSPC;
+ 		goto out;
++	}
+ 
+-	err = file_update_time(file);
++	err = file_remove_privs(file);
+ 	if (err)
+ 		goto out;
+ 
+-	inode_inc_iversion_raw(inode);
+-
+ 	if (ci->i_inline_version != CEPH_INLINE_NONE) {
+ 		err = ceph_uninline_data(file, NULL);
+ 		if (err < 0)
+ 			goto out;
+ 	}
+ 
+-	down_read(&osdc->lock);
+-	map_flags = osdc->osdmap->flags;
+-	pool_flags = ceph_pg_pool_flags(osdc->osdmap, ci->i_layout.pool_id);
+-	up_read(&osdc->lock);
+-	if ((map_flags & CEPH_OSDMAP_FULL) ||
+-	    (pool_flags & CEPH_POOL_FLAG_FULL)) {
+-		err = -ENOSPC;
+-		goto out;
+-	}
+-
+ 	dout("aio_write %p %llx.%llx %llu~%zd getting caps. i_size %llu\n",
+ 	     inode, ceph_vinop(inode), pos, count, i_size_read(inode));
+ 	if (fi->fmode & CEPH_FILE_MODE_LAZY)
+@@ -1759,6 +1753,12 @@ retry_snap:
+ 	if (err < 0)
+ 		goto out;
+ 
++	err = file_update_time(file);
++	if (err)
++		goto out_caps;
++
++	inode_inc_iversion_raw(inode);
++
+ 	dout("aio_write %p %llx.%llx %llu~%zd got cap refs on %s\n",
+ 	     inode, ceph_vinop(inode), pos, count, ceph_cap_string(got));
+ 
+@@ -1842,6 +1842,8 @@ retry_snap:
+ 	}
+ 
+ 	goto out_unlocked;
++out_caps:
++	ceph_put_cap_refs(ci, got);
+ out:
+ 	if (direct_lock)
+ 		ceph_end_io_direct(inode);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 0b69aec23e5c4..52b3ddc5f1991 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1583,14 +1583,39 @@ out:
+ 	return ret;
+ }
+ 
++static int remove_capsnaps(struct ceph_mds_client *mdsc, struct inode *inode)
++{
++	struct ceph_inode_info *ci = ceph_inode(inode);
++	struct ceph_cap_snap *capsnap;
++	int capsnap_release = 0;
++
++	lockdep_assert_held(&ci->i_ceph_lock);
++
++	dout("removing capsnaps, ci is %p, inode is %p\n", ci, inode);
++
++	while (!list_empty(&ci->i_cap_snaps)) {
++		capsnap = list_first_entry(&ci->i_cap_snaps,
++					   struct ceph_cap_snap, ci_item);
++		__ceph_remove_capsnap(inode, capsnap, NULL, NULL);
++		ceph_put_snap_context(capsnap->context);
++		ceph_put_cap_snap(capsnap);
++		capsnap_release++;
++	}
++	wake_up_all(&ci->i_cap_wq);
++	wake_up_all(&mdsc->cap_flushing_wq);
++	return capsnap_release;
++}
++
+ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 				  void *arg)
+ {
+ 	struct ceph_fs_client *fsc = (struct ceph_fs_client *)arg;
++	struct ceph_mds_client *mdsc = fsc->mdsc;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	LIST_HEAD(to_remove);
+ 	bool dirty_dropped = false;
+ 	bool invalidate = false;
++	int capsnap_release = 0;
+ 
+ 	dout("removing cap %p, ci is %p, inode is %p\n",
+ 	     cap, ci, &ci->vfs_inode);
+@@ -1598,7 +1623,6 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 	__ceph_remove_cap(cap, false);
+ 	if (!ci->i_auth_cap) {
+ 		struct ceph_cap_flush *cf;
+-		struct ceph_mds_client *mdsc = fsc->mdsc;
+ 
+ 		if (READ_ONCE(fsc->mount_state) >= CEPH_MOUNT_SHUTDOWN) {
+ 			if (inode->i_data.nrpages > 0)
+@@ -1662,6 +1686,9 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 			list_add(&ci->i_prealloc_cap_flush->i_list, &to_remove);
+ 			ci->i_prealloc_cap_flush = NULL;
+ 		}
++
++		if (!list_empty(&ci->i_cap_snaps))
++			capsnap_release = remove_capsnaps(mdsc, inode);
+ 	}
+ 	spin_unlock(&ci->i_ceph_lock);
+ 	while (!list_empty(&to_remove)) {
+@@ -1678,6 +1705,8 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		ceph_queue_invalidate(inode);
+ 	if (dirty_dropped)
+ 		iput(inode);
++	while (capsnap_release--)
++		iput(inode);
+ 	return 0;
+ }
+ 
+@@ -4912,7 +4941,6 @@ void ceph_mdsc_destroy(struct ceph_fs_client *fsc)
+ 
+ 	ceph_metric_destroy(&mdsc->metric);
+ 
+-	flush_delayed_work(&mdsc->metric.delayed_work);
+ 	fsc->mdsc = NULL;
+ 	kfree(mdsc);
+ 	dout("mdsc_destroy %p done\n", mdsc);
+diff --git a/fs/ceph/metric.c b/fs/ceph/metric.c
+index 5ac151eb0d498..04d5df29bbbfb 100644
+--- a/fs/ceph/metric.c
++++ b/fs/ceph/metric.c
+@@ -302,6 +302,8 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
+ 	if (!m)
+ 		return;
+ 
++	cancel_delayed_work_sync(&m->delayed_work);
++
+ 	percpu_counter_destroy(&m->total_inodes);
+ 	percpu_counter_destroy(&m->opened_inodes);
+ 	percpu_counter_destroy(&m->i_caps_mis);
+@@ -309,8 +311,6 @@ void ceph_metric_destroy(struct ceph_client_metric *m)
+ 	percpu_counter_destroy(&m->d_lease_mis);
+ 	percpu_counter_destroy(&m->d_lease_hit);
+ 
+-	cancel_delayed_work_sync(&m->delayed_work);
+-
+ 	ceph_put_mds_session(m->session);
+ }
+ 
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index b1a363641beb6..2200ed76b1230 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1163,6 +1163,12 @@ extern void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci,
+ 					    int had);
+ extern void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 				       struct ceph_snap_context *snapc);
++extern void __ceph_remove_capsnap(struct inode *inode,
++				  struct ceph_cap_snap *capsnap,
++				  bool *wake_ci, bool *wake_mdsc);
++extern void ceph_remove_capsnap(struct inode *inode,
++				struct ceph_cap_snap *capsnap,
++				bool *wake_ci, bool *wake_mdsc);
+ extern void ceph_flush_snaps(struct ceph_inode_info *ci,
+ 			     struct ceph_mds_session **psession);
+ extern bool __ceph_should_report_size(struct ceph_inode_info *ci);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 2dfd0d8297eb3..1b9de38a136aa 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -689,13 +689,19 @@ smb2_close_cached_fid(struct kref *ref)
+ 		cifs_dbg(FYI, "clear cached root file handle\n");
+ 		SMB2_close(0, cfid->tcon, cfid->fid->persistent_fid,
+ 			   cfid->fid->volatile_fid);
+-		cfid->is_valid = false;
+-		cfid->file_all_info_is_valid = false;
+-		cfid->has_lease = false;
+-		if (cfid->dentry) {
+-			dput(cfid->dentry);
+-			cfid->dentry = NULL;
+-		}
++	}
++
++	/*
++	 * We only check validity above to send SMB2_close,
++	 * but we still need to invalidate these entries
++	 * when this function is called
++	 */
++	cfid->is_valid = false;
++	cfid->file_all_info_is_valid = false;
++	cfid->has_lease = false;
++	if (cfid->dentry) {
++		dput(cfid->dentry);
++		cfid->dentry = NULL;
+ 	}
+ }
+ 
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 07afb5ddb1c4e..19fe5312c10f3 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1127,8 +1127,10 @@ int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+ 
+ 	mmap_write_unlock(mm);
+ 
+-	if (WARN_ON(i != *vma_count))
++	if (WARN_ON(i != *vma_count)) {
++		kvfree(*vma_meta);
+ 		return -EFAULT;
++	}
+ 
+ 	*vma_data_size_ptr = vma_data_size;
+ 	return 0;
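The coredump fix above is a classic error-path leak: the vma snapshot array was allocated earlier in the function and has to be released before the early -EFAULT return. The generic shape of the fix, in plain C with invented names:

  #include <stdio.h>
  #include <stdlib.h>

  static int snapshot(int expected, int counted, int **meta)
  {
  	*meta = calloc(expected, sizeof(**meta));
  	if (!*meta)
  		return -1;

  	if (counted != expected) {	/* consistency check failed */
  		free(*meta);		/* the fix: no buffer escapes */
  		*meta = NULL;
  		return -1;
  	}

  	return 0;
  }

  int main(void)
  {
  	int *meta;

  	printf("snapshot: %d\n", snapshot(4, 3, &meta));
  	free(meta);	/* NULL-safe */
  	return 0;
  }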
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 43aaa35664315..754d59f734d84 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -10335,7 +10335,7 @@ static int __init io_uring_init(void)
+ 	BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
+ 
+ 	BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
+-	BUILD_BUG_ON(__REQ_F_LAST_BIT >= 8 * sizeof(int));
++	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));
+ 
+ 	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
+ 				SLAB_ACCOUNT);
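
The io_uring change fixes an off-by-one in a compile-time assertion: __REQ_F_LAST_BIT counts how many flag bits are in use (one past the last index), so using all 32 bits of an int is still legal and only counts strictly greater than 8 * sizeof(int) should fail the build. BUILD_BUG_ON() turns that invariant into a build error rather than a runtime surprise; a minimal sketch:

#include <linux/build_bug.h>

enum {
	F_A_BIT,
	F_B_BIT,
	/* ... more flags ... */
	F_LAST_BIT,	/* == number of bits used, one past the last index */
};

static inline void flags_sanity_check(void)
{
	/* Fails the build only once the flags no longer fit in an int. */
	BUILD_BUG_ON(F_LAST_BIT > 8 * sizeof(int));
}
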
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 68e8d61e28dd5..62f8a7ac19c85 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -51,11 +51,9 @@ static const struct sysfs_ops nilfs_##name##_attr_ops = { \
+ #define NILFS_DEV_INT_GROUP_TYPE(name, parent_name) \
+ static void nilfs_##name##_attr_release(struct kobject *kobj) \
+ { \
+-	struct nilfs_sysfs_##parent_name##_subgroups *subgroups; \
+-	struct the_nilfs *nilfs = container_of(kobj->parent, \
+-						struct the_nilfs, \
+-						ns_##parent_name##_kobj); \
+-	subgroups = nilfs->ns_##parent_name##_subgroups; \
++	struct nilfs_sysfs_##parent_name##_subgroups *subgroups = container_of(kobj, \
++						struct nilfs_sysfs_##parent_name##_subgroups, \
++						sg_##name##_kobj); \
+ 	complete(&subgroups->sg_##name##_kobj_unregister); \
+ } \
+ static struct kobj_type nilfs_##name##_ktype = { \
+@@ -81,12 +79,12 @@ static int nilfs_sysfs_create_##name##_group(struct the_nilfs *nilfs) \
+ 	err = kobject_init_and_add(kobj, &nilfs_##name##_ktype, parent, \
+ 				    #name); \
+ 	if (err) \
+-		return err; \
+-	return 0; \
++		kobject_put(kobj); \
++	return err; \
+ } \
+ static void nilfs_sysfs_delete_##name##_group(struct the_nilfs *nilfs) \
+ { \
+-	kobject_del(&nilfs->ns_##parent_name##_subgroups->sg_##name##_kobj); \
++	kobject_put(&nilfs->ns_##parent_name##_subgroups->sg_##name##_kobj); \
+ }
+ 
+ /************************************************************************
+@@ -197,14 +195,14 @@ int nilfs_sysfs_create_snapshot_group(struct nilfs_root *root)
+ 	}
+ 
+ 	if (err)
+-		return err;
++		kobject_put(&root->snapshot_kobj);
+ 
+-	return 0;
++	return err;
+ }
+ 
+ void nilfs_sysfs_delete_snapshot_group(struct nilfs_root *root)
+ {
+-	kobject_del(&root->snapshot_kobj);
++	kobject_put(&root->snapshot_kobj);
+ }
+ 
+ /************************************************************************
+@@ -986,7 +984,7 @@ int nilfs_sysfs_create_device_group(struct super_block *sb)
+ 	err = kobject_init_and_add(&nilfs->ns_dev_kobj, &nilfs_dev_ktype, NULL,
+ 				    "%s", sb->s_id);
+ 	if (err)
+-		goto free_dev_subgroups;
++		goto cleanup_dev_kobject;
+ 
+ 	err = nilfs_sysfs_create_mounted_snapshots_group(nilfs);
+ 	if (err)
+@@ -1023,9 +1021,7 @@ delete_mounted_snapshots_group:
+ 	nilfs_sysfs_delete_mounted_snapshots_group(nilfs);
+ 
+ cleanup_dev_kobject:
+-	kobject_del(&nilfs->ns_dev_kobj);
+-
+-free_dev_subgroups:
++	kobject_put(&nilfs->ns_dev_kobj);
+ 	kfree(nilfs->ns_dev_subgroups);
+ 
+ failed_create_device_group:
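
All of the nilfs sysfs hunks apply the same two rules from the kobject documentation: once kobject_init_and_add() has run, the only correct way to drop the reference (including when that call itself fails) is kobject_put(), which is what eventually invokes the ktype release function; a bare kobject_del() unlinks the object from sysfs but leaks the reference. A minimal sketch of both sides:

#include <linux/kobject.h>

static int register_node(struct kobject *kobj, struct kobj_type *ktype,
			 struct kobject *parent)
{
	int err = kobject_init_and_add(kobj, ktype, parent, "node");

	/*
	 * On failure the kobject still holds its initial reference;
	 * kobject_put() drops it and runs ktype->release().
	 */
	if (err)
		kobject_put(kobj);
	return err;
}

static void unregister_node(struct kobject *kobj)
{
	/* Deletes from sysfs and drops the reference in one step. */
	kobject_put(kobj);
}
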
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index 8b7b01a380cea..c8bfc01da5d71 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -792,14 +792,13 @@ nilfs_find_or_create_root(struct the_nilfs *nilfs, __u64 cno)
+ 
+ void nilfs_put_root(struct nilfs_root *root)
+ {
+-	if (refcount_dec_and_test(&root->count)) {
+-		struct the_nilfs *nilfs = root->nilfs;
++	struct the_nilfs *nilfs = root->nilfs;
+ 
+-		nilfs_sysfs_delete_snapshot_group(root);
+-
+-		spin_lock(&nilfs->ns_cptree_lock);
++	if (refcount_dec_and_lock(&root->count, &nilfs->ns_cptree_lock)) {
+ 		rb_erase(&root->rb_node, &nilfs->ns_cptree);
+ 		spin_unlock(&nilfs->ns_cptree_lock);
++
++		nilfs_sysfs_delete_snapshot_group(root);
+ 		iput(root->ifile);
+ 
+ 		kfree(root);
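
The the_nilfs.c hunk closes a lookup-versus-teardown race: when the final reference drop and the rb-tree removal are separate steps, a concurrent lookup under ns_cptree_lock can still find the object after its refcount already hit zero. refcount_dec_and_lock() makes "this was the last put" and "the lookup lock is now held" one atomic decision, so the object leaves the tree before anyone can find it dying. The pattern, reduced to hypothetical names:

#include <linux/refcount.h>
#include <linux/spinlock.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

struct node {
	refcount_t count;
	struct rb_node rb;
};

static DEFINE_SPINLOCK(tree_lock);
static struct rb_root tree = RB_ROOT;

static void node_put(struct node *n)
{
	/* True only for the final reference, returned with tree_lock held. */
	if (refcount_dec_and_lock(&n->count, &tree_lock)) {
		rb_erase(&n->rb, &tree);	/* now unreachable by lookups */
		spin_unlock(&tree_lock);
		kfree(n);
	}
}
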
+diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
+index 4f72b47973c30..2f909ed084c63 100644
+--- a/include/linux/cacheinfo.h
++++ b/include/linux/cacheinfo.h
+@@ -79,24 +79,6 @@ struct cpu_cacheinfo {
+ 	bool cpu_map_populated;
+ };
+ 
+-/*
+- * Helpers to make sure "func" is executed on the cpu whose cache
+- * attributes are being detected
+- */
+-#define DEFINE_SMP_CALL_CACHE_FUNCTION(func)			\
+-static inline void _##func(void *ret)				\
+-{								\
+-	int cpu = smp_processor_id();				\
+-	*(int *)ret = __##func(cpu);				\
+-}								\
+-								\
+-int func(unsigned int cpu)					\
+-{								\
+-	int ret;						\
+-	smp_call_function_single(cpu, _##func, &ret, true);	\
+-	return ret;						\
+-}
+-
+ struct cpu_cacheinfo *get_cpu_cacheinfo(unsigned int cpu);
+ int init_cache_level(unsigned int cpu);
+ int populate_cache_leaves(unsigned int cpu);
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index d296f3b88fb98..8050d929a5b45 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -404,12 +404,13 @@ static inline void thermal_zone_device_unregister(
+ 	struct thermal_zone_device *tz)
+ { }
+ static inline struct thermal_cooling_device *
+-thermal_cooling_device_register(char *type, void *devdata,
++thermal_cooling_device_register(const char *type, void *devdata,
+ 	const struct thermal_cooling_device_ops *ops)
+ { return ERR_PTR(-ENODEV); }
+ static inline struct thermal_cooling_device *
+ thermal_of_cooling_device_register(struct device_node *np,
+-	char *type, void *devdata, const struct thermal_cooling_device_ops *ops)
++	const char *type, void *devdata,
++	const struct thermal_cooling_device_ops *ops)
+ { return ERR_PTR(-ENODEV); }
+ static inline struct thermal_cooling_device *
+ devm_thermal_of_cooling_device_register(struct device *dev,
+diff --git a/include/uapi/misc/habanalabs.h b/include/uapi/misc/habanalabs.h
+index a47a731e45277..b4b681b81df81 100644
+--- a/include/uapi/misc/habanalabs.h
++++ b/include/uapi/misc/habanalabs.h
+@@ -276,7 +276,9 @@ enum hl_device_status {
+ 	HL_DEVICE_STATUS_OPERATIONAL,
+ 	HL_DEVICE_STATUS_IN_RESET,
+ 	HL_DEVICE_STATUS_MALFUNCTION,
+-	HL_DEVICE_STATUS_NEEDS_RESET
++	HL_DEVICE_STATUS_NEEDS_RESET,
++	HL_DEVICE_STATUS_IN_DEVICE_CREATION,
++	HL_DEVICE_STATUS_LAST = HL_DEVICE_STATUS_IN_DEVICE_CREATION
+ };
+ 
+ /* Opcode for management ioctl
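
The habanalabs hunk grows a uapi enum and adds a _LAST alias pinned to the newest entry, so range checks do not hard-code a value that the next addition would silently invalidate. The shape, with hypothetical names:

enum dev_status {
	DEV_OK,
	DEV_RESET,
	DEV_CREATING,
	DEV_STATUS_LAST = DEV_CREATING,	/* keep aliased to the newest entry */
};

static inline int dev_status_valid(int s)
{
	return s >= DEV_OK && s <= DEV_STATUS_LAST;
}
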
+diff --git a/init/initramfs.c b/init/initramfs.c
+index af27abc596436..a842c05447456 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -15,6 +15,7 @@
+ #include <linux/mm.h>
+ #include <linux/namei.h>
+ #include <linux/init_syscalls.h>
++#include <linux/umh.h>
+ 
+ static ssize_t __init xwrite(struct file *file, const char *p, size_t count,
+ 		loff_t *pos)
+@@ -727,6 +728,7 @@ static int __init populate_rootfs(void)
+ {
+ 	initramfs_cookie = async_schedule_domain(do_populate_rootfs, NULL,
+ 						 &initramfs_domain);
++	usermodehelper_enable();
+ 	if (!initramfs_async)
+ 		wait_for_initramfs();
+ 	return 0;
+diff --git a/init/main.c b/init/main.c
+index 8d97aba78c3ad..90733a916791f 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1392,7 +1392,6 @@ static void __init do_basic_setup(void)
+ 	driver_init();
+ 	init_irq_proc();
+ 	do_ctors();
+-	usermodehelper_enable();
+ 	do_initcalls();
+ }
+ 
+diff --git a/init/noinitramfs.c b/init/noinitramfs.c
+index 3d62b07f3bb9c..d1d26b93d25cd 100644
+--- a/init/noinitramfs.c
++++ b/init/noinitramfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/kdev_t.h>
+ #include <linux/syscalls.h>
+ #include <linux/init_syscalls.h>
++#include <linux/umh.h>
+ 
+ /*
+  * Create a simple rootfs that is similar to the default initramfs
+@@ -18,6 +19,7 @@ static int __init default_rootfs(void)
+ {
+ 	int err;
+ 
++	usermodehelper_enable();
+ 	err = init_mkdir("/dev", 0755);
+ 	if (err < 0)
+ 		goto out;
+diff --git a/kernel/profile.c b/kernel/profile.c
+index c2ebddb5e9746..eb9c7f0f5ac52 100644
+--- a/kernel/profile.c
++++ b/kernel/profile.c
+@@ -41,7 +41,8 @@ struct profile_hit {
+ #define NR_PROFILE_GRP		(NR_PROFILE_HIT/PROFILE_GRPSZ)
+ 
+ static atomic_t *prof_buffer;
+-static unsigned long prof_len, prof_shift;
++static unsigned long prof_len;
++static unsigned short int prof_shift;
+ 
+ int prof_on __read_mostly;
+ EXPORT_SYMBOL_GPL(prof_on);
+@@ -67,8 +68,8 @@ int profile_setup(char *str)
+ 		if (str[strlen(sleepstr)] == ',')
+ 			str += strlen(sleepstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel sleep profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel sleep profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ #else
+ 		pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n");
+@@ -78,21 +79,21 @@ int profile_setup(char *str)
+ 		if (str[strlen(schedstr)] == ',')
+ 			str += strlen(schedstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel schedule profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel schedule profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	} else if (!strncmp(str, kvmstr, strlen(kvmstr))) {
+ 		prof_on = KVM_PROFILING;
+ 		if (str[strlen(kvmstr)] == ',')
+ 			str += strlen(kvmstr) + 1;
+ 		if (get_option(&str, &par))
+-			prof_shift = par;
+-		pr_info("kernel KVM profiling enabled (shift: %ld)\n",
++			prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
++		pr_info("kernel KVM profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	} else if (get_option(&str, &par)) {
+-		prof_shift = par;
++		prof_shift = clamp(par, 0, BITS_PER_LONG - 1);
+ 		prof_on = CPU_PROFILING;
+-		pr_info("kernel profiling enabled (shift: %ld)\n",
++		pr_info("kernel profiling enabled (shift: %u)\n",
+ 			prof_shift);
+ 	}
+ 	return 1;
+@@ -468,7 +469,7 @@ read_profile(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+ 	unsigned long p = *ppos;
+ 	ssize_t read;
+ 	char *pnt;
+-	unsigned int sample_step = 1 << prof_shift;
++	unsigned long sample_step = 1UL << prof_shift;
+ 
+ 	profile_flip_buffers();
+ 	if (p >= (prof_len+1)*sizeof(unsigned int))
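
The profile.c hunk hardens a command-line parameter later used as a shift count: shifting an unsigned long by BITS_PER_LONG or more is undefined behaviour, so the parsed value is clamped to [0, BITS_PER_LONG - 1], and the sample-step computation is widened from 1 << prof_shift (int) to 1UL << prof_shift. Both halves of the fix, sketched:

#include <linux/minmax.h>
#include <linux/bits.h>

static unsigned short shift;

static void set_shift_from_user(int par)
{
	/* Reject shift counts that would be UB on an unsigned long. */
	shift = clamp(par, 0, BITS_PER_LONG - 1);
}

static unsigned long sample_step(void)
{
	return 1UL << shift;	/* 1UL, not 1: don't overflow int */
}
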
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 912b47aa99d82..d17b0a5ce6ac3 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -379,10 +379,10 @@ void play_idle_precise(u64 duration_ns, u64 latency_ns)
+ 	cpuidle_use_deepest_state(latency_ns);
+ 
+ 	it.done = 0;
+-	hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init_on_stack(&it.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ 	it.timer.function = idle_inject_timer_fn;
+ 	hrtimer_start(&it.timer, ns_to_ktime(duration_ns),
+-		      HRTIMER_MODE_REL_PINNED);
++		      HRTIMER_MODE_REL_PINNED_HARD);
+ 
+ 	while (!READ_ONCE(it.done))
+ 		do_idle();
+diff --git a/kernel/sys.c b/kernel/sys.c
+index ef1a78f5d71c7..6ec50924b5176 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1959,13 +1959,6 @@ static int validate_prctl_map_addr(struct prctl_mm_map *prctl_map)
+ 
+ 	error = -EINVAL;
+ 
+-	/*
+-	 * @brk should be after @end_data in traditional maps.
+-	 */
+-	if (prctl_map->start_brk <= prctl_map->end_data ||
+-	    prctl_map->brk <= prctl_map->end_data)
+-		goto out;
+-
+ 	/*
+ 	 * Neither we should allow to override limits if they set.
+ 	 */
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index d713714cba67f..4bd8f94a56c63 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -235,14 +235,14 @@ trace_boot_init_events(struct trace_array *tr, struct xbc_node *node)
+ 	if (!node)
+ 		return;
+ 	/* per-event key starts with "event.GROUP.EVENT" */
+-	xbc_node_for_each_child(node, gnode) {
++	xbc_node_for_each_subkey(node, gnode) {
+ 		data = xbc_node_get_data(gnode);
+ 		if (!strcmp(data, "enable")) {
+ 			enable_all = true;
+ 			continue;
+ 		}
+ 		enable = false;
+-		xbc_node_for_each_child(gnode, enode) {
++		xbc_node_for_each_subkey(gnode, enode) {
+ 			data = xbc_node_get_data(enode);
+ 			if (!strcmp(data, "enable")) {
+ 				enable = true;
+@@ -338,7 +338,7 @@ trace_boot_init_instances(struct xbc_node *node)
+ 	if (!node)
+ 		return;
+ 
+-	xbc_node_for_each_child(node, inode) {
++	xbc_node_for_each_subkey(node, inode) {
+ 		p = xbc_node_get_data(inode);
+ 		if (!p || *p == '\0')
+ 			continue;
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 5ddd575159fb8..ffd22e499997a 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1062,7 +1062,6 @@ config HARDLOCKUP_DETECTOR
+ 	depends on HAVE_HARDLOCKUP_DETECTOR_PERF || HAVE_HARDLOCKUP_DETECTOR_ARCH
+ 	select LOCKUP_DETECTOR
+ 	select HARDLOCKUP_DETECTOR_PERF if HAVE_HARDLOCKUP_DETECTOR_PERF
+-	select HARDLOCKUP_DETECTOR_ARCH if HAVE_HARDLOCKUP_DETECTOR_ARCH
+ 	help
+ 	  Say Y here to enable the kernel to act as a watchdog to detect
+ 	  hard lockups.
+@@ -2460,8 +2459,7 @@ config SLUB_KUNIT_TEST
+ 
+ config RATIONAL_KUNIT_TEST
+ 	tristate "KUnit test for rational.c" if !KUNIT_ALL_TESTS
+-	depends on KUNIT
+-	select RATIONAL
++	depends on KUNIT && RATIONAL
+ 	default KUNIT_ALL_TESTS
+ 	help
+ 	  This builds the rational math unit test.
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 2bbd7dce0f1d3..490a4c9003395 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -610,7 +610,7 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 	chan->vc_wq = kmalloc(sizeof(wait_queue_head_t), GFP_KERNEL);
+ 	if (!chan->vc_wq) {
+ 		err = -ENOMEM;
+-		goto out_free_tag;
++		goto out_remove_file;
+ 	}
+ 	init_waitqueue_head(chan->vc_wq);
+ 	chan->ring_bufs_avail = 1;
+@@ -628,6 +628,8 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 
+ 	return 0;
+ 
++out_remove_file:
++	sysfs_remove_file(&vdev->dev.kobj, &dev_attr_mount_tag.attr);
+ out_free_tag:
+ 	kfree(tag);
+ out_free_vq:
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index dbb41821b1b85..cd5a2b186f0d0 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -662,7 +662,7 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
+ {
+ 	struct svc_serv *serv = rqstp->rq_server;
+ 	struct xdr_buf *arg = &rqstp->rq_arg;
+-	unsigned long pages, filled;
++	unsigned long pages, filled, ret;
+ 
+ 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
+ 	if (pages > RPCSVC_MAXPAGES) {
+@@ -672,11 +672,12 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
+ 		pages = RPCSVC_MAXPAGES;
+ 	}
+ 
+-	for (;;) {
+-		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
+-						rqstp->rq_pages);
+-		if (filled == pages)
+-			break;
++	for (filled = 0; filled < pages; filled = ret) {
++		ret = alloc_pages_bulk_array(GFP_KERNEL, pages,
++					     rqstp->rq_pages);
++		if (ret > filled)
++			/* Made progress, don't sleep yet */
++			continue;
+ 
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (signalled() || kthread_should_stop()) {
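
The sunrpc hunk rewrites the retry loop around the actual contract of alloc_pages_bulk_array(): the function fills empty slots in the array and returns the total number of populated slots, which under memory pressure may grow by only a few pages per call. The loop should therefore retry immediately while it is making progress and only back off (sleep, stay signal-aware) when a call adds nothing. A sketch of that shape, lifted out of its svc_rqst context:

#include <linux/gfp.h>
#include <linux/jiffies.h>
#include <linux/sched/signal.h>

static int fill_pages(struct page **pages, unsigned long want)
{
	unsigned long filled, got;

	for (filled = 0; filled < want; filled = got) {
		got = alloc_pages_bulk_array(GFP_KERNEL, want, pages);
		if (got > filled)
			continue;	/* progress: retry immediately */

		/* No progress: give reclaim a chance, stay interruptible. */
		set_current_state(TASK_INTERRUPTIBLE);
		if (signal_pending(current)) {
			__set_current_state(TASK_RUNNING);
			return -EINTR;
		}
		schedule_timeout(msecs_to_jiffies(500));
	}
	return 0;
}
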
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index b0032c42333eb..572e564bf6cd9 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -2155,7 +2155,7 @@ static int selinux_ptrace_access_check(struct task_struct *child,
+ static int selinux_ptrace_traceme(struct task_struct *parent)
+ {
+ 	return avc_has_perm(&selinux_state,
+-			    task_sid_subj(parent), task_sid_obj(current),
++			    task_sid_obj(parent), task_sid_obj(current),
+ 			    SECCLASS_PROCESS, PROCESS__PTRACE, NULL);
+ }
+ 
+@@ -6218,7 +6218,7 @@ static int selinux_msg_queue_msgrcv(struct kern_ipc_perm *msq, struct msg_msg *m
+ 	struct ipc_security_struct *isec;
+ 	struct msg_security_struct *msec;
+ 	struct common_audit_data ad;
+-	u32 sid = task_sid_subj(target);
++	u32 sid = task_sid_obj(target);
+ 	int rc;
+ 
+ 	isec = selinux_ipc(msq);
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 223a6da0e6dc5..b1694fa6f12b1 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2016,7 +2016,7 @@ static int smk_curacc_on_task(struct task_struct *p, int access,
+ 				const char *caller)
+ {
+ 	struct smk_audit_info ad;
+-	struct smack_known *skp = smk_of_task_struct_subj(p);
++	struct smack_known *skp = smk_of_task_struct_obj(p);
+ 	int rc;
+ 
+ 	smk_ad_init(&ad, caller, LSM_AUDIT_DATA_TASK);
+@@ -3480,7 +3480,7 @@ static void smack_d_instantiate(struct dentry *opt_dentry, struct inode *inode)
+  */
+ static int smack_getprocattr(struct task_struct *p, char *name, char **value)
+ {
+-	struct smack_known *skp = smk_of_task_struct_subj(p);
++	struct smack_known *skp = smk_of_task_struct_obj(p);
+ 	char *cp;
+ 	int slen;
+ 
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 5e71382467e88..546f6fd0609e1 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -285,6 +285,7 @@ static int graph_dai_link_of_dpcm(struct asoc_simple_priv *priv,
+ 	if (li->cpu) {
+ 		struct snd_soc_card *card = simple_priv_to_card(priv);
+ 		struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);
++		struct snd_soc_dai_link_component *platforms = asoc_link_to_platform(dai_link, 0);
+ 		int is_single_links = 0;
+ 
+ 		/* Codec is dummy */
+@@ -313,6 +314,7 @@ static int graph_dai_link_of_dpcm(struct asoc_simple_priv *priv,
+ 			dai_link->no_pcm = 1;
+ 
+ 		asoc_simple_canonicalize_cpu(cpus, is_single_links);
++		asoc_simple_canonicalize_platform(platforms, cpus);
+ 	} else {
+ 		struct snd_soc_codec_conf *cconf = simple_props_to_codec_conf(dai_props, 0);
+ 		struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);
+@@ -366,6 +368,7 @@ static int graph_dai_link_of(struct asoc_simple_priv *priv,
+ 	struct snd_soc_dai_link *dai_link = simple_priv_to_link(priv, li->link);
+ 	struct snd_soc_dai_link_component *cpus = asoc_link_to_cpu(dai_link, 0);
+ 	struct snd_soc_dai_link_component *codecs = asoc_link_to_codec(dai_link, 0);
++	struct snd_soc_dai_link_component *platforms = asoc_link_to_platform(dai_link, 0);
+ 	char dai_name[64];
+ 	int ret, is_single_links = 0;
+ 
+@@ -383,6 +386,7 @@ static int graph_dai_link_of(struct asoc_simple_priv *priv,
+ 		 "%s-%s", cpus->dai_name, codecs->dai_name);
+ 
+ 	asoc_simple_canonicalize_cpu(cpus, is_single_links);
++	asoc_simple_canonicalize_platform(platforms, cpus);
+ 
+ 	ret = graph_link_init(priv, cpu_ep, codec_ep, li, dai_name);
+ 	if (ret < 0)
+@@ -608,6 +612,7 @@ static int graph_count_noml(struct asoc_simple_priv *priv,
+ 
+ 	li->num[li->link].cpus		= 1;
+ 	li->num[li->link].codecs	= 1;
++	li->num[li->link].platforms     = 1;
+ 
+ 	li->link += 1; /* 1xCPU-Codec */
+ 
+@@ -630,6 +635,7 @@ static int graph_count_dpcm(struct asoc_simple_priv *priv,
+ 
+ 	if (li->cpu) {
+ 		li->num[li->link].cpus		= 1;
++		li->num[li->link].platforms     = 1;
+ 
+ 		li->link++; /* 1xCPU-dummy */
+ 	} else {
+diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh
+index a0c3bcc6da4f3..fb201d5afe2c1 100755
+--- a/tools/bootconfig/scripts/ftrace2bconf.sh
++++ b/tools/bootconfig/scripts/ftrace2bconf.sh
+@@ -222,8 +222,8 @@ instance_options() { # [instance-name]
+ 		emit_kv $PREFIX.cpumask = $val
+ 	fi
+ 	val=`cat $INSTANCE/tracing_on`
+-	if [ `echo $val | sed -e s/f//g`x != x ]; then
+-		emit_kv $PREFIX.tracing_on = $val
++	if [ "$val" = "0" ]; then
++		emit_kv $PREFIX.tracing_on = 0
+ 	fi
+ 
+ 	val=
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index dbf5f5215abee..fa03ff0dc0831 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -192,7 +192,7 @@ static int do_test(struct bpf_object *obj, int (*func)(void),
+ 	}
+ 
+ 	if (count != expect * evlist->core.nr_entries) {
+-		pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect, count);
++		pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect * evlist->core.nr_entries, count);
+ 		goto out_delete_evlist;
+ 	}
+ 
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index ee15db2be2f43..9ed9a5676d352 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -1349,6 +1349,16 @@ void dso__set_build_id(struct dso *dso, struct build_id *bid)
+ 
+ bool dso__build_id_equal(const struct dso *dso, struct build_id *bid)
+ {
++	if (dso->bid.size > bid->size && dso->bid.size == BUILD_ID_SIZE) {
++		/*
++		 * For backward compatibility, allow a build-id to have
++		 * trailing zeros.
++		 */
++		return !memcmp(dso->bid.data, bid->data, bid->size) &&
++			!memchr_inv(&dso->bid.data[bid->size], 0,
++				    dso->bid.size - bid->size);
++	}
++
+ 	return dso->bid.size == bid->size &&
+ 	       memcmp(dso->bid.data, bid->data, dso->bid.size) == 0;
+ }
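
The dso.c hunk accepts a shorter build-id whose missing tail is all zeros, for compatibility with tools that pad ids to BUILD_ID_SIZE. memchr_inv() is the idiomatic kernel helper for "is this whole range one byte value": it returns NULL when no deviating byte exists. Reduced to a hypothetical helper:

#include <linux/string.h>
#include <linux/types.h>

/* True if buf starts with the len bytes of id and the rest is zero. */
static bool id_equal_padded(const unsigned char *buf, size_t buf_len,
			    const unsigned char *id, size_t len)
{
	return buf_len >= len &&
	       !memcmp(buf, id, len) &&
	       !memchr_inv(buf + len, 0, buf_len - len);
}
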
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 77fc46ca07c07..0fc9a54107399 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1581,10 +1581,6 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ 	if (bfd_get_flavour(abfd) == bfd_target_elf_flavour)
+ 		goto out_close;
+ 
+-	section = bfd_get_section_by_name(abfd, ".text");
+-	if (section)
+-		dso->text_offset = section->vma - section->filepos;
+-
+ 	symbols_size = bfd_get_symtab_upper_bound(abfd);
+ 	if (symbols_size == 0) {
+ 		bfd_close(abfd);
+@@ -1602,6 +1598,22 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ 	if (symbols_count < 0)
+ 		goto out_free;
+ 
++	section = bfd_get_section_by_name(abfd, ".text");
++	if (section) {
++		for (i = 0; i < symbols_count; ++i) {
++			if (!strcmp(bfd_asymbol_name(symbols[i]), "__ImageBase") ||
++			    !strcmp(bfd_asymbol_name(symbols[i]), "__image_base__"))
++				break;
++		}
++		if (i < symbols_count) {
++			/* PE symbol values are only 32 bits, so use the .text high bits */
++			dso->text_offset = section->vma - (u32)section->vma;
++			dso->text_offset += (u32)bfd_asymbol_value(symbols[i]);
++		} else {
++			dso->text_offset = section->vma - section->filepos;
++		}
++	}
++
+ 	qsort(symbols, symbols_count, sizeof(asymbol *), bfd_symbols__cmpvalue);
+ 
+ #ifdef bfd_get_section


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-09-30 10:48 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-09-30 10:48 UTC (permalink / raw
  To: gentoo-commits

commit:     9b265515473c671205e9fffa1f4c306cd5232d79
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 30 10:48:05 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 30 10:48:05 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9b265515

Linux patch 5.14.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-5.14.9.patch | 6265 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6269 insertions(+)

diff --git a/0000_README b/0000_README
index dcc9f9a..21444f8 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1007_linux-5.14.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.8
 
+Patch:  1008_linux-5.14.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.14.9.patch b/1008_linux-5.14.9.patch
new file mode 100644
index 0000000..e5d16b9
--- /dev/null
+++ b/1008_linux-5.14.9.patch
@@ -0,0 +1,6265 @@
+diff --git a/Makefile b/Makefile
+index d6b4737194b88..50c17e63c54ef 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h
+index 0fab5ac907758..c9cb554fbe54c 100644
+--- a/arch/alpha/include/asm/io.h
++++ b/arch/alpha/include/asm/io.h
+@@ -60,7 +60,7 @@ extern inline void set_hae(unsigned long new_hae)
+  * Change virtual addresses to physical addresses and vv.
+  */
+ #ifdef USE_48_BIT_KSEG
+-static inline unsigned long virt_to_phys(void *address)
++static inline unsigned long virt_to_phys(volatile void *address)
+ {
+ 	return (unsigned long)address - IDENT_ADDR;
+ }
+@@ -70,7 +70,7 @@ static inline void * phys_to_virt(unsigned long address)
+ 	return (void *) (address + IDENT_ADDR);
+ }
+ #else
+-static inline unsigned long virt_to_phys(void *address)
++static inline unsigned long virt_to_phys(volatile void *address)
+ {
+         unsigned long phys = (unsigned long)address;
+ 
+@@ -106,7 +106,7 @@ static inline void * phys_to_virt(unsigned long address)
+ extern unsigned long __direct_map_base;
+ extern unsigned long __direct_map_size;
+ 
+-static inline unsigned long __deprecated virt_to_bus(void *address)
++static inline unsigned long __deprecated virt_to_bus(volatile void *address)
+ {
+ 	unsigned long phys = virt_to_phys(address);
+ 	unsigned long bus = phys + __direct_map_base;
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index 89faca0e740d0..bfa58409a4d4d 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -525,6 +525,11 @@ alternative_endif
+ #define EXPORT_SYMBOL_NOKASAN(name)	EXPORT_SYMBOL(name)
+ #endif
+ 
++#ifdef CONFIG_KASAN_HW_TAGS
++#define EXPORT_SYMBOL_NOHWKASAN(name)
++#else
++#define EXPORT_SYMBOL_NOHWKASAN(name)	EXPORT_SYMBOL_NOKASAN(name)
++#endif
+ 	/*
+ 	 * Emit a 64-bit absolute little endian symbol reference in a way that
+ 	 * ensures that it will be resolved at build time, even when building a
+diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
+index 58c7f80f55961..c724a288a412d 100644
+--- a/arch/arm64/include/asm/mte.h
++++ b/arch/arm64/include/asm/mte.h
+@@ -105,11 +105,17 @@ void mte_check_tfsr_el1(void);
+ 
+ static inline void mte_check_tfsr_entry(void)
+ {
++	if (!system_supports_mte())
++		return;
++
+ 	mte_check_tfsr_el1();
+ }
+ 
+ static inline void mte_check_tfsr_exit(void)
+ {
++	if (!system_supports_mte())
++		return;
++
+ 	/*
+ 	 * The asynchronous faults are sync'ed automatically with
+ 	 * TFSR_EL1 on kernel entry but for exit an explicit dsb()
+diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
+index 3a3264ff47b97..95f7686b728d7 100644
+--- a/arch/arm64/include/asm/string.h
++++ b/arch/arm64/include/asm/string.h
+@@ -12,11 +12,13 @@ extern char *strrchr(const char *, int c);
+ #define __HAVE_ARCH_STRCHR
+ extern char *strchr(const char *, int c);
+ 
++#ifndef CONFIG_KASAN_HW_TAGS
+ #define __HAVE_ARCH_STRCMP
+ extern int strcmp(const char *, const char *);
+ 
+ #define __HAVE_ARCH_STRNCMP
+ extern int strncmp(const char *, const char *, __kernel_size_t);
++#endif
+ 
+ #define __HAVE_ARCH_STRLEN
+ extern __kernel_size_t strlen(const char *);
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 0ead8bfedf201..92c99472d2c90 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1500,9 +1500,13 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+ 	/*
+ 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
+ 	 * ThunderX leads to apparent I-cache corruption of kernel text, which
+-	 * ends as well as you might imagine. Don't even try.
++	 * ends as well as you might imagine. Don't even try. We cannot rely
++	 * on the cpus_have_*cap() helpers here to detect the CPU erratum
++	 * because cpucap detection order may change. However, since we know
++	 * affected CPUs are always in a homogeneous configuration, it is
++	 * safe to rely on this_cpu_has_cap() here.
+ 	 */
+-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
++	if (this_cpu_has_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+ 		str = "ARM64_WORKAROUND_CAVIUM_27456";
+ 		__kpti_forced = -1;
+ 	}
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index 36f51b0e438a6..d223df11fc00b 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -173,12 +173,7 @@ bool mte_report_once(void)
+ #ifdef CONFIG_KASAN_HW_TAGS
+ void mte_check_tfsr_el1(void)
+ {
+-	u64 tfsr_el1;
+-
+-	if (!system_supports_mte())
+-		return;
+-
+-	tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
++	u64 tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
+ 
+ 	if (unlikely(tfsr_el1 & SYS_TFSR_EL1_TF1)) {
+ 		/*
+@@ -221,6 +216,9 @@ void mte_thread_init_user(void)
+ 
+ void mte_thread_switch(struct task_struct *next)
+ {
++	if (!system_supports_mte())
++		return;
++
+ 	/*
+ 	 * Check if an async tag exception occurred at EL1.
+ 	 *
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index c8989b999250d..c858b857c1ecf 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -60,7 +60,7 @@
+ 
+ #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
+ #include <linux/stackprotector.h>
+-unsigned long __stack_chk_guard __read_mostly;
++unsigned long __stack_chk_guard __ro_after_init;
+ EXPORT_SYMBOL(__stack_chk_guard);
+ #endif
+ 
+diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
+index d7bee210a798a..83bcad72ec972 100644
+--- a/arch/arm64/lib/strcmp.S
++++ b/arch/arm64/lib/strcmp.S
+@@ -173,4 +173,4 @@ L(done):
+ 	ret
+ 
+ SYM_FUNC_END_PI(strcmp)
+-EXPORT_SYMBOL_NOKASAN(strcmp)
++EXPORT_SYMBOL_NOHWKASAN(strcmp)
+diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
+index 48d44f7fddb13..e42bcfcd37e6f 100644
+--- a/arch/arm64/lib/strncmp.S
++++ b/arch/arm64/lib/strncmp.S
+@@ -258,4 +258,4 @@ L(ret0):
+ 	ret
+ 
+ SYM_FUNC_END_PI(strncmp)
+-EXPORT_SYMBOL_NOKASAN(strncmp)
++EXPORT_SYMBOL_NOHWKASAN(strncmp)
+diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h
+index 911826ea83ce1..80eb2396d01eb 100644
+--- a/arch/m68k/include/asm/raw_io.h
++++ b/arch/m68k/include/asm/raw_io.h
+@@ -17,21 +17,21 @@
+  * two accesses to memory, which may be undesirable for some devices.
+  */
+ #define in_8(addr) \
+-    ({ u8 __v = (*(__force volatile u8 *) (addr)); __v; })
++    ({ u8 __v = (*(__force volatile u8 *) (unsigned long)(addr)); __v; })
+ #define in_be16(addr) \
+-    ({ u16 __v = (*(__force volatile u16 *) (addr)); __v; })
++    ({ u16 __v = (*(__force volatile u16 *) (unsigned long)(addr)); __v; })
+ #define in_be32(addr) \
+-    ({ u32 __v = (*(__force volatile u32 *) (addr)); __v; })
++    ({ u32 __v = (*(__force volatile u32 *) (unsigned long)(addr)); __v; })
+ #define in_le16(addr) \
+-    ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (addr)); __v; })
++    ({ u16 __v = le16_to_cpu(*(__force volatile __le16 *) (unsigned long)(addr)); __v; })
+ #define in_le32(addr) \
+-    ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (addr)); __v; })
++    ({ u32 __v = le32_to_cpu(*(__force volatile __le32 *) (unsigned long)(addr)); __v; })
+ 
+-#define out_8(addr,b) (void)((*(__force volatile u8 *) (addr)) = (b))
+-#define out_be16(addr,w) (void)((*(__force volatile u16 *) (addr)) = (w))
+-#define out_be32(addr,l) (void)((*(__force volatile u32 *) (addr)) = (l))
+-#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (addr)) = cpu_to_le16(w))
+-#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (addr)) = cpu_to_le32(l))
++#define out_8(addr,b) (void)((*(__force volatile u8 *) (unsigned long)(addr)) = (b))
++#define out_be16(addr,w) (void)((*(__force volatile u16 *) (unsigned long)(addr)) = (w))
++#define out_be32(addr,l) (void)((*(__force volatile u32 *) (unsigned long)(addr)) = (l))
++#define out_le16(addr,w) (void)((*(__force volatile __le16 *) (unsigned long)(addr)) = cpu_to_le16(w))
++#define out_le32(addr,l) (void)((*(__force volatile __le32 *) (unsigned long)(addr)) = cpu_to_le32(l))
+ 
+ #define raw_inb in_8
+ #define raw_inw in_be16
+diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h
+index d00313d1274e8..0561568f7b489 100644
+--- a/arch/parisc/include/asm/page.h
++++ b/arch/parisc/include/asm/page.h
+@@ -184,7 +184,7 @@ extern int npmem_ranges;
+ #include <asm-generic/getorder.h>
+ #include <asm/pdc.h>
+ 
+-#define PAGE0   ((struct zeropage *)__PAGE_OFFSET)
++#define PAGE0   ((struct zeropage *)absolute_pointer(__PAGE_OFFSET))
+ 
+ /* DEFINITION OF THE ZERO-PAGE (PAG0) */
+ /* based on work by Jason Eckhardt (jason@equator.com) */
+diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
+index 8e1d72a167594..7ceae24b0ca99 100644
+--- a/arch/sparc/kernel/ioport.c
++++ b/arch/sparc/kernel/ioport.c
+@@ -356,7 +356,9 @@ err_nomem:
+ void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ 		dma_addr_t dma_addr, unsigned long attrs)
+ {
+-	if (!sparc_dma_free_resource(cpu_addr, PAGE_ALIGN(size)))
++	size = PAGE_ALIGN(size);
++
++	if (!sparc_dma_free_resource(cpu_addr, size))
+ 		return;
+ 
+ 	dma_make_coherent(dma_addr, size);
+diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
+index 8e645ddac58e2..30f171b7b00c2 100644
+--- a/arch/sparc/kernel/mdesc.c
++++ b/arch/sparc/kernel/mdesc.c
+@@ -39,6 +39,7 @@ struct mdesc_hdr {
+ 	u32	node_sz; /* node block size */
+ 	u32	name_sz; /* name block size */
+ 	u32	data_sz; /* data block size */
++	char	data[];
+ } __attribute__((aligned(16)));
+ 
+ struct mdesc_elem {
+@@ -612,7 +613,7 @@ EXPORT_SYMBOL(mdesc_get_node_info);
+ 
+ static struct mdesc_elem *node_block(struct mdesc_hdr *mdesc)
+ {
+-	return (struct mdesc_elem *) (mdesc + 1);
++	return (struct mdesc_elem *) mdesc->data;
+ }
+ 
+ static void *name_block(struct mdesc_hdr *mdesc)
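
The sparc mdesc hunk replaces (mdesc + 1) pointer arithmetic with a C99 flexible array member: declaring char data[]; as the final field names the trailing storage explicitly, so accesses go through mdesc->data rather than past-the-end arithmetic that bounds checkers cannot reason about. A minimal sketch of the idiom, with hypothetical names:

#include <linux/slab.h>
#include <linux/overflow.h>
#include <linux/string.h>

struct blob {
	unsigned int len;
	char data[];	/* flexible array member: named trailing bytes */
};

static struct blob *blob_alloc(const void *payload, unsigned int len)
{
	/* struct_size() computes sizeof(*b) + len with overflow checking. */
	struct blob *b = kmalloc(struct_size(b, data, len), GFP_KERNEL);

	if (!b)
		return NULL;
	b->len = len;
	memcpy(b->data, payload, len);	/* no (b + 1) arithmetic needed */
	return b;
}
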
+diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
+index 5c7bcaa796232..1d5f14aff5f6f 100644
+--- a/arch/x86/include/asm/pkeys.h
++++ b/arch/x86/include/asm/pkeys.h
+@@ -2,8 +2,6 @@
+ #ifndef _ASM_X86_PKEYS_H
+ #define _ASM_X86_PKEYS_H
+ 
+-#define ARCH_DEFAULT_PKEY	0
+-
+ /*
+  * If more than 16 keys are ever supported, a thorough audit
+  * will be necessary to ensure that the types that store key
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index f3fbb84ff8a77..68c257a3de0d3 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -275,7 +275,7 @@ static inline int enqcmds(void __iomem *dst, const void *src)
+ {
+ 	const struct { char _[64]; } *__src = src;
+ 	struct { char _[64]; } __iomem *__dst = dst;
+-	int zf;
++	bool zf;
+ 
+ 	/*
+ 	 * ENQCMDS %(rdx), rax
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index bff3a784aec5b..d103e8489ec17 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -839,6 +839,20 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	x86_init.oem.arch_setup();
+ 
++	/*
++	 * Do some memory reservations *before* memory is added to memblock, so
++	 * memblock allocations won't overwrite it.
++	 *
++	 * After this point, everything still needed from the boot loader or
++	 * firmware or kernel text should be early reserved or marked not RAM in
++	 * e820. All other memory is free game.
++	 *
++	 * This call needs to happen before e820__memory_setup() which calls the
++	 * xen_memory_setup() on Xen dom0 which relies on the fact that those
++	 * early reservations have happened already.
++	 */
++	early_reserve_memory();
++
+ 	iomem_resource.end = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
+ 	e820__memory_setup();
+ 	parse_setup_data();
+@@ -885,18 +899,6 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	parse_early_param();
+ 
+-	/*
+-	 * Do some memory reservations *before* memory is added to
+-	 * memblock, so memblock allocations won't overwrite it.
+-	 * Do it after early param, so we could get (unlikely) panic from
+-	 * serial.
+-	 *
+-	 * After this point everything still needed from the boot loader or
+-	 * firmware or kernel text should be early reserved or marked not
+-	 * RAM in e820. All other memory is free game.
+-	 */
+-	early_reserve_memory();
+-
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ 	/*
+ 	 * Memory used by the kernel cannot be hot-removed because Linux
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index b2eefdefc1083..84a2c8c4af735 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -710,7 +710,8 @@ oops:
+ 
+ static noinline void
+ kernelmode_fixup_or_oops(struct pt_regs *regs, unsigned long error_code,
+-			 unsigned long address, int signal, int si_code)
++			 unsigned long address, int signal, int si_code,
++			 u32 pkey)
+ {
+ 	WARN_ON_ONCE(user_mode(regs));
+ 
+@@ -735,8 +736,12 @@ kernelmode_fixup_or_oops(struct pt_regs *regs, unsigned long error_code,
+ 
+ 			set_signal_archinfo(address, error_code);
+ 
+-			/* XXX: hwpoison faults will set the wrong code. */
+-			force_sig_fault(signal, si_code, (void __user *)address);
++			if (si_code == SEGV_PKUERR) {
++				force_sig_pkuerr((void __user *)address, pkey);
++			} else {
++				/* XXX: hwpoison faults will set the wrong code. */
++				force_sig_fault(signal, si_code, (void __user *)address);
++			}
+ 		}
+ 
+ 		/*
+@@ -798,7 +803,8 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
+ 	struct task_struct *tsk = current;
+ 
+ 	if (!user_mode(regs)) {
+-		kernelmode_fixup_or_oops(regs, error_code, address, pkey, si_code);
++		kernelmode_fixup_or_oops(regs, error_code, address,
++					 SIGSEGV, si_code, pkey);
+ 		return;
+ 	}
+ 
+@@ -930,7 +936,8 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
+ {
+ 	/* Kernel mode? Handle exceptions or die: */
+ 	if (!user_mode(regs)) {
+-		kernelmode_fixup_or_oops(regs, error_code, address, SIGBUS, BUS_ADRERR);
++		kernelmode_fixup_or_oops(regs, error_code, address,
++					 SIGBUS, BUS_ADRERR, ARCH_DEFAULT_PKEY);
+ 		return;
+ 	}
+ 
+@@ -1396,7 +1403,8 @@ good_area:
+ 		 */
+ 		if (!user_mode(regs))
+ 			kernelmode_fixup_or_oops(regs, error_code, address,
+-						 SIGBUS, BUS_ADRERR);
++						 SIGBUS, BUS_ADRERR,
++						 ARCH_DEFAULT_PKEY);
+ 		return;
+ 	}
+ 
+@@ -1416,7 +1424,8 @@ good_area:
+ 		return;
+ 
+ 	if (fatal_signal_pending(current) && !user_mode(regs)) {
+-		kernelmode_fixup_or_oops(regs, error_code, address, 0, 0);
++		kernelmode_fixup_or_oops(regs, error_code, address,
++					 0, 0, ARCH_DEFAULT_PKEY);
+ 		return;
+ 	}
+ 
+@@ -1424,7 +1433,8 @@ good_area:
+ 		/* Kernel mode? Handle exceptions or die: */
+ 		if (!user_mode(regs)) {
+ 			kernelmode_fixup_or_oops(regs, error_code, address,
+-						 SIGSEGV, SEGV_MAPERR);
++						 SIGSEGV, SEGV_MAPERR,
++						 ARCH_DEFAULT_PKEY);
+ 			return;
+ 		}
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 475d9c71b1713..d8aaccc9a246d 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -756,8 +756,8 @@ static void xen_write_idt_entry(gate_desc *dt, int entrynum, const gate_desc *g)
+ 	preempt_enable();
+ }
+ 
+-static void xen_convert_trap_info(const struct desc_ptr *desc,
+-				  struct trap_info *traps)
++static unsigned xen_convert_trap_info(const struct desc_ptr *desc,
++				      struct trap_info *traps, bool full)
+ {
+ 	unsigned in, out, count;
+ 
+@@ -767,17 +767,18 @@ static void xen_convert_trap_info(const struct desc_ptr *desc,
+ 	for (in = out = 0; in < count; in++) {
+ 		gate_desc *entry = (gate_desc *)(desc->address) + in;
+ 
+-		if (cvt_gate_to_trap(in, entry, &traps[out]))
++		if (cvt_gate_to_trap(in, entry, &traps[out]) || full)
+ 			out++;
+ 	}
+-	traps[out].address = 0;
++
++	return out;
+ }
+ 
+ void xen_copy_trap_info(struct trap_info *traps)
+ {
+ 	const struct desc_ptr *desc = this_cpu_ptr(&idt_desc);
+ 
+-	xen_convert_trap_info(desc, traps);
++	xen_convert_trap_info(desc, traps, true);
+ }
+ 
+ /* Load a new IDT into Xen.  In principle this can be per-CPU, so we
+@@ -787,6 +788,7 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ {
+ 	static DEFINE_SPINLOCK(lock);
+ 	static struct trap_info traps[257];
++	unsigned out;
+ 
+ 	trace_xen_cpu_load_idt(desc);
+ 
+@@ -794,7 +796,8 @@ static void xen_load_idt(const struct desc_ptr *desc)
+ 
+ 	memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
+ 
+-	xen_convert_trap_info(desc, traps);
++	out = xen_convert_trap_info(desc, traps, false);
++	memset(&traps[out], 0, sizeof(traps[0]));
+ 
+ 	xen_mc_flush();
+ 	if (HYPERVISOR_set_trap_table(traps))
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 26446f97deee4..28e11decbac58 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1385,10 +1385,14 @@ enomem:
+ 	/* alloc failed, nothing's initialized yet, free everything */
+ 	spin_lock_irq(&q->queue_lock);
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
++		struct blkcg *blkcg = blkg->blkcg;
++
++		spin_lock(&blkcg->lock);
+ 		if (blkg->pd[pol->plid]) {
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
++		spin_unlock(&blkcg->lock);
+ 	}
+ 	spin_unlock_irq(&q->queue_lock);
+ 	ret = -ENOMEM;
+@@ -1420,12 +1424,16 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 	__clear_bit(pol->plid, q->blkcg_pols);
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
++		struct blkcg *blkcg = blkg->blkcg;
++
++		spin_lock(&blkcg->lock);
+ 		if (blkg->pd[pol->plid]) {
+ 			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
++		spin_unlock(&blkcg->lock);
+ 	}
+ 
+ 	spin_unlock_irq(&q->queue_lock);
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index 410da060d1f5a..9e83159f5a527 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -426,8 +426,15 @@ EXPORT_SYMBOL(blk_integrity_register);
+  */
+ void blk_integrity_unregister(struct gendisk *disk)
+ {
++	struct blk_integrity *bi = &disk->queue->integrity;
++
++	if (!bi->profile)
++		return;
++
++	/* ensure all bios are off the integrity workqueue */
++	blk_flush_integrity();
+ 	blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, disk->queue);
+-	memset(&disk->queue->integrity, 0, sizeof(struct blk_integrity));
++	memset(bi, 0, sizeof(*bi));
+ }
+ EXPORT_SYMBOL(blk_integrity_unregister);
+ 
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 86f87346232a6..ff5caeb825429 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -208,7 +208,7 @@ static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
+ 
+ 	spin_lock_irqsave(&tags->lock, flags);
+ 	rq = tags->rqs[bitnr];
+-	if (!rq || !refcount_inc_not_zero(&rq->ref))
++	if (!rq || rq->tag != bitnr || !refcount_inc_not_zero(&rq->ref))
+ 		rq = NULL;
+ 	spin_unlock_irqrestore(&tags->lock, flags);
+ 	return rq;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index bcec598b89f23..9edb776249efd 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1852,6 +1852,7 @@ static void binder_deferred_fd_close(int fd)
+ }
+ 
+ static void binder_transaction_buffer_release(struct binder_proc *proc,
++					      struct binder_thread *thread,
+ 					      struct binder_buffer *buffer,
+ 					      binder_size_t failed_at,
+ 					      bool is_failure)
+@@ -2011,8 +2012,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 						&proc->alloc, &fd, buffer,
+ 						offset, sizeof(fd));
+ 				WARN_ON(err);
+-				if (!err)
++				if (!err) {
+ 					binder_deferred_fd_close(fd);
++					/*
++					 * Need to make sure the thread goes
++					 * back to userspace to complete the
++					 * deferred close
++					 */
++					if (thread)
++						thread->looper_need_return = true;
++				}
+ 			}
+ 		} break;
+ 		default:
+@@ -3038,9 +3047,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 	if (reply) {
+ 		binder_enqueue_thread_work(thread, tcomplete);
+ 		binder_inner_proc_lock(target_proc);
+-		if (target_thread->is_dead || target_proc->is_frozen) {
+-			return_error = target_thread->is_dead ?
+-				BR_DEAD_REPLY : BR_FROZEN_REPLY;
++		if (target_thread->is_dead) {
++			return_error = BR_DEAD_REPLY;
+ 			binder_inner_proc_unlock(target_proc);
+ 			goto err_dead_proc_or_thread;
+ 		}
+@@ -3105,7 +3113,7 @@ err_bad_parent:
+ err_copy_data_failed:
+ 	binder_free_txn_fixups(t);
+ 	trace_binder_transaction_failed_buffer_release(t->buffer);
+-	binder_transaction_buffer_release(target_proc, t->buffer,
++	binder_transaction_buffer_release(target_proc, NULL, t->buffer,
+ 					  buffer_offset, true);
+ 	if (target_node)
+ 		binder_dec_node_tmpref(target_node);
+@@ -3184,7 +3192,9 @@ err_invalid_target_handle:
+  * Cleanup buffer and free it.
+  */
+ static void
+-binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
++binder_free_buf(struct binder_proc *proc,
++		struct binder_thread *thread,
++		struct binder_buffer *buffer)
+ {
+ 	binder_inner_proc_lock(proc);
+ 	if (buffer->transaction) {
+@@ -3212,7 +3222,7 @@ binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, buffer, 0, false);
++	binder_transaction_buffer_release(proc, thread, buffer, 0, false);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+@@ -3414,7 +3424,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ 				     proc->pid, thread->pid, (u64)data_ptr,
+ 				     buffer->debug_id,
+ 				     buffer->transaction ? "active" : "finished");
+-			binder_free_buf(proc, buffer);
++			binder_free_buf(proc, thread, buffer);
+ 			break;
+ 		}
+ 
+@@ -4107,7 +4117,7 @@ retry:
+ 			buffer->transaction = NULL;
+ 			binder_cleanup_transaction(t, "fd fixups failed",
+ 						   BR_FAILED_REPLY);
+-			binder_free_buf(proc, buffer);
++			binder_free_buf(proc, thread, buffer);
+ 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
+ 				     "%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
+ 				     proc->pid, thread->pid,
+@@ -4648,6 +4658,22 @@ static int binder_ioctl_get_node_debug_info(struct binder_proc *proc,
+ 	return 0;
+ }
+ 
++static bool binder_txns_pending_ilocked(struct binder_proc *proc)
++{
++	struct rb_node *n;
++	struct binder_thread *thread;
++
++	if (proc->outstanding_txns > 0)
++		return true;
++
++	for (n = rb_first(&proc->threads); n; n = rb_next(n)) {
++		thread = rb_entry(n, struct binder_thread, rb_node);
++		if (thread->transaction_stack)
++			return true;
++	}
++	return false;
++}
++
+ static int binder_ioctl_freeze(struct binder_freeze_info *info,
+ 			       struct binder_proc *target_proc)
+ {
+@@ -4679,8 +4705,13 @@ static int binder_ioctl_freeze(struct binder_freeze_info *info,
+ 			(!target_proc->outstanding_txns),
+ 			msecs_to_jiffies(info->timeout_ms));
+ 
+-	if (!ret && target_proc->outstanding_txns)
+-		ret = -EAGAIN;
++	/* Check pending transactions that wait for reply */
++	if (ret >= 0) {
++		binder_inner_proc_lock(target_proc);
++		if (binder_txns_pending_ilocked(target_proc))
++			ret = -EAGAIN;
++		binder_inner_proc_unlock(target_proc);
++	}
+ 
+ 	if (ret < 0) {
+ 		binder_inner_proc_lock(target_proc);
+@@ -4696,6 +4727,7 @@ static int binder_ioctl_get_freezer_info(
+ {
+ 	struct binder_proc *target_proc;
+ 	bool found = false;
++	__u32 txns_pending;
+ 
+ 	info->sync_recv = 0;
+ 	info->async_recv = 0;
+@@ -4705,7 +4737,9 @@ static int binder_ioctl_get_freezer_info(
+ 		if (target_proc->pid == info->pid) {
+ 			found = true;
+ 			binder_inner_proc_lock(target_proc);
+-			info->sync_recv |= target_proc->sync_recv;
++			txns_pending = binder_txns_pending_ilocked(target_proc);
++			info->sync_recv |= target_proc->sync_recv |
++					(txns_pending << 1);
+ 			info->async_recv |= target_proc->async_recv;
+ 			binder_inner_proc_unlock(target_proc);
+ 		}
+diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
+index 810c0b84d3f81..402c4d4362a83 100644
+--- a/drivers/android/binder_internal.h
++++ b/drivers/android/binder_internal.h
+@@ -378,6 +378,8 @@ struct binder_ref {
+  *                        binder transactions
+  *                        (protected by @inner_lock)
+  * @sync_recv:            process received sync transactions since last frozen
++ *                        bit 0: received sync transaction after being frozen
++ *                        bit 1: new pending sync transaction during freezing
+  *                        (protected by @inner_lock)
+  * @async_recv:           process received async transactions since last frozen
+  *                        (protected by @inner_lock)
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index d1f1a82401207..bdb50a06c82ae 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -1113,6 +1113,9 @@ int device_create_managed_software_node(struct device *dev,
+ 	to_swnode(fwnode)->managed = true;
+ 	set_secondary_fwnode(dev, fwnode);
+ 
++	if (device_is_registered(dev))
++		software_node_notify(dev, KOBJ_ADD);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(device_create_managed_software_node);
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index df77b6bf5c641..763cea8418f8e 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -3090,6 +3090,7 @@ static int compat_insnlist(struct file *file, unsigned long arg)
+ 	mutex_lock(&dev->mutex);
+ 	rc = do_insnlist_ioctl(dev, insns, insnlist32.n_insns, file);
+ 	mutex_unlock(&dev->mutex);
++	kfree(insns);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index bb4549959b113..e7cd3882bda4d 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3251,11 +3251,15 @@ static int __init intel_pstate_init(void)
+ 	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ 		return -ENODEV;
+ 
+-	if (no_load)
+-		return -ENODEV;
+-
+ 	id = x86_match_cpu(hwp_support_ids);
+ 	if (id) {
++		bool hwp_forced = intel_pstate_hwp_is_enabled();
++
++		if (hwp_forced)
++			pr_info("HWP enabled by BIOS\n");
++		else if (no_load)
++			return -ENODEV;
++
+ 		copy_cpu_funcs(&core_funcs);
+ 		/*
+ 		 * Avoid enabling HWP for processors without EPP support,
+@@ -3265,8 +3269,7 @@ static int __init intel_pstate_init(void)
+ 		 * If HWP is enabled already, though, there is no choice but to
+ 		 * deal with it.
+ 		 */
+-		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) ||
+-		    intel_pstate_hwp_is_enabled()) {
++		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
+ 			hwp_active++;
+ 			hwp_mode_bdw = id->driver_data;
+ 			intel_pstate.attr = hwp_cpufreq_attrs;
+@@ -3278,7 +3281,11 @@ static int __init intel_pstate_init(void)
+ 
+ 			goto hwp_cpu_matched;
+ 		}
++		pr_info("HWP not enabled\n");
+ 	} else {
++		if (no_load)
++			return -ENODEV;
++
+ 		id = x86_match_cpu(intel_pstate_cpu_ids);
+ 		if (!id) {
+ 			pr_info("CPU model not supported\n");
+@@ -3357,10 +3364,9 @@ static int __init intel_pstate_setup(char *str)
+ 	else if (!strcmp(str, "passive"))
+ 		default_driver = &intel_cpufreq;
+ 
+-	if (!strcmp(str, "no_hwp")) {
+-		pr_info("HWP disabled\n");
++	if (!strcmp(str, "no_hwp"))
+ 		no_hwp = 1;
+-	}
++
+ 	if (!strcmp(str, "force"))
+ 		force_load = 1;
+ 	if (!strcmp(str, "hwp_only"))
+diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c
+index fc1153ab1ebbc..b8a7d9594afd4 100644
+--- a/drivers/edac/dmc520_edac.c
++++ b/drivers/edac/dmc520_edac.c
+@@ -464,7 +464,7 @@ static void dmc520_init_csrow(struct mem_ctl_info *mci)
+ 			dimm->grain	= pvt->mem_width_in_bytes;
+ 			dimm->dtype	= dt;
+ 			dimm->mtype	= mt;
+-			dimm->edac_mode	= EDAC_FLAG_SECDED;
++			dimm->edac_mode	= EDAC_SECDED;
+ 			dimm->nr_pages	= pages_per_rank / csi->nr_channels;
+ 		}
+ 	}
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 7e7146b22c160..7d08627e738b3 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -782,7 +782,7 @@ static void init_csrows(struct mem_ctl_info *mci)
+ 
+ 		for (j = 0; j < csi->nr_channels; j++) {
+ 			dimm		= csi->channels[j]->dimm;
+-			dimm->edac_mode	= EDAC_FLAG_SECDED;
++			dimm->edac_mode	= EDAC_SECDED;
+ 			dimm->mtype	= p_data->get_mtype(priv->baseaddr);
+ 			dimm->nr_pages	= (size >> PAGE_SHIFT) / csi->nr_channels;
+ 			dimm->grain	= SYNPS_EDAC_ERR_GRAIN;
+diff --git a/drivers/fpga/machxo2-spi.c b/drivers/fpga/machxo2-spi.c
+index 1afb41aa20d71..ea2ec3c6815cb 100644
+--- a/drivers/fpga/machxo2-spi.c
++++ b/drivers/fpga/machxo2-spi.c
+@@ -225,8 +225,10 @@ static int machxo2_write_init(struct fpga_manager *mgr,
+ 		goto fail;
+ 
+ 	get_status(spi, &status);
+-	if (test_bit(FAIL, &status))
++	if (test_bit(FAIL, &status)) {
++		ret = -EINVAL;
+ 		goto fail;
++	}
+ 	dump_status_reg(&status);
+ 
+ 	spi_message_init(&msg);
+@@ -313,6 +315,7 @@ static int machxo2_write_complete(struct fpga_manager *mgr,
+ 	dump_status_reg(&status);
+ 	if (!test_bit(DONE, &status)) {
+ 		machxo2_cleanup(mgr);
++		ret = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+@@ -335,6 +338,7 @@ static int machxo2_write_complete(struct fpga_manager *mgr,
+ 			break;
+ 		if (++refreshloop == MACHXO2_MAX_REFRESH_LOOP) {
+ 			machxo2_cleanup(mgr);
++			ret = -EINVAL;
+ 			goto fail;
+ 		}
+ 	} while (1);
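
The three machxo2-spi hunks all fix one common bug class: jumping to an error label without setting an error code first, so a failure path returns whatever ret last held, possibly 0 (success). The defensive shape is to assign the code on the line before the goto. A reduced sketch with hypothetical helpers:

#include <linux/errno.h>
#include <linux/types.h>

static bool device_status_ok(void);	/* hypothetical check */
static void teardown(void);		/* hypothetical cleanup */

static int hypothetical_init(void)
{
	int ret = 0;

	if (!device_status_ok()) {
		ret = -EINVAL;	/* without this, we would "fail" with 0 */
		goto fail;
	}
	return 0;

fail:
	teardown();
	return ret;
}
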
+diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c
+index f99f3c10bed03..39dca147d587a 100644
+--- a/drivers/gpio/gpio-uniphier.c
++++ b/drivers/gpio/gpio-uniphier.c
+@@ -184,7 +184,7 @@ static void uniphier_gpio_irq_mask(struct irq_data *data)
+ 
+ 	uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, 0);
+ 
+-	return irq_chip_mask_parent(data);
++	irq_chip_mask_parent(data);
+ }
+ 
+ static void uniphier_gpio_irq_unmask(struct irq_data *data)
+@@ -194,7 +194,7 @@ static void uniphier_gpio_irq_unmask(struct irq_data *data)
+ 
+ 	uniphier_gpio_reg_update(priv, UNIPHIER_GPIO_IRQ_EN, mask, mask);
+ 
+-	return irq_chip_unmask_parent(data);
++	irq_chip_unmask_parent(data);
+ }
+ 
+ static int uniphier_gpio_irq_set_type(struct irq_data *data, unsigned int type)
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 411525ac4cc45..47712b6903b51 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -313,9 +313,11 @@ static struct gpio_desc *acpi_request_own_gpiod(struct gpio_chip *chip,
+ 
+ 	ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout);
+ 	if (ret)
+-		gpiochip_free_own_desc(desc);
++		dev_warn(chip->parent,
++			 "Failed to set debounce-timeout for pin 0x%04X, err %d\n",
++			 pin, ret);
+ 
+-	return ret ? ERR_PTR(ret) : desc;
++	return desc;
+ }
+ 
+ static bool acpi_gpio_in_ignore_list(const char *controller_in, int pin_in)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 9e52948d49920..5a872adcfdb98 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -447,6 +447,7 @@ static const struct kfd_device_info navi10_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 145,
+ 	.num_sdma_engines = 2,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -465,6 +466,7 @@ static const struct kfd_device_info navi12_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 145,
+ 	.num_sdma_engines = 2,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -483,6 +485,7 @@ static const struct kfd_device_info navi14_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 145,
+ 	.num_sdma_engines = 2,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -501,6 +504,7 @@ static const struct kfd_device_info sienna_cichlid_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 4,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -519,6 +523,7 @@ static const struct kfd_device_info navy_flounder_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 2,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -536,7 +541,8 @@ static const struct kfd_device_info vangogh_device_info = {
+ 	.mqd_size_aligned = MQD_SIZE_ALIGNED,
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+-	.needs_pci_atomics = false,
++	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 1,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 2,
+@@ -555,6 +561,7 @@ static const struct kfd_device_info dimgrey_cavefish_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 2,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -573,6 +580,7 @@ static const struct kfd_device_info beige_goby_device_info = {
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+ 	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 1,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 8,
+@@ -590,7 +598,8 @@ static const struct kfd_device_info yellow_carp_device_info = {
+ 	.mqd_size_aligned = MQD_SIZE_ALIGNED,
+ 	.needs_iommu_device = false,
+ 	.supports_cwsr = true,
+-	.needs_pci_atomics = false,
++	.needs_pci_atomics = true,
++	.no_atomic_fw_version = 92,
+ 	.num_sdma_engines = 1,
+ 	.num_xgmi_sdma_engines = 0,
+ 	.num_sdma_queues_per_engine = 2,
+@@ -659,20 +668,6 @@ struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
+ 	if (!kfd)
+ 		return NULL;
+ 
+-	/* Allow BIF to recode atomics to PCIe 3.0 AtomicOps.
+-	 * 32 and 64-bit requests are possible and must be
+-	 * supported.
+-	 */
+-	kfd->pci_atomic_requested = amdgpu_amdkfd_have_atomics_support(kgd);
+-	if (device_info->needs_pci_atomics &&
+-	    !kfd->pci_atomic_requested) {
+-		dev_info(kfd_device,
+-			 "skipped device %x:%x, PCI rejects atomics\n",
+-			 pdev->vendor, pdev->device);
+-		kfree(kfd);
+-		return NULL;
+-	}
+-
+ 	kfd->kgd = kgd;
+ 	kfd->device_info = device_info;
+ 	kfd->pdev = pdev;
+@@ -772,6 +767,23 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 	kfd->vm_info.vmid_num_kfd = kfd->vm_info.last_vmid_kfd
+ 			- kfd->vm_info.first_vmid_kfd + 1;
+ 
++	/* Allow BIF to recode atomics to PCIe 3.0 AtomicOps.
++	 * 32 and 64-bit requests are possible and must be
++	 * supported.
++	 */
++	kfd->pci_atomic_requested = amdgpu_amdkfd_have_atomics_support(kfd->kgd);
++	if (!kfd->pci_atomic_requested &&
++	    kfd->device_info->needs_pci_atomics &&
++	    (!kfd->device_info->no_atomic_fw_version ||
++	     kfd->mec_fw_version < kfd->device_info->no_atomic_fw_version)) {
++		dev_info(kfd_device,
++			 "skipped device %x:%x, PCI rejects atomics %d<%d\n",
++			 kfd->pdev->vendor, kfd->pdev->device,
++			 kfd->mec_fw_version,
++			 kfd->device_info->no_atomic_fw_version);
++		return false;
++	}
++
+ 	/* Verify module parameters regarding mapped process number*/
+ 	if ((hws_max_conc_proc < 0)
+ 			|| (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
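/*
 * Editorial sketch (not part of the patch): the probe-time check moved
 * above is relaxed from "skip any device needing PCI atomics on a bus
 * without them" to a firmware-aware predicate -- MEC firmware at or past
 * no_atomic_fw_version can cope without platform atomics.  Using the
 * field names this hunk introduces, the skip condition reduces to:
 */
static bool kfd_skip_for_missing_atomics(const struct kfd_device_info *info,
					 bool pci_atomics_ok,
					 uint32_t mec_fw_version)
{
	if (!info->needs_pci_atomics || pci_atomics_ok)
		return false;
	/* 0 means no firmware version can compensate for missing atomics */
	return !info->no_atomic_fw_version ||
	       mec_fw_version < info->no_atomic_fw_version;
}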
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 3426743ed228b..b38a84a274387 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -206,6 +206,7 @@ struct kfd_device_info {
+ 	bool supports_cwsr;
+ 	bool needs_iommu_device;
+ 	bool needs_pci_atomics;
++	uint32_t no_atomic_fw_version;
+ 	unsigned int num_sdma_engines;
+ 	unsigned int num_xgmi_sdma_engines;
+ 	unsigned int num_sdma_queues_per_engine;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 0f7f1e5621ea4..e85035fd1ccb4 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -118,8 +118,16 @@ static void svm_range_remove_notifier(struct svm_range *prange)
+ 		mmu_interval_notifier_remove(&prange->notifier);
+ }
+ 
++static bool
++svm_is_valid_dma_mapping_addr(struct device *dev, dma_addr_t dma_addr)
++{
++	return dma_addr && !dma_mapping_error(dev, dma_addr) &&
++	       !(dma_addr & SVM_RANGE_VRAM_DOMAIN);
++}
++
+ static int
+ svm_range_dma_map_dev(struct amdgpu_device *adev, struct svm_range *prange,
++		      unsigned long offset, unsigned long npages,
+ 		      unsigned long *hmm_pfns, uint32_t gpuidx)
+ {
+ 	enum dma_data_direction dir = DMA_BIDIRECTIONAL;
+@@ -136,9 +144,9 @@ svm_range_dma_map_dev(struct amdgpu_device *adev, struct svm_range *prange,
+ 		prange->dma_addr[gpuidx] = addr;
+ 	}
+ 
+-	for (i = 0; i < prange->npages; i++) {
+-		if (WARN_ONCE(addr[i] && !dma_mapping_error(dev, addr[i]),
+-			      "leaking dma mapping\n"))
++	addr += offset;
++	for (i = 0; i < npages; i++) {
++		if (svm_is_valid_dma_mapping_addr(dev, addr[i]))
+ 			dma_unmap_page(dev, addr[i], PAGE_SIZE, dir);
+ 
+ 		page = hmm_pfn_to_page(hmm_pfns[i]);
+@@ -167,6 +175,7 @@ svm_range_dma_map_dev(struct amdgpu_device *adev, struct svm_range *prange,
+ 
+ static int
+ svm_range_dma_map(struct svm_range *prange, unsigned long *bitmap,
++		  unsigned long offset, unsigned long npages,
+ 		  unsigned long *hmm_pfns)
+ {
+ 	struct kfd_process *p;
+@@ -187,7 +196,8 @@ svm_range_dma_map(struct svm_range *prange, unsigned long *bitmap,
+ 		}
+ 		adev = (struct amdgpu_device *)pdd->dev->kgd;
+ 
+-		r = svm_range_dma_map_dev(adev, prange, hmm_pfns, gpuidx);
++		r = svm_range_dma_map_dev(adev, prange, offset, npages,
++					  hmm_pfns, gpuidx);
+ 		if (r)
+ 			break;
+ 	}
+@@ -205,7 +215,7 @@ void svm_range_dma_unmap(struct device *dev, dma_addr_t *dma_addr,
+ 		return;
+ 
+ 	for (i = offset; i < offset + npages; i++) {
+-		if (!dma_addr[i] || dma_mapping_error(dev, dma_addr[i]))
++		if (!svm_is_valid_dma_mapping_addr(dev, dma_addr[i]))
+ 			continue;
+ 		pr_debug("dma unmapping 0x%llx\n", dma_addr[i] >> PAGE_SHIFT);
+ 		dma_unmap_page(dev, dma_addr[i], PAGE_SIZE, dir);
+@@ -1088,11 +1098,6 @@ svm_range_get_pte_flags(struct amdgpu_device *adev, struct svm_range *prange,
+ 	pte_flags |= snoop ? AMDGPU_PTE_SNOOPED : 0;
+ 
+ 	pte_flags |= amdgpu_gem_va_map_flags(adev, mapping_flags);
+-
+-	pr_debug("svms 0x%p [0x%lx 0x%lx] vram %d PTE 0x%llx mapping 0x%x\n",
+-		 prange->svms, prange->start, prange->last,
+-		 (domain == SVM_RANGE_VRAM_DOMAIN) ? 1:0, pte_flags, mapping_flags);
+-
+ 	return pte_flags;
+ }
+ 
+@@ -1156,7 +1161,8 @@ svm_range_unmap_from_gpus(struct svm_range *prange, unsigned long start,
+ 
+ static int
+ svm_range_map_to_gpu(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+-		     struct svm_range *prange, dma_addr_t *dma_addr,
++		     struct svm_range *prange, unsigned long offset,
++		     unsigned long npages, bool readonly, dma_addr_t *dma_addr,
+ 		     struct amdgpu_device *bo_adev, struct dma_fence **fence)
+ {
+ 	struct amdgpu_bo_va bo_va;
+@@ -1165,16 +1171,17 @@ svm_range_map_to_gpu(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 	unsigned long last_start;
+ 	int last_domain;
+ 	int r = 0;
+-	int64_t i;
++	int64_t i, j;
+ 
+-	pr_debug("svms 0x%p [0x%lx 0x%lx]\n", prange->svms, prange->start,
+-		 prange->last);
++	last_start = prange->start + offset;
++
++	pr_debug("svms 0x%p [0x%lx 0x%lx] readonly %d\n", prange->svms,
++		 last_start, last_start + npages - 1, readonly);
+ 
+ 	if (prange->svm_bo && prange->ttm_res)
+ 		bo_va.is_xgmi = amdgpu_xgmi_same_hive(adev, bo_adev);
+ 
+-	last_start = prange->start;
+-	for (i = 0; i < prange->npages; i++) {
++	for (i = offset; i < offset + npages; i++) {
+ 		last_domain = dma_addr[i] & SVM_RANGE_VRAM_DOMAIN;
+ 		dma_addr[i] &= ~SVM_RANGE_VRAM_DOMAIN;
+ 		if ((prange->start + i) < prange->last &&
+@@ -1183,15 +1190,27 @@ svm_range_map_to_gpu(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 
+ 		pr_debug("Mapping range [0x%lx 0x%llx] on domain: %s\n",
+ 			 last_start, prange->start + i, last_domain ? "GPU" : "CPU");
++
+ 		pte_flags = svm_range_get_pte_flags(adev, prange, last_domain);
+-		r = amdgpu_vm_bo_update_mapping(adev, bo_adev, vm, false, false, NULL,
+-						last_start,
++		if (readonly)
++			pte_flags &= ~AMDGPU_PTE_WRITEABLE;
++
++		pr_debug("svms 0x%p map [0x%lx 0x%llx] vram %d PTE 0x%llx\n",
++			 prange->svms, last_start, prange->start + i,
++			 (last_domain == SVM_RANGE_VRAM_DOMAIN) ? 1 : 0,
++			 pte_flags);
++
++		r = amdgpu_vm_bo_update_mapping(adev, bo_adev, vm, false, false,
++						NULL, last_start,
+ 						prange->start + i, pte_flags,
+ 						last_start - prange->start,
+-						NULL,
+-						dma_addr,
++						NULL, dma_addr,
+ 						&vm->last_update,
+ 						&table_freed);
++
++		for (j = last_start - prange->start; j <= i; j++)
++			dma_addr[j] |= last_domain;
++
+ 		if (r) {
+ 			pr_debug("failed %d to map to gpu 0x%lx\n", r, prange->start);
+ 			goto out;
+@@ -1220,8 +1239,10 @@ out:
+ 	return r;
+ }
+ 
+-static int svm_range_map_to_gpus(struct svm_range *prange,
+-				 unsigned long *bitmap, bool wait)
++static int
++svm_range_map_to_gpus(struct svm_range *prange, unsigned long offset,
++		      unsigned long npages, bool readonly,
++		      unsigned long *bitmap, bool wait)
+ {
+ 	struct kfd_process_device *pdd;
+ 	struct amdgpu_device *bo_adev;
+@@ -1257,7 +1278,8 @@ static int svm_range_map_to_gpus(struct svm_range *prange,
+ 		}
+ 
+ 		r = svm_range_map_to_gpu(adev, drm_priv_to_vm(pdd->drm_priv),
+-					 prange, prange->dma_addr[gpuidx],
++					 prange, offset, npages, readonly,
++					 prange->dma_addr[gpuidx],
+ 					 bo_adev, wait ? &fence : NULL);
+ 		if (r)
+ 			break;
+@@ -1390,7 +1412,7 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
+ 				      int32_t gpuidx, bool intr, bool wait)
+ {
+ 	struct svm_validate_context ctx;
+-	struct hmm_range *hmm_range;
++	unsigned long start, end, addr;
+ 	struct kfd_process *p;
+ 	void *owner;
+ 	int32_t idx;
+@@ -1448,40 +1470,66 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
+ 			break;
+ 		}
+ 	}
+-	r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
+-				       prange->start << PAGE_SHIFT,
+-				       prange->npages, &hmm_range,
+-				       false, true, owner);
+-	if (r) {
+-		pr_debug("failed %d to get svm range pages\n", r);
+-		goto unreserve_out;
+-	}
+ 
+-	r = svm_range_dma_map(prange, ctx.bitmap,
+-			      hmm_range->hmm_pfns);
+-	if (r) {
+-		pr_debug("failed %d to dma map range\n", r);
+-		goto unreserve_out;
+-	}
++	start = prange->start << PAGE_SHIFT;
++	end = (prange->last + 1) << PAGE_SHIFT;
++	for (addr = start; addr < end && !r; ) {
++		struct hmm_range *hmm_range;
++		struct vm_area_struct *vma;
++		unsigned long next;
++		unsigned long offset;
++		unsigned long npages;
++		bool readonly;
+ 
+-	prange->validated_once = true;
++		vma = find_vma(mm, addr);
++		if (!vma || addr < vma->vm_start) {
++			r = -EFAULT;
++			goto unreserve_out;
++		}
++		readonly = !(vma->vm_flags & VM_WRITE);
+ 
+-	svm_range_lock(prange);
+-	if (amdgpu_hmm_range_get_pages_done(hmm_range)) {
+-		pr_debug("hmm update the range, need validate again\n");
+-		r = -EAGAIN;
+-		goto unlock_out;
+-	}
+-	if (!list_empty(&prange->child_list)) {
+-		pr_debug("range split by unmap in parallel, validate again\n");
+-		r = -EAGAIN;
+-		goto unlock_out;
+-	}
++		next = min(vma->vm_end, end);
++		npages = (next - addr) >> PAGE_SHIFT;
++		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
++					       addr, npages, &hmm_range,
++					       readonly, true, owner);
++		if (r) {
++			pr_debug("failed %d to get svm range pages\n", r);
++			goto unreserve_out;
++		}
++
++		offset = (addr - start) >> PAGE_SHIFT;
++		r = svm_range_dma_map(prange, ctx.bitmap, offset, npages,
++				      hmm_range->hmm_pfns);
++		if (r) {
++			pr_debug("failed %d to dma map range\n", r);
++			goto unreserve_out;
++		}
++
++		svm_range_lock(prange);
++		if (amdgpu_hmm_range_get_pages_done(hmm_range)) {
++			pr_debug("hmm update the range, need validate again\n");
++			r = -EAGAIN;
++			goto unlock_out;
++		}
++		if (!list_empty(&prange->child_list)) {
++			pr_debug("range split by unmap in parallel, validate again\n");
++			r = -EAGAIN;
++			goto unlock_out;
++		}
+ 
+-	r = svm_range_map_to_gpus(prange, ctx.bitmap, wait);
++		r = svm_range_map_to_gpus(prange, offset, npages, readonly,
++					  ctx.bitmap, wait);
+ 
+ unlock_out:
+-	svm_range_unlock(prange);
++		svm_range_unlock(prange);
++
++		addr = next;
++	}
++
++	if (addr == end)
++		prange->validated_once = true;
++
+ unreserve_out:
+ 	svm_range_unreserve_bos(&ctx);
+ 
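/*
 * Editorial sketch: the restructuring above replaces a single whole-range
 * HMM walk with a per-VMA loop, because writability can differ between
 * VMAs covered by one SVM range.  Stripped of the locking, revalidation
 * and mapping calls (elided below), the loop shape is roughly this
 * (caller holds the mmap read lock):
 */
static int walk_range_per_vma(struct mm_struct *mm, unsigned long start,
			      unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; ) {
		struct vm_area_struct *vma = find_vma(mm, addr);
		unsigned long next, offset, npages;
		bool readonly;

		if (!vma || addr < vma->vm_start)
			return -EFAULT;		/* hole in the address space */

		readonly = !(vma->vm_flags & VM_WRITE);
		next = min(vma->vm_end, end);	/* clamp chunk to the range */
		npages = (next - addr) >> PAGE_SHIFT;
		offset = (addr - start) >> PAGE_SHIFT;

		/* ... get, dma-map and GPU-map 'npages' pages at 'offset',
		 * honouring 'readonly' for this chunk only ... */

		addr = next;
	}
	return 0;
}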
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 6a4c6c47dcfaf..3bb567ea2cef9 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7514,6 +7514,32 @@ static void amdgpu_dm_connector_add_common_modes(struct drm_encoder *encoder,
+ 	}
+ }
+ 
++static void amdgpu_set_panel_orientation(struct drm_connector *connector)
++{
++	struct drm_encoder *encoder;
++	struct amdgpu_encoder *amdgpu_encoder;
++	const struct drm_display_mode *native_mode;
++
++	if (connector->connector_type != DRM_MODE_CONNECTOR_eDP &&
++	    connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
++		return;
++
++	encoder = amdgpu_dm_connector_to_encoder(connector);
++	if (!encoder)
++		return;
++
++	amdgpu_encoder = to_amdgpu_encoder(encoder);
++
++	native_mode = &amdgpu_encoder->native_mode;
++	if (native_mode->hdisplay == 0 || native_mode->vdisplay == 0)
++		return;
++
++	drm_connector_set_panel_orientation_with_quirk(connector,
++						       DRM_MODE_PANEL_ORIENTATION_UNKNOWN,
++						       native_mode->hdisplay,
++						       native_mode->vdisplay);
++}
++
+ static void amdgpu_dm_connector_ddc_get_modes(struct drm_connector *connector,
+ 					      struct edid *edid)
+ {
+@@ -7542,6 +7568,8 @@ static void amdgpu_dm_connector_ddc_get_modes(struct drm_connector *connector,
+ 		 * restored here.
+ 		 */
+ 		amdgpu_dm_update_freesync_caps(connector, edid);
++
++		amdgpu_set_panel_orientation(connector);
+ 	} else {
+ 		amdgpu_dm_connector->num_modes = 0;
+ 	}
+@@ -8051,8 +8079,26 @@ static bool is_content_protection_different(struct drm_connector_state *state,
+ 	    state->content_protection == DRM_MODE_CONTENT_PROTECTION_ENABLED)
+ 		state->content_protection = DRM_MODE_CONTENT_PROTECTION_DESIRED;
+ 
+-	/* Check if something is connected/enabled, otherwise we start hdcp but nothing is connected/enabled
+-	 * hot-plug, headless s3, dpms
++	/* Stream removed and re-enabled
++	 *
++	 * Can sometimes overlap with the HPD case,
++	 * thus set update_hdcp to false to avoid
++	 * setting HDCP multiple times.
++	 *
++	 * Handles:	DESIRED -> DESIRED (Special case)
++	 */
++	if (!(old_state->crtc && old_state->crtc->enabled) &&
++		state->crtc && state->crtc->enabled &&
++		connector->state->content_protection == DRM_MODE_CONTENT_PROTECTION_DESIRED) {
++		dm_con_state->update_hdcp = false;
++		return true;
++	}
++
++	/* Hot-plug, headless s3, dpms
++	 *
++	 * Only start HDCP if the display is connected/enabled.
++	 * update_hdcp flag will be set to false until the next
++	 * HPD comes in.
+ 	 *
+ 	 * Handles:	DESIRED -> DESIRED (Special case)
+ 	 */
+@@ -10469,7 +10515,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 		status = dc_validate_global_state(dc, dm_state->context, false);
+ 		if (status != DC_OK) {
+-			DC_LOG_WARNING("DC global validation failure: %s (%d)",
++			drm_dbg_atomic(dev,
++				       "DC global validation failure: %s (%d)",
+ 				       dc_status_to_str(status), status);
+ 			ret = -EINVAL;
+ 			goto fail;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index a6d0fd24fd02d..83ef72a3ebf41 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1849,9 +1849,13 @@ bool perform_link_training_with_retries(
+ 		dp_disable_link_phy(link, signal);
+ 
+ 		/* Abort link training if failure due to sink being unplugged. */
+-		if (status == LINK_TRAINING_ABORT)
+-			break;
+-		else if (do_fallback) {
++		if (status == LINK_TRAINING_ABORT) {
++			enum dc_connection_type type = dc_connection_none;
++
++			dc_link_detect_sink(link, &type);
++			if (type == dc_connection_none)
++				break;
++		} else if (do_fallback) {
+ 			decide_fallback_link_setting(*link_setting, &current_setting, status);
+ 			/* Fail link training if reduced link bandwidth no longer meets
+ 			 * stream requirements.
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+index 15c0b8af376f8..6e8fe1242752d 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+@@ -6870,6 +6870,8 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 	si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ 	si_thermal_start_thermal_controller(adev);
+ 
++	ni_update_current_ps(adev, boot_ps);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
+index cb38b1a17b098..82cbb29a05aa3 100644
+--- a/drivers/gpu/drm/ttm/ttm_pool.c
++++ b/drivers/gpu/drm/ttm/ttm_pool.c
+@@ -383,7 +383,8 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
+ 	else
+ 		gfp_flags |= GFP_HIGHUSER;
+ 
+-	for (order = min(MAX_ORDER - 1UL, __fls(num_pages)); num_pages;
++	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
++	     num_pages;
+ 	     order = min_t(unsigned int, order, __fls(num_pages))) {
+ 		bool apply_caching = false;
+ 		struct ttm_pool_type *pt;
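/*
 * Editorial note: __fls() returns unsigned long on most architectures but
 * plain int on at least one (the reported build failure was on sparc64),
 * and the kernel's min() rejects mixed-type comparisons at compile time.
 * min_t() sidesteps this by casting both operands to one explicit type:
 *
 *	order = min(MAX_ORDER - 1UL, __fls(num_pages));
 *		-> build error where __fls() is not unsigned long
 *
 *	order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
 *		-> OK everywhere, identical result
 */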
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index f91d37beb1133..3b391dee30445 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -166,8 +166,6 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ 	struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector);
+ 	bool connected = false;
+ 
+-	WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
+-
+ 	if (vc4_hdmi->hpd_gpio &&
+ 	    gpiod_get_value_cansleep(vc4_hdmi->hpd_gpio)) {
+ 		connected = true;
+@@ -188,12 +186,10 @@ vc4_hdmi_connector_detect(struct drm_connector *connector, bool force)
+ 			}
+ 		}
+ 
+-		pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 		return connector_status_connected;
+ 	}
+ 
+ 	cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
+-	pm_runtime_put(&vc4_hdmi->pdev->dev);
+ 	return connector_status_disconnected;
+ }
+ 
+@@ -635,6 +631,7 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder,
+ 		vc4_hdmi->variant->phy_disable(vc4_hdmi);
+ 
+ 	clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
++	clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 	clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 
+ 	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
+@@ -945,6 +942,13 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 		return;
+ 	}
+ 
++	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
++	if (ret) {
++		DRM_ERROR("Failed to turn on HSM clock: %d\n", ret);
++		clk_disable_unprepare(vc4_hdmi->pixel_clock);
++		return;
++	}
++
+ 	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
+ 
+ 	if (pixel_rate > 297000000)
+@@ -957,6 +961,7 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 	ret = clk_set_min_rate(vc4_hdmi->pixel_bvb_clock, bvb_rate);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to set pixel bvb clock rate: %d\n", ret);
++		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 		return;
+ 	}
+@@ -964,6 +969,7 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 	ret = clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to turn on pixel bvb clock: %d\n", ret);
++		clk_disable_unprepare(vc4_hdmi->hsm_clock);
+ 		clk_disable_unprepare(vc4_hdmi->pixel_clock);
+ 		return;
+ 	}
+@@ -2110,29 +2116,6 @@ static int vc5_hdmi_init_resources(struct vc4_hdmi *vc4_hdmi)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_PM
+-static int vc4_hdmi_runtime_suspend(struct device *dev)
+-{
+-	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+-
+-	clk_disable_unprepare(vc4_hdmi->hsm_clock);
+-
+-	return 0;
+-}
+-
+-static int vc4_hdmi_runtime_resume(struct device *dev)
+-{
+-	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+-	int ret;
+-
+-	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-#endif
+-
+ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ {
+ 	const struct vc4_hdmi_variant *variant = of_device_get_match_data(dev);
+@@ -2380,18 +2363,11 @@ static const struct of_device_id vc4_hdmi_dt_match[] = {
+ 	{}
+ };
+ 
+-static const struct dev_pm_ops vc4_hdmi_pm_ops = {
+-	SET_RUNTIME_PM_OPS(vc4_hdmi_runtime_suspend,
+-			   vc4_hdmi_runtime_resume,
+-			   NULL)
+-};
+-
+ struct platform_driver vc4_hdmi_driver = {
+ 	.probe = vc4_hdmi_dev_probe,
+ 	.remove = vc4_hdmi_dev_remove,
+ 	.driver = {
+ 		.name = "vc4_hdmi",
+ 		.of_match_table = vc4_hdmi_dt_match,
+-		.pm = &vc4_hdmi_pm_ops,
+ 	},
+ };
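/*
 * Editorial sketch: with the runtime-PM hooks deleted, the HSM clock is
 * now taken explicitly in the configure path, which is why every later
 * failure branch above gains a clk_disable_unprepare(hsm_clock).  The
 * generic shape of that acquire/unwind pattern (hypothetical names):
 */
static int enable_two_clocks(struct clk *pixel, struct clk *hsm)
{
	int ret;

	ret = clk_prepare_enable(pixel);
	if (ret)
		return ret;

	ret = clk_prepare_enable(hsm);
	if (ret)
		goto err_pixel;

	return 0;

err_pixel:
	clk_disable_unprepare(pixel);	/* undo in reverse order */
	return ret;
}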
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 4d5924e9f7666..aca7b595c4c78 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -409,6 +409,7 @@ config MESON_IRQ_GPIO
+ config GOLDFISH_PIC
+        bool "Goldfish programmable interrupt controller"
+        depends on MIPS && (GOLDFISH || COMPILE_TEST)
++       select GENERIC_IRQ_CHIP
+        select IRQ_DOMAIN
+        help
+          Say yes here to enable Goldfish interrupt controller driver used
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 7557ab5512953..53e0fb0562c11 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -359,16 +359,16 @@ static void armada_370_xp_ipi_send_mask(struct irq_data *d,
+ 		ARMADA_370_XP_SW_TRIG_INT_OFFS);
+ }
+ 
+-static void armada_370_xp_ipi_eoi(struct irq_data *d)
++static void armada_370_xp_ipi_ack(struct irq_data *d)
+ {
+ 	writel(~BIT(d->hwirq), per_cpu_int_base + ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS);
+ }
+ 
+ static struct irq_chip ipi_irqchip = {
+ 	.name		= "IPI",
++	.irq_ack	= armada_370_xp_ipi_ack,
+ 	.irq_mask	= armada_370_xp_ipi_mask,
+ 	.irq_unmask	= armada_370_xp_ipi_unmask,
+-	.irq_eoi	= armada_370_xp_ipi_eoi,
+ 	.ipi_send_mask	= armada_370_xp_ipi_send_mask,
+ };
+ 
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index ba39668c3e085..51584f4cccf46 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -4501,7 +4501,7 @@ static int its_vpe_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
+ 
+ 	if (err) {
+ 		if (i > 0)
+-			its_vpe_irq_domain_free(domain, virq, i - 1);
++			its_vpe_irq_domain_free(domain, virq, i);
+ 
+ 		its_lpi_free(bitmap, base, nr_ids);
+ 		its_free_prop_table(vprop_page);
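/*
 * Editorial note: a one-line off-by-one fix.  When allocation of entry i
 * fails, entries 0 .. i-1 were set up, and the free helper takes a COUNT
 * of entries starting at virq -- so it must be passed i, not i - 1, which
 * leaked the last successfully allocated VPE.  Reduced shape, with
 * hypothetical setup_one()/free_range() helpers:
 */
static int setup_all(unsigned int virq, int nr_irqs)
{
	int i, err;

	for (i = 0; i < nr_irqs; i++) {
		err = setup_one(virq + i);
		if (err) {
			if (i > 0)
				free_range(virq, i);	/* frees entries 0 .. i-1 */
			return err;
		}
	}
	return 0;
}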
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 38fbb3b598731..38cc8340e817d 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -277,8 +277,8 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
+ 
+ 	bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL);
+ 	if (bus_nr < 0) {
+-		rc = bus_nr;
+-		goto err_free;
++		kfree(bus);
++		return ERR_PTR(bus_nr);
+ 	}
+ 
+ 	bus->bus_nr = bus_nr;
+@@ -293,12 +293,12 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
+ 	dev_set_name(&bus->dev, "mcb:%d", bus_nr);
+ 	rc = device_add(&bus->dev);
+ 	if (rc)
+-		goto err_free;
++		goto err_put;
+ 
+ 	return bus;
+-err_free:
+-	put_device(carrier);
+-	kfree(bus);
++
++err_put:
++	put_device(&bus->dev);
+ 	return ERR_PTR(rc);
+ }
+ EXPORT_SYMBOL_NS_GPL(mcb_alloc_bus, MCB);
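/*
 * Editorial sketch: once a struct device is in play, its memory is owned
 * by the refcount and the release callback frees the containing object,
 * so error paths must drop the reference with put_device() rather than
 * kfree() -- and the removed label additionally dropped a reference on
 * the WRONG device, the carrier.  The corrected tail of the function:
 */
	rc = device_add(&bus->dev);
	if (rc)
		goto err_put;
	return bus;

err_put:
	put_device(&bus->dev);	/* ->release() frees 'bus' */
	return ERR_PTR(rc);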
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index ae8fe54ea3581..6c0c3d0d905aa 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -5700,10 +5700,6 @@ static int md_alloc(dev_t dev, char *name)
+ 	disk->flags |= GENHD_FL_EXT_DEVT;
+ 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
+ 	mddev->gendisk = disk;
+-	/* As soon as we call add_disk(), another thread could get
+-	 * through to md_open, so make sure it doesn't get too far
+-	 */
+-	mutex_lock(&mddev->open_mutex);
+ 	add_disk(disk);
+ 
+ 	error = kobject_add(&mddev->kobj, &disk_to_dev(disk)->kobj, "%s", "md");
+@@ -5718,7 +5714,6 @@ static int md_alloc(dev_t dev, char *name)
+ 	if (mddev->kobj.sd &&
+ 	    sysfs_create_group(&mddev->kobj, &md_bitmap_group))
+ 		pr_debug("pointless warning\n");
+-	mutex_unlock(&mddev->open_mutex);
+  abort:
+ 	mutex_unlock(&disks_mutex);
+ 	if (!error && mddev->kobj.sd) {
+diff --git a/drivers/misc/bcm-vk/bcm_vk_tty.c b/drivers/misc/bcm-vk/bcm_vk_tty.c
+index dae9eeed84a2b..89edc936b544b 100644
+--- a/drivers/misc/bcm-vk/bcm_vk_tty.c
++++ b/drivers/misc/bcm-vk/bcm_vk_tty.c
+@@ -267,13 +267,13 @@ int bcm_vk_tty_init(struct bcm_vk *vk, char *name)
+ 		struct device *tty_dev;
+ 
+ 		tty_port_init(&vk->tty[i].port);
+-		tty_dev = tty_port_register_device(&vk->tty[i].port, tty_drv,
+-						   i, dev);
++		tty_dev = tty_port_register_device_attr(&vk->tty[i].port,
++							tty_drv, i, dev, vk,
++							NULL);
+ 		if (IS_ERR(tty_dev)) {
+ 			err = PTR_ERR(tty_dev);
+ 			goto unwind;
+ 		}
+-		dev_set_drvdata(tty_dev, vk);
+ 		vk->tty[i].is_opened = false;
+ 	}
+ 
+diff --git a/drivers/misc/genwqe/card_base.c b/drivers/misc/genwqe/card_base.c
+index 2e1befbd1ad99..693981891870c 100644
+--- a/drivers/misc/genwqe/card_base.c
++++ b/drivers/misc/genwqe/card_base.c
+@@ -1090,7 +1090,7 @@ static int genwqe_pci_setup(struct genwqe_dev *cd)
+ 
+ 	/* check for 64-bit DMA address supported (DAC) */
+ 	/* check for 32-bit DMA address supported (SAC) */
+-	if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)) ||
++	if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)) &&
+ 	    dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32))) {
+ 		dev_err(&pci_dev->dev,
+ 			"err: neither DMA32 nor DMA64 supported\n");
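/*
 * Editorial note: a short-circuit logic fix.  dma_set_mask_and_coherent()
 * returns 0 on success, so "try 64-bit DMA, fall back to 32-bit, fail
 * only when BOTH fail" must chain with &&.  With the old ||, a failed
 * 64-bit attempt aborted the probe without ever trying the 32-bit
 * fallback:
 */
	if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32)))
		return -EIO;	/* neither DMA64 nor DMA32 is supported */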
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 111a6d5985da6..1c122a1f2f97d 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3012,7 +3012,7 @@ static void mv88e6xxx_teardown(struct dsa_switch *ds)
+ {
+ 	mv88e6xxx_teardown_devlink_params(ds);
+ 	dsa_devlink_resources_unregister(ds);
+-	mv88e6xxx_teardown_devlink_regions(ds);
++	mv88e6xxx_teardown_devlink_regions_global(ds);
+ }
+ 
+ static int mv88e6xxx_setup(struct dsa_switch *ds)
+@@ -3147,7 +3147,7 @@ unlock:
+ 	if (err)
+ 		goto out_resources;
+ 
+-	err = mv88e6xxx_setup_devlink_regions(ds);
++	err = mv88e6xxx_setup_devlink_regions_global(ds);
+ 	if (err)
+ 		goto out_params;
+ 
+@@ -3161,6 +3161,16 @@ out_resources:
+ 	return err;
+ }
+ 
++static int mv88e6xxx_port_setup(struct dsa_switch *ds, int port)
++{
++	return mv88e6xxx_setup_devlink_regions_port(ds, port);
++}
++
++static void mv88e6xxx_port_teardown(struct dsa_switch *ds, int port)
++{
++	mv88e6xxx_teardown_devlink_regions_port(ds, port);
++}
++
+ /* prod_id for switch families which do not have a PHY model number */
+ static const u16 family_prod_id_table[] = {
+ 	[MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
+@@ -6055,6 +6065,8 @@ static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
+ 	.change_tag_protocol	= mv88e6xxx_change_tag_protocol,
+ 	.setup			= mv88e6xxx_setup,
+ 	.teardown		= mv88e6xxx_teardown,
++	.port_setup		= mv88e6xxx_port_setup,
++	.port_teardown		= mv88e6xxx_port_teardown,
+ 	.phylink_validate	= mv88e6xxx_validate,
+ 	.phylink_mac_link_state	= mv88e6xxx_serdes_pcs_get_state,
+ 	.phylink_mac_config	= mv88e6xxx_mac_config,
+diff --git a/drivers/net/dsa/mv88e6xxx/devlink.c b/drivers/net/dsa/mv88e6xxx/devlink.c
+index 0c0f5ea6680c3..381068395c63b 100644
+--- a/drivers/net/dsa/mv88e6xxx/devlink.c
++++ b/drivers/net/dsa/mv88e6xxx/devlink.c
+@@ -647,26 +647,25 @@ static struct mv88e6xxx_region mv88e6xxx_regions[] = {
+ 	},
+ };
+ 
+-static void
+-mv88e6xxx_teardown_devlink_regions_global(struct mv88e6xxx_chip *chip)
++void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds)
+ {
++	struct mv88e6xxx_chip *chip = ds->priv;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++)
+ 		dsa_devlink_region_destroy(chip->regions[i]);
+ }
+ 
+-static void
+-mv88e6xxx_teardown_devlink_regions_port(struct mv88e6xxx_chip *chip,
+-					int port)
++void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port)
+ {
++	struct mv88e6xxx_chip *chip = ds->priv;
++
+ 	dsa_devlink_region_destroy(chip->ports[port].region);
+ }
+ 
+-static int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds,
+-						struct mv88e6xxx_chip *chip,
+-						int port)
++int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds, int port)
+ {
++	struct mv88e6xxx_chip *chip = ds->priv;
+ 	struct devlink_region *region;
+ 
+ 	region = dsa_devlink_port_region_create(ds,
+@@ -681,40 +680,10 @@ static int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds,
+ 	return 0;
+ }
+ 
+-static void
+-mv88e6xxx_teardown_devlink_regions_ports(struct mv88e6xxx_chip *chip)
+-{
+-	int port;
+-
+-	for (port = 0; port < mv88e6xxx_num_ports(chip); port++)
+-		mv88e6xxx_teardown_devlink_regions_port(chip, port);
+-}
+-
+-static int mv88e6xxx_setup_devlink_regions_ports(struct dsa_switch *ds,
+-						 struct mv88e6xxx_chip *chip)
+-{
+-	int port;
+-	int err;
+-
+-	for (port = 0; port < mv88e6xxx_num_ports(chip); port++) {
+-		err = mv88e6xxx_setup_devlink_regions_port(ds, chip, port);
+-		if (err)
+-			goto out;
+-	}
+-
+-	return 0;
+-
+-out:
+-	while (port-- > 0)
+-		mv88e6xxx_teardown_devlink_regions_port(chip, port);
+-
+-	return err;
+-}
+-
+-static int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds,
+-						  struct mv88e6xxx_chip *chip)
++int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds)
+ {
+ 	bool (*cond)(struct mv88e6xxx_chip *chip);
++	struct mv88e6xxx_chip *chip = ds->priv;
+ 	struct devlink_region_ops *ops;
+ 	struct devlink_region *region;
+ 	u64 size;
+@@ -753,30 +722,6 @@ out:
+ 	return PTR_ERR(region);
+ }
+ 
+-int mv88e6xxx_setup_devlink_regions(struct dsa_switch *ds)
+-{
+-	struct mv88e6xxx_chip *chip = ds->priv;
+-	int err;
+-
+-	err = mv88e6xxx_setup_devlink_regions_global(ds, chip);
+-	if (err)
+-		return err;
+-
+-	err = mv88e6xxx_setup_devlink_regions_ports(ds, chip);
+-	if (err)
+-		mv88e6xxx_teardown_devlink_regions_global(chip);
+-
+-	return err;
+-}
+-
+-void mv88e6xxx_teardown_devlink_regions(struct dsa_switch *ds)
+-{
+-	struct mv88e6xxx_chip *chip = ds->priv;
+-
+-	mv88e6xxx_teardown_devlink_regions_ports(chip);
+-	mv88e6xxx_teardown_devlink_regions_global(chip);
+-}
+-
+ int mv88e6xxx_devlink_info_get(struct dsa_switch *ds,
+ 			       struct devlink_info_req *req,
+ 			       struct netlink_ext_ack *extack)
+diff --git a/drivers/net/dsa/mv88e6xxx/devlink.h b/drivers/net/dsa/mv88e6xxx/devlink.h
+index 3d72db3dcf950..65ce6a6858b9f 100644
+--- a/drivers/net/dsa/mv88e6xxx/devlink.h
++++ b/drivers/net/dsa/mv88e6xxx/devlink.h
+@@ -12,8 +12,10 @@ int mv88e6xxx_devlink_param_get(struct dsa_switch *ds, u32 id,
+ 				struct devlink_param_gset_ctx *ctx);
+ int mv88e6xxx_devlink_param_set(struct dsa_switch *ds, u32 id,
+ 				struct devlink_param_gset_ctx *ctx);
+-int mv88e6xxx_setup_devlink_regions(struct dsa_switch *ds);
+-void mv88e6xxx_teardown_devlink_regions(struct dsa_switch *ds);
++int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds);
++void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds);
++int mv88e6xxx_setup_devlink_regions_port(struct dsa_switch *ds, int port);
++void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port);
+ 
+ int mv88e6xxx_devlink_info_get(struct dsa_switch *ds,
+ 			       struct devlink_info_req *req,
+diff --git a/drivers/net/dsa/realtek-smi-core.c b/drivers/net/dsa/realtek-smi-core.c
+index 8e49d4f85d48c..6bf46d76c0281 100644
+--- a/drivers/net/dsa/realtek-smi-core.c
++++ b/drivers/net/dsa/realtek-smi-core.c
+@@ -368,7 +368,7 @@ int realtek_smi_setup_mdio(struct realtek_smi *smi)
+ 	smi->slave_mii_bus->parent = smi->dev;
+ 	smi->ds->slave_mii_bus = smi->slave_mii_bus;
+ 
+-	ret = of_mdiobus_register(smi->slave_mii_bus, mdio_np);
++	ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
+ 	if (ret) {
+ 		dev_err(smi->dev, "unable to register MDIO bus %s\n",
+ 			smi->slave_mii_bus->id);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index f26d037356191..5b996330f228b 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -419,13 +419,13 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	if (deep) {
+ 		/* Reinitialize Nic/Vecs objects */
+ 		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++	}
+ 
++	if (netif_running(nic->ndev)) {
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+ 			goto err_exit;
+-	}
+ 
+-	if (netif_running(nic->ndev)) {
+ 		ret = aq_nic_start(nic);
+ 		if (ret)
+ 			goto err_exit;
+diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+index 85fa0ab7201c7..9513cfb5ba58c 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-bcma.c
++++ b/drivers/net/ethernet/broadcom/bgmac-bcma.c
+@@ -129,6 +129,8 @@ static int bgmac_probe(struct bcma_device *core)
+ 	bcma_set_drvdata(core, bgmac);
+ 
+ 	err = of_get_mac_address(bgmac->dev->of_node, bgmac->net_dev->dev_addr);
++	if (err == -EPROBE_DEFER)
++		return err;
+ 
+ 	/* If no MAC address assigned via device tree, check SPROM */
+ 	if (err) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index fdbf47446a997..f20b57b8cd70e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -385,7 +385,7 @@ static bool bnxt_txr_netif_try_stop_queue(struct bnxt *bp,
+ 	 * netif_tx_queue_stopped().
+ 	 */
+ 	smp_mb();
+-	if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh) {
++	if (bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh) {
+ 		netif_tx_wake_queue(txq);
+ 		return false;
+ 	}
+@@ -758,7 +758,7 @@ next_tx_int:
+ 	smp_mb();
+ 
+ 	if (unlikely(netif_tx_queue_stopped(txq)) &&
+-	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
++	    bnxt_tx_avail(bp, txr) >= bp->tx_wake_thresh &&
+ 	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
+ 		netif_tx_wake_queue(txq);
+ }
+@@ -2375,7 +2375,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 		if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
+ 			tx_pkts++;
+ 			/* return full budget so NAPI will complete. */
+-			if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
++			if (unlikely(tx_pkts >= bp->tx_wake_thresh)) {
+ 				rx_pkts = budget;
+ 				raw_cons = NEXT_RAW_CMP(raw_cons);
+ 				if (budget)
+@@ -3531,7 +3531,7 @@ static int bnxt_init_tx_rings(struct bnxt *bp)
+ 	u16 i;
+ 
+ 	bp->tx_wake_thresh = max_t(int, bp->tx_ring_size / 2,
+-				   MAX_SKB_FRAGS + 1);
++				   BNXT_MIN_TX_DESC_CNT);
+ 
+ 	for (i = 0; i < bp->tx_nr_rings; i++) {
+ 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index ba4e0fc38520c..d4dca4508d268 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -615,6 +615,11 @@ struct nqe_cn {
+ #define BNXT_MAX_RX_JUM_DESC_CNT	(RX_DESC_CNT * MAX_RX_AGG_PAGES - 1)
+ #define BNXT_MAX_TX_DESC_CNT		(TX_DESC_CNT * MAX_TX_PAGES - 1)
+ 
++/* Minimum TX BDs for a TX packet with MAX_SKB_FRAGS + 1.  We need one extra
++ * BD because the first TX BD is always a long BD.
++ */
++#define BNXT_MIN_TX_DESC_CNT		(MAX_SKB_FRAGS + 2)
++
+ #define RX_RING(x)	(((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
+ #define RX_IDX(x)	((x) & (RX_DESC_CNT - 1))
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 786ca51e669bc..3a8c284635922 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -784,7 +784,7 @@ static int bnxt_set_ringparam(struct net_device *dev,
+ 
+ 	if ((ering->rx_pending > BNXT_MAX_RX_DESC_CNT) ||
+ 	    (ering->tx_pending > BNXT_MAX_TX_DESC_CNT) ||
+-	    (ering->tx_pending <= MAX_SKB_FRAGS))
++	    (ering->tx_pending < BNXT_MIN_TX_DESC_CNT))
+ 		return -EINVAL;
+ 
+ 	if (netif_running(dev))
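/*
 * Editorial note: worked arithmetic behind BNXT_MIN_TX_DESC_CNT.  A
 * worst-case TX packet occupies one BD for the head plus one per page
 * fragment (MAX_SKB_FRAGS, typically 17), and the first BD is always a
 * "long" BD costing one extra slot:
 *
 *	1 (head) + MAX_SKB_FRAGS + 1 (long BD) = MAX_SKB_FRAGS + 2
 *
 * The paired bnxt.c hunks then wake the queue on avail >= threshold
 * rather than avail > threshold, and the ethtool hunk above refuses ring
 * sizes below that minimum, so a ring sized exactly at the minimum can
 * still make progress.
 */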
+diff --git a/drivers/net/ethernet/cadence/macb_pci.c b/drivers/net/ethernet/cadence/macb_pci.c
+index 8b7b59908a1ab..f66d22de5168d 100644
+--- a/drivers/net/ethernet/cadence/macb_pci.c
++++ b/drivers/net/ethernet/cadence/macb_pci.c
+@@ -111,9 +111,9 @@ static void macb_remove(struct pci_dev *pdev)
+ 	struct platform_device *plat_dev = pci_get_drvdata(pdev);
+ 	struct macb_platform_data *plat_data = dev_get_platdata(&plat_dev->dev);
+ 
+-	platform_device_unregister(plat_dev);
+ 	clk_unregister(plat_data->pclk);
+ 	clk_unregister(plat_data->hclk);
++	platform_device_unregister(plat_dev);
+ }
+ 
+ static const struct pci_device_id dev_id_table[] = {
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 3ca93adb96628..042327b9981fa 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -419,7 +419,7 @@ static void enetc_rx_dim_work(struct work_struct *w)
+ 
+ static void enetc_rx_net_dim(struct enetc_int_vector *v)
+ {
+-	struct dim_sample dim_sample;
++	struct dim_sample dim_sample = {};
+ 
+ 	v->comp_cnt++;
+ 
+@@ -1879,7 +1879,6 @@ static void enetc_clear_bdrs(struct enetc_ndev_priv *priv)
+ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ {
+ 	struct pci_dev *pdev = priv->si->pdev;
+-	cpumask_t cpu_mask;
+ 	int i, j, err;
+ 
+ 	for (i = 0; i < priv->bdr_int_num; i++) {
+@@ -1908,9 +1907,7 @@ static int enetc_setup_irqs(struct enetc_ndev_priv *priv)
+ 
+ 			enetc_wr(hw, ENETC_SIMSITRV(idx), entry);
+ 		}
+-		cpumask_clear(&cpu_mask);
+-		cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
+-		irq_set_affinity_hint(irq, &cpu_mask);
++		irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));
+ 	}
+ 
+ 	return 0;
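/*
 * Editorial note on the two enetc hunks: (1) struct dim_sample was handed
 * to the dim library with uninitialized stack fields, hence the new zero
 * initializer; (2) irq_set_affinity_hint() stores the mask POINTER -- it
 * is read back later via /proc/irq/<n>/affinity_hint -- so it must never
 * reference an on-stack cpumask.  get_cpu_mask() returns the address of a
 * constant, statically allocated single-CPU mask:
 */
	/* WRONG: 'cpu_mask' dies when this function returns
	 *	cpumask_t cpu_mask;
	 *	cpumask_set_cpu(cpu, &cpu_mask);
	 *	irq_set_affinity_hint(irq, &cpu_mask);
	 */

	/* OK: points at static storage that outlives the call */
	irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));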
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index ec9a7f8bc3fed..2eeafd61a07ee 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1878,12 +1878,12 @@ static void hclge_handle_over_8bd_err(struct hclge_dev *hdev,
+ 		return;
+ 	}
+ 
+-	dev_err(dev, "PPU_PF_ABNORMAL_INT_ST over_8bd_no_fe found, vf_id(%u), queue_id(%u)\n",
++	dev_err(dev, "PPU_PF_ABNORMAL_INT_ST over_8bd_no_fe found, vport(%u), queue_id(%u)\n",
+ 		vf_id, q_id);
+ 
+ 	if (vf_id) {
+ 		if (vf_id >= hdev->num_alloc_vport) {
+-			dev_err(dev, "invalid vf id(%u)\n", vf_id);
++			dev_err(dev, "invalid vport(%u)\n", vf_id);
+ 			return;
+ 		}
+ 
+@@ -1896,8 +1896,8 @@ static void hclge_handle_over_8bd_err(struct hclge_dev *hdev,
+ 
+ 		ret = hclge_inform_reset_assert_to_vf(&hdev->vport[vf_id]);
+ 		if (ret)
+-			dev_err(dev, "inform reset to vf(%u) failed %d!\n",
+-				hdev->vport->vport_id, ret);
++			dev_err(dev, "inform reset to vport(%u) failed %d!\n",
++				vf_id, ret);
+ 	} else {
+ 		set_bit(HNAE3_FUNC_RESET, reset_requests);
+ 	}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 72d55c028ac4b..90a72c79fec99 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3660,7 +3660,8 @@ static int hclge_set_all_vf_rst(struct hclge_dev *hdev, bool reset)
+ 		if (ret) {
+ 			dev_err(&hdev->pdev->dev,
+ 				"set vf(%u) rst failed %d!\n",
+-				vport->vport_id, ret);
++				vport->vport_id - HCLGE_VF_VPORT_START_NUM,
++				ret);
+ 			return ret;
+ 		}
+ 
+@@ -3675,7 +3676,8 @@ static int hclge_set_all_vf_rst(struct hclge_dev *hdev, bool reset)
+ 		if (ret)
+ 			dev_warn(&hdev->pdev->dev,
+ 				 "inform reset to vf(%u) failed %d!\n",
+-				 vport->vport_id, ret);
++				 vport->vport_id - HCLGE_VF_VPORT_START_NUM,
++				 ret);
+ 	}
+ 
+ 	return 0;
+@@ -4734,6 +4736,24 @@ static int hclge_get_rss(struct hnae3_handle *handle, u32 *indir,
+ 	return 0;
+ }
+ 
++static int hclge_parse_rss_hfunc(struct hclge_vport *vport, const u8 hfunc,
++				 u8 *hash_algo)
++{
++	switch (hfunc) {
++	case ETH_RSS_HASH_TOP:
++		*hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
++		return 0;
++	case ETH_RSS_HASH_XOR:
++		*hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
++		return 0;
++	case ETH_RSS_HASH_NO_CHANGE:
++		*hash_algo = vport->rss_algo;
++		return 0;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int hclge_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 			 const  u8 *key, const  u8 hfunc)
+ {
+@@ -4743,30 +4763,27 @@ static int hclge_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 	u8 hash_algo;
+ 	int ret, i;
+ 
++	ret = hclge_parse_rss_hfunc(vport, hfunc, &hash_algo);
++	if (ret) {
++		dev_err(&hdev->pdev->dev, "invalid hfunc type %u\n", hfunc);
++		return ret;
++	}
++
+ 	/* Set the RSS Hash Key if specified by the user */

+ 	if (key) {
+-		switch (hfunc) {
+-		case ETH_RSS_HASH_TOP:
+-			hash_algo = HCLGE_RSS_HASH_ALGO_TOEPLITZ;
+-			break;
+-		case ETH_RSS_HASH_XOR:
+-			hash_algo = HCLGE_RSS_HASH_ALGO_SIMPLE;
+-			break;
+-		case ETH_RSS_HASH_NO_CHANGE:
+-			hash_algo = vport->rss_algo;
+-			break;
+-		default:
+-			return -EINVAL;
+-		}
+-
+ 		ret = hclge_set_rss_algo_key(hdev, hash_algo, key);
+ 		if (ret)
+ 			return ret;
+ 
+ 		/* Update the shadow RSS key with user specified qids */
+ 		memcpy(vport->rss_hash_key, key, HCLGE_RSS_KEY_SIZE);
+-		vport->rss_algo = hash_algo;
++	} else {
++		ret = hclge_set_rss_algo_key(hdev, hash_algo,
++					     vport->rss_hash_key);
++		if (ret)
++			return ret;
+ 	}
++	vport->rss_algo = hash_algo;
+ 
+ 	/* Update the shadow RSS table with user specified qids */
+ 	for (i = 0; i < ae_dev->dev_specs.rss_ind_tbl_size; i++)
+@@ -6620,10 +6637,13 @@ static int hclge_fd_parse_ring_cookie(struct hclge_dev *hdev, u64 ring_cookie,
+ 		u8 vf = ethtool_get_flow_spec_ring_vf(ring_cookie);
+ 		u16 tqps;
+ 
++		/* To stay consistent with the user's configuration, subtract 1
++		 * when printing 'vf', because ethtool adds 1 to the VF id.
++		 */
+ 		if (vf > hdev->num_req_vfs) {
+ 			dev_err(&hdev->pdev->dev,
+-				"Error: vf id (%u) > max vf num (%u)\n",
+-				vf, hdev->num_req_vfs);
++				"Error: vf id (%u) should be less than %u\n",
++				vf - 1, hdev->num_req_vfs);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -9790,6 +9810,9 @@ static int hclge_set_vlan_filter_hw(struct hclge_dev *hdev, __be16 proto,
+ 	if (is_kill && !vlan_id)
+ 		return 0;
+ 
++	if (vlan_id >= VLAN_N_VID)
++		return -EINVAL;
++
+ 	ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+@@ -10696,7 +10719,8 @@ static int hclge_reset_tqp_cmd_send(struct hclge_dev *hdev, u16 queue_id,
+ 	return 0;
+ }
+ 
+-static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id)
++static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id,
++				  u8 *reset_status)
+ {
+ 	struct hclge_reset_tqp_queue_cmd *req;
+ 	struct hclge_desc desc;
+@@ -10714,7 +10738,9 @@ static int hclge_get_reset_status(struct hclge_dev *hdev, u16 queue_id)
+ 		return ret;
+ 	}
+ 
+-	return hnae3_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B);
++	*reset_status = hnae3_get_bit(req->ready_to_reset, HCLGE_TQP_RESET_B);
++
++	return 0;
+ }
+ 
+ u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, u16 queue_id)
+@@ -10733,7 +10759,7 @@ static int hclge_reset_tqp_cmd(struct hnae3_handle *handle)
+ 	struct hclge_vport *vport = hclge_get_vport(handle);
+ 	struct hclge_dev *hdev = vport->back;
+ 	u16 reset_try_times = 0;
+-	int reset_status;
++	u8 reset_status;
+ 	u16 queue_gid;
+ 	int ret;
+ 	u16 i;
+@@ -10749,7 +10775,11 @@ static int hclge_reset_tqp_cmd(struct hnae3_handle *handle)
+ 		}
+ 
+ 		while (reset_try_times++ < HCLGE_TQP_RESET_TRY_TIMES) {
+-			reset_status = hclge_get_reset_status(hdev, queue_gid);
++			ret = hclge_get_reset_status(hdev, queue_gid,
++						     &reset_status);
++			if (ret)
++				return ret;
++
+ 			if (reset_status)
+ 				break;
+ 
+@@ -11442,11 +11472,11 @@ static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+ 		struct hclge_vport *vport = &hdev->vport[i];
+ 		int ret;
+ 
+-		 /* Send cmd to clear VF's FUNC_RST_ING */
++		 /* Send cmd to clear vport's FUNC_RST_ING */
+ 		ret = hclge_set_vf_rst(hdev, vport->vport_id, false);
+ 		if (ret)
+ 			dev_warn(&hdev->pdev->dev,
+-				 "clear vf(%u) rst failed %d!\n",
++				 "clear vport(%u) rst failed %d!\n",
+ 				 vport->vport_id, ret);
+ 	}
+ }
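/*
 * Editorial sketch: the point of this RSS rework (mirrored for the VF
 * driver further below) is that selecting a new hash function WITHOUT
 * supplying a new key previously did nothing -- the switch only ran
 * inside "if (key)".  Hoisting the translation into a helper lets the
 * driver reprogram the hardware with the cached key in the key-less
 * case.  Reduced shape, with the HW_ALGO_* names being hypothetical
 * stand-ins for the driver's constants:
 */
static int parse_hfunc(u8 hfunc, u8 cached_algo, u8 *algo)
{
	switch (hfunc) {
	case ETH_RSS_HASH_TOP:		/* Toeplitz */
		*algo = HW_ALGO_TOEPLITZ;
		return 0;
	case ETH_RSS_HASH_XOR:
		*algo = HW_ALGO_SIMPLE;
		return 0;
	case ETH_RSS_HASH_NO_CHANGE:
		*algo = cached_algo;	/* keep what the HW already uses */
		return 0;
	default:
		return -EINVAL;
	}
}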
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 0dbed35645eda..c1a4b79a70504 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -564,7 +564,7 @@ static int hclge_reset_vf(struct hclge_vport *vport)
+ 	struct hclge_dev *hdev = vport->back;
+ 
+ 	dev_warn(&hdev->pdev->dev, "PF received VF reset request from VF %u!",
+-		 vport->vport_id);
++		 vport->vport_id - HCLGE_VF_VPORT_START_NUM);
+ 
+ 	return hclge_func_reset_cmd(hdev, vport->vport_id);
+ }
+@@ -588,9 +588,17 @@ static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+ 				     struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+ 				     struct hclge_respond_to_vf_msg *resp_msg)
+ {
++	struct hnae3_handle *handle = &vport->nic;
++	struct hclge_dev *hdev = vport->back;
+ 	u16 queue_id, qid_in_pf;
+ 
+ 	memcpy(&queue_id, mbx_req->msg.data, sizeof(queue_id));
++	if (queue_id >= handle->kinfo.num_tqps) {
++		dev_err(&hdev->pdev->dev, "Invalid queue id(%u) from VF %u\n",
++			queue_id, mbx_req->mbx_src_vfid);
++		return;
++	}
++
+ 	qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id);
+ 	memcpy(resp_msg->data, &qid_in_pf, sizeof(qid_in_pf));
+ 	resp_msg->len = sizeof(qid_in_pf);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 78d5bf1ea5610..44618cc4cca10 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -581,7 +581,7 @@ int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate)
+ 		ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ 		if (ret) {
+ 			dev_err(&hdev->pdev->dev,
+-				"vf%u, qs%u failed to set tx_rate:%d, ret=%d\n",
++				"vport%u, qs%u failed to set tx_rate:%d, ret=%d\n",
+ 				vport->vport_id, shap_cfg_cmd->qs_id,
+ 				max_tx_rate, ret);
+ 			return ret;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index be3ea7023ed8c..22cf66004dfa2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -814,40 +814,56 @@ static int hclgevf_get_rss(struct hnae3_handle *handle, u32 *indir, u8 *key,
+ 	return 0;
+ }
+ 
++static int hclgevf_parse_rss_hfunc(struct hclgevf_dev *hdev, const u8 hfunc,
++				   u8 *hash_algo)
++{
++	switch (hfunc) {
++	case ETH_RSS_HASH_TOP:
++		*hash_algo = HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
++		return 0;
++	case ETH_RSS_HASH_XOR:
++		*hash_algo = HCLGEVF_RSS_HASH_ALGO_SIMPLE;
++		return 0;
++	case ETH_RSS_HASH_NO_CHANGE:
++		*hash_algo = hdev->rss_cfg.hash_algo;
++		return 0;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int hclgevf_set_rss(struct hnae3_handle *handle, const u32 *indir,
+ 			   const u8 *key, const u8 hfunc)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 	struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg;
++	u8 hash_algo;
+ 	int ret, i;
+ 
+ 	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) {
++		ret = hclgevf_parse_rss_hfunc(hdev, hfunc, &hash_algo);
++		if (ret)
++			return ret;
++
+ 		/* Set the RSS Hash Key if specified by the user */
+ 		if (key) {
+-			switch (hfunc) {
+-			case ETH_RSS_HASH_TOP:
+-				rss_cfg->hash_algo =
+-					HCLGEVF_RSS_HASH_ALGO_TOEPLITZ;
+-				break;
+-			case ETH_RSS_HASH_XOR:
+-				rss_cfg->hash_algo =
+-					HCLGEVF_RSS_HASH_ALGO_SIMPLE;
+-				break;
+-			case ETH_RSS_HASH_NO_CHANGE:
+-				break;
+-			default:
+-				return -EINVAL;
+-			}
+-
+-			ret = hclgevf_set_rss_algo_key(hdev, rss_cfg->hash_algo,
+-						       key);
+-			if (ret)
++			ret = hclgevf_set_rss_algo_key(hdev, hash_algo, key);
++			if (ret) {
++				dev_err(&hdev->pdev->dev,
++					"invalid hfunc type %u\n", hfunc);
+ 				return ret;
++			}
+ 
+ 			/* Update the shadow RSS key with user specified qids */
+ 			memcpy(rss_cfg->rss_hash_key, key,
+ 			       HCLGEVF_RSS_KEY_SIZE);
++		} else {
++			ret = hclgevf_set_rss_algo_key(hdev, hash_algo,
++						       rss_cfg->rss_hash_key);
++			if (ret)
++				return ret;
+ 		}
++		rss_cfg->hash_algo = hash_algo;
+ 	}
+ 
+ 	/* update the shadow RSS table with user specified qids */
+diff --git a/drivers/net/ethernet/i825xx/82596.c b/drivers/net/ethernet/i825xx/82596.c
+index fc8c7cd674712..8b12a5ab3818c 100644
+--- a/drivers/net/ethernet/i825xx/82596.c
++++ b/drivers/net/ethernet/i825xx/82596.c
+@@ -1155,7 +1155,7 @@ struct net_device * __init i82596_probe(int unit)
+ 			err = -ENODEV;
+ 			goto out;
+ 		}
+-		memcpy(eth_addr, (void *) 0xfffc1f2c, ETH_ALEN);	/* YUCK! Get addr from NOVRAM */
++		memcpy(eth_addr, absolute_pointer(0xfffc1f2c), ETH_ALEN); /* YUCK! Get addr from NOVRAM */
+ 		dev->base_addr = MVME_I596_BASE;
+ 		dev->irq = (unsigned) MVME16x_IRQ_I596;
+ 		goto found;
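/*
 * Editorial note: absolute_pointer() is a compiler.h helper introduced
 * for exactly this class of warning -- it launders a fixed hardware
 * address so GCC stops treating it as a tiny NULL-based object whose
 * bounds a memcpy() would overrun:
 *
 *	#define absolute_pointer(val)	RELOC_HIDE((void *)(val), 0)
 *
 * The generated access is unchanged; only the compiler's object-size
 * analysis is suppressed for it.
 */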
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+index b5f68f66d42a8..7bb1f20002b58 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+@@ -186,6 +186,9 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f)
+ 	int hash;
+ 	int i;
+ 
++	if (rhashtable_lookup(&eth->flow_table, &f->cookie, mtk_flow_ht_params))
++		return -EEXIST;
++
+ 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META)) {
+ 		struct flow_match_meta match;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 5d0c9c62382dc..1e672bc36c4dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -372,6 +372,9 @@ mlx4_en_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
+ 	int nhoff = skb_network_offset(skb);
+ 	int ret = 0;
+ 
++	if (skb->encapsulation)
++		return -EPROTONOSUPPORT;
++
+ 	if (skb->protocol != htons(ETH_P_IP))
+ 		return -EPROTONOSUPPORT;
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 2948d731a1c1c..512dff9551669 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1260,14 +1260,19 @@ static u32 ocelot_get_bond_mask(struct ocelot *ocelot, struct net_device *bond,
+ 	return mask;
+ }
+ 
+-static u32 ocelot_get_bridge_fwd_mask(struct ocelot *ocelot,
++static u32 ocelot_get_bridge_fwd_mask(struct ocelot *ocelot, int src_port,
+ 				      struct net_device *bridge)
+ {
++	struct ocelot_port *ocelot_port = ocelot->ports[src_port];
+ 	u32 mask = 0;
+ 	int port;
+ 
++	if (!ocelot_port || ocelot_port->bridge != bridge ||
++	    ocelot_port->stp_state != BR_STATE_FORWARDING)
++		return 0;
++
+ 	for (port = 0; port < ocelot->num_phys_ports; port++) {
+-		struct ocelot_port *ocelot_port = ocelot->ports[port];
++		ocelot_port = ocelot->ports[port];
+ 
+ 		if (!ocelot_port)
+ 			continue;
+@@ -1333,7 +1338,7 @@ void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot)
+ 			struct net_device *bridge = ocelot_port->bridge;
+ 			struct net_device *bond = ocelot_port->bond;
+ 
+-			mask = ocelot_get_bridge_fwd_mask(ocelot, bridge);
++			mask = ocelot_get_bridge_fwd_mask(ocelot, port, bridge);
+ 			mask |= cpu_fwd_mask;
+ 			mask &= ~BIT(port);
+ 			if (bond) {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index a99861124630a..68fbe536a1f32 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1297,6 +1297,14 @@ qed_iwarp_wait_cid_map_cleared(struct qed_hwfn *p_hwfn, struct qed_bmap *bmap)
+ 	prev_weight = weight;
+ 
+ 	while (weight) {
++		/* If the HW device is in recovery, all resources are
++		 * immediately reset without receiving a per-cid indication
++		 * from HW. In this case we don't expect the cid_map to be
++		 * cleared.
++		 */
++		if (p_hwfn->cdev->recov_in_prog)
++			return 0;
++
+ 		msleep(QED_IWARP_MAX_CID_CLEAN_TIME);
+ 
+ 		weight = bitmap_weight(bmap->bitmap, bmap->max_count);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index f16a157bb95a0..cf5baa5e59bcc 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -77,6 +77,14 @@ void qed_roce_stop(struct qed_hwfn *p_hwfn)
+ 	 * Beyond the added delay we clear the bitmap anyway.
+ 	 */
+ 	while (bitmap_weight(rcid_map->bitmap, rcid_map->max_count)) {
++		/* If the HW device is in recovery, all resources are
++		 * immediately reset without receiving a per-cid indication
++		 * from HW. In this case we don't expect the cid bitmap to be
++		 * cleared.
++		 */
++		if (p_hwfn->cdev->recov_in_prog)
++			return;
++
+ 		msleep(100);
+ 		if (wait_count++ > 20) {
+ 			DP_NOTICE(p_hwfn, "cid bitmap wait timed out\n");
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 0dbd189c2721d..2218bc3a624b4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -309,7 +309,7 @@ static void stmmac_clk_csr_set(struct stmmac_priv *priv)
+ 			priv->clk_csr = STMMAC_CSR_100_150M;
+ 		else if ((clk_rate >= CSR_F_150M) && (clk_rate < CSR_F_250M))
+ 			priv->clk_csr = STMMAC_CSR_150_250M;
+-		else if ((clk_rate >= CSR_F_250M) && (clk_rate < CSR_F_300M))
++		else if ((clk_rate >= CSR_F_250M) && (clk_rate <= CSR_F_300M))
+ 			priv->clk_csr = STMMAC_CSR_250_300M;
+ 	}
+ 
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index 8fe8887d506a3..6192244b304ab 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -68,9 +68,9 @@
+ #define SIXP_DAMA_OFF		0
+ 
+ /* default level 2 parameters */
+-#define SIXP_TXDELAY			(HZ/4)	/* in 1 s */
++#define SIXP_TXDELAY			25	/* 250 ms */
+ #define SIXP_PERSIST			50	/* in 256ths */
+-#define SIXP_SLOTTIME			(HZ/10)	/* in 1 s */
++#define SIXP_SLOTTIME			10	/* 100 ms */
+ #define SIXP_INIT_RESYNC_TIMEOUT	(3*HZ/2) /* in 1 s */
+ #define SIXP_RESYNC_TIMEOUT		5*HZ	/* in 1 s */
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 42e5a681183f3..0d3d9c3ee83c8 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1604,6 +1604,32 @@ int phylink_ethtool_ksettings_set(struct phylink *pl,
+ 	if (config.an_enabled && phylink_is_empty_linkmode(config.advertising))
+ 		return -EINVAL;
+ 
++	/* If this link is with an SFP, ensure that changes to advertised modes
++	 * also cause the associated interface to be selected such that the
++	 * link can be configured correctly.
++	 */
++	if (pl->sfp_port && pl->sfp_bus) {
++		config.interface = sfp_select_interface(pl->sfp_bus,
++							config.advertising);
++		if (config.interface == PHY_INTERFACE_MODE_NA) {
++			phylink_err(pl,
++				    "selection of interface failed, advertisement %*pb\n",
++				    __ETHTOOL_LINK_MODE_MASK_NBITS,
++				    config.advertising);
++			return -EINVAL;
++		}
++
++		/* Revalidate with the selected interface */
++		linkmode_copy(support, pl->supported);
++		if (phylink_validate(pl, support, &config)) {
++			phylink_err(pl, "validation of %s/%s with support %*pb failed\n",
++				    phylink_an_mode_str(pl->cur_link_an_mode),
++				    phy_modes(config.interface),
++				    __ETHTOOL_LINK_MODE_MASK_NBITS, support);
++			return -EINVAL;
++		}
++	}
++
+ 	mutex_lock(&pl->state_mutex);
+ 	pl->link_config.speed = config.speed;
+ 	pl->link_config.duplex = config.duplex;
+@@ -2183,7 +2209,9 @@ static int phylink_sfp_config(struct phylink *pl, u8 mode,
+ 	if (phy_interface_mode_is_8023z(iface) && pl->phydev)
+ 		return -EINVAL;
+ 
+-	changed = !linkmode_equal(pl->supported, support);
++	changed = !linkmode_equal(pl->supported, support) ||
++		  !linkmode_equal(pl->link_config.advertising,
++				  config.advertising);
+ 	if (changed) {
+ 		linkmode_copy(pl->supported, support);
+ 		linkmode_copy(pl->link_config.advertising, config.advertising);
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 18e0ca85f6537..3c7120ec70798 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2720,14 +2720,14 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
+ 
+ 	serial = kzalloc(sizeof(*serial), GFP_KERNEL);
+ 	if (!serial)
+-		goto exit;
++		goto err_free_dev;
+ 
+ 	hso_dev->port_data.dev_serial = serial;
+ 	serial->parent = hso_dev;
+ 
+ 	if (hso_serial_common_create
+ 	    (serial, 1, CTRL_URB_RX_SIZE, CTRL_URB_TX_SIZE))
+-		goto exit;
++		goto err_free_serial;
+ 
+ 	serial->tx_data_length--;
+ 	serial->write_data = hso_mux_serial_write_data;
+@@ -2743,11 +2743,9 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
+ 	/* done, return it */
+ 	return hso_dev;
+ 
+-exit:
+-	if (serial) {
+-		tty_unregister_device(tty_drv, serial->minor);
+-		kfree(serial);
+-	}
++err_free_serial:
++	kfree(serial);
++err_free_dev:
+ 	kfree(hso_dev);
+ 	return NULL;
+ 
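/*
 * Editorial sketch: the hso fix replaces a single catch-all "exit" label
 * -- which called tty_unregister_device() even when registration had
 * never happened -- with the usual one-label-per-resource unwind, each
 * label reached only once the corresponding resource exists.  The
 * patched function, reduced to its control flow:
 */
static struct hso_device *create(void)
{
	struct hso_device *hso_dev;
	struct hso_serial *serial;

	hso_dev = kzalloc(sizeof(*hso_dev), GFP_KERNEL);
	if (!hso_dev)
		return NULL;

	serial = kzalloc(sizeof(*serial), GFP_KERNEL);
	if (!serial)
		goto err_free_dev;

	if (hso_serial_common_create(serial, 1, CTRL_URB_RX_SIZE,
				     CTRL_URB_TX_SIZE))
		goto err_free_serial;

	return hso_dev;

err_free_serial:
	kfree(serial);
err_free_dev:
	kfree(hso_dev);
	return NULL;
}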
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index eee493685aad5..fb96658bb91ff 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -435,6 +435,10 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
+ 
+ 		skb_reserve(skb, p - buf);
+ 		skb_put(skb, len);
++
++		page = (struct page *)page->private;
++		if (page)
++			give_pages(rq, page);
+ 		goto ok;
+ 	}
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 5a8df5a195cb5..141635a35c28a 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -4756,12 +4756,12 @@ static void __net_exit vxlan_exit_batch_net(struct list_head *net_list)
+ 	LIST_HEAD(list);
+ 	unsigned int h;
+ 
+-	rtnl_lock();
+ 	list_for_each_entry(net, net_list, exit_list) {
+ 		struct vxlan_net *vn = net_generic(net, vxlan_net_id);
+ 
+ 		unregister_nexthop_notifier(net, &vn->nexthop_notifier_block);
+ 	}
++	rtnl_lock();
+ 	list_for_each_entry(net, net_list, exit_list)
+ 		vxlan_destroy_tunnels(net, &list);
+ 
+diff --git a/drivers/nfc/st-nci/spi.c b/drivers/nfc/st-nci/spi.c
+index 250d56f204c3e..e62b1a0916d89 100644
+--- a/drivers/nfc/st-nci/spi.c
++++ b/drivers/nfc/st-nci/spi.c
+@@ -278,6 +278,7 @@ static int st_nci_spi_remove(struct spi_device *dev)
+ 
+ static struct spi_device_id st_nci_spi_id_table[] = {
+ 	{ST_NCI_SPI_DRIVER_NAME, 0},
++	{"st21nfcb-spi", 0},
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(spi, st_nci_spi_id_table);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 84e7cb9f19681..e2374319df61a 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -13,7 +13,6 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/backing-dev.h>
+-#include <linux/list_sort.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/pr.h>
+@@ -3688,15 +3687,6 @@ out_unlock:
+ 	return ret;
+ }
+ 
+-static int ns_cmp(void *priv, const struct list_head *a,
+-		const struct list_head *b)
+-{
+-	struct nvme_ns *nsa = container_of(a, struct nvme_ns, list);
+-	struct nvme_ns *nsb = container_of(b, struct nvme_ns, list);
+-
+-	return nsa->head->ns_id - nsb->head->ns_id;
+-}
+-
+ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ {
+ 	struct nvme_ns *ns, *ret = NULL;
+@@ -3717,6 +3707,22 @@ struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ }
+ EXPORT_SYMBOL_NS_GPL(nvme_find_get_ns, NVME_TARGET_PASSTHRU);
+ 
++/*
++ * Add the namespace to the controller list while keeping the list ordered.
++ */
++static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
++{
++	struct nvme_ns *tmp;
++
++	list_for_each_entry_reverse(tmp, &ns->ctrl->namespaces, list) {
++		if (tmp->head->ns_id < ns->head->ns_id) {
++			list_add(&ns->list, &tmp->list);
++			return;
++		}
++	}
++	list_add(&ns->list, &ns->ctrl->namespaces);
++}
++
+ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 		struct nvme_ns_ids *ids)
+ {
+@@ -3778,9 +3784,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ 	}
+ 
+ 	down_write(&ctrl->namespaces_rwsem);
+-	list_add_tail(&ns->list, &ctrl->namespaces);
++	nvme_ns_add_to_ctrl_list(ns);
+ 	up_write(&ctrl->namespaces_rwsem);
+-
+ 	nvme_get_ctrl(ctrl);
+ 
+ 	device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
+@@ -4059,10 +4064,6 @@ static void nvme_scan_work(struct work_struct *work)
+ 	if (nvme_scan_ns_list(ctrl) != 0)
+ 		nvme_scan_ns_sequential(ctrl);
+ 	mutex_unlock(&ctrl->scan_lock);
+-
+-	down_write(&ctrl->namespaces_rwsem);
+-	list_sort(NULL, &ctrl->namespaces, ns_cmp);
+-	up_write(&ctrl->namespaces_rwsem);
+ }
+ 
+ /*
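
The nvme core change drops the post-scan list_sort() in favour of inserting each namespace at its sorted position as it is created. A userspace sketch of sorted insertion (the kernel scans a doubly linked list in reverse, but the idea is the same):

#include <stdio.h>
#include <stdlib.h>

struct ns {
	unsigned int id;
	struct ns *next;
};

/* Insert n while keeping the list sorted by id, ascending. */
static void ns_insert_sorted(struct ns **head, struct ns *n)
{
	struct ns **pos = head;

	while (*pos && (*pos)->id < n->id)
		pos = &(*pos)->next;
	n->next = *pos;
	*pos = n;
}

int main(void)
{
	unsigned int ids[] = { 3, 1, 2 };
	struct ns *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct ns *n = malloc(sizeof(*n));

		if (!n)
			return 1;
		n->id = ids[i];
		ns_insert_sorted(&head, n);
	}
	for (struct ns *n = head; n; n = n->next)
		printf("ns %u\n", n->id);	/* prints 1 2 3 */
	while (head) {
		struct ns *n = head;

		head = n->next;
		free(n);
	}
	return 0;
}
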
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 3f32c5e86bfcb..abc9bdfd48bde 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -583,14 +583,17 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ 
+ 	down_read(&ctrl->namespaces_rwsem);
+ 	list_for_each_entry(ns, &ctrl->namespaces, list) {
+-		unsigned nsid = le32_to_cpu(desc->nsids[n]);
+-
++		unsigned nsid;
++again:
++		nsid = le32_to_cpu(desc->nsids[n]);
+ 		if (ns->head->ns_id < nsid)
+ 			continue;
+ 		if (ns->head->ns_id == nsid)
+ 			nvme_update_ns_ana_state(desc, ns);
+ 		if (++n == nr_nsids)
+ 			break;
++		if (ns->head->ns_id > nsid)
++			goto again;
+ 	}
+ 	up_read(&ctrl->namespaces_rwsem);
+ 	return 0;
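
The multipath fix turns the ANA update into a proper merge walk over two sorted sequences: whichever side is behind advances, and the goto again re-tests the current namespace against the next nsid. The same walk over two sorted arrays:

#include <stdio.h>

int main(void)
{
	unsigned int ns[]    = { 1, 2, 4, 7 };	/* sorted namespace ids */
	unsigned int nsids[] = { 2, 3, 7 };	/* sorted descriptor ids */
	int i = 0, n = 0;

	while (i < 4 && n < 3) {
		if (ns[i] < nsids[n]) {
			i++;		/* namespace side behind: advance it */
		} else if (ns[i] > nsids[n]) {
			n++;		/* descriptor behind: retry same ns */
		} else {
			printf("update ns %u\n", ns[i]);
			i++;
			n++;
		}
	}
	return 0;
}
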
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index a68704e39084e..042c594bc57e2 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -656,8 +656,8 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
+ 	if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
+ 		return;
+ 
+-	nvme_rdma_destroy_queue_ib(queue);
+ 	rdma_destroy_id(queue->cm_id);
++	nvme_rdma_destroy_queue_ib(queue);
+ 	mutex_destroy(&queue->queue_lock);
+ }
+ 
+@@ -1815,14 +1815,10 @@ static int nvme_rdma_conn_established(struct nvme_rdma_queue *queue)
+ 	for (i = 0; i < queue->queue_size; i++) {
+ 		ret = nvme_rdma_post_recv(queue, &queue->rsp_ring[i]);
+ 		if (ret)
+-			goto out_destroy_queue_ib;
++			return ret;
+ 	}
+ 
+ 	return 0;
+-
+-out_destroy_queue_ib:
+-	nvme_rdma_destroy_queue_ib(queue);
+-	return ret;
+ }
+ 
+ static int nvme_rdma_conn_rejected(struct nvme_rdma_queue *queue,
+@@ -1916,14 +1912,10 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
+ 	if (ret) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"rdma_connect_locked failed (%d).\n", ret);
+-		goto out_destroy_queue_ib;
++		return ret;
+ 	}
+ 
+ 	return 0;
+-
+-out_destroy_queue_ib:
+-	nvme_rdma_destroy_queue_ib(queue);
+-	return ret;
+ }
+ 
+ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+@@ -1954,8 +1946,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ 	case RDMA_CM_EVENT_ROUTE_ERROR:
+ 	case RDMA_CM_EVENT_CONNECT_ERROR:
+ 	case RDMA_CM_EVENT_UNREACHABLE:
+-		nvme_rdma_destroy_queue_ib(queue);
+-		fallthrough;
+ 	case RDMA_CM_EVENT_ADDR_ERROR:
+ 		dev_dbg(queue->ctrl->ctrl.device,
+ 			"CM error event %d\n", ev->event);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 19a711395cdc3..fd28a23d45ed6 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -614,7 +614,7 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+ 	data->ttag = pdu->ttag;
+ 	data->command_id = nvme_cid(rq);
+-	data->data_offset = cpu_to_le32(req->data_sent);
++	data->data_offset = pdu->r2t_offset;
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ 	return 0;
+ }
+@@ -940,7 +940,15 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 			nvme_tcp_ddgst_update(queue->snd_hash, page,
+ 					offset, ret);
+ 
+-		/* fully successful last write*/
++		/*
++		 * update the request iterator except for the last payload send
++		 * in the request where we don't want to modify it as we may
++		 * compete with the RX path completing the request.
++		 */
++		if (req->data_sent + ret < req->data_len)
++			nvme_tcp_advance_req(req, ret);
++
++		/* fully successful last send in current PDU */
+ 		if (last && ret == len) {
+ 			if (queue->data_digest) {
+ 				nvme_tcp_ddgst_final(queue->snd_hash,
+@@ -952,7 +960,6 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 			}
+ 			return 1;
+ 		}
+-		nvme_tcp_advance_req(req, ret);
+ 	}
+ 	return -EAGAIN;
+ }
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index fa88bf9cba4d0..3e5053c5ec836 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1067,7 +1067,7 @@ static ssize_t nvmet_subsys_attr_serial_show(struct config_item *item,
+ {
+ 	struct nvmet_subsys *subsys = to_subsys(item);
+ 
+-	return snprintf(page, PAGE_SIZE, "%*s\n",
++	return snprintf(page, PAGE_SIZE, "%.*s\n",
+ 			NVMET_SN_MAX_SIZE, subsys->serial);
+ }
+ 
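
The nvmet fix swaps a field width for a precision: %*s pads to a minimum width but keeps reading until a NUL, while %.*s reads at most that many bytes, which is what a fixed-size, possibly unterminated serial buffer needs. A short demonstration:

#include <stdio.h>

int main(void)
{
	/* 8 bytes, deliberately not NUL-terminated */
	char serial[8] = { 'S', 'N', '1', '2', '3', '4', '5', '6' };

	/* precision: stop after 8 bytes even without a terminator */
	printf("[%.*s]\n", 8, serial);		/* [SN123456] */

	/* width: pads to 8 but reads to the NUL - unsafe for serial[] */
	printf("[%*s]\n", 8, "SN1");		/* [     SN1] */
	return 0;
}
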
+diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
+index 3481479a2942f..d6a7c896ac866 100644
+--- a/drivers/platform/x86/amd-pmc.c
++++ b/drivers/platform/x86/amd-pmc.c
+@@ -71,7 +71,7 @@
+ #define AMD_CPU_ID_YC			0x14B5
+ 
+ #define PMC_MSG_DELAY_MIN_US		100
+-#define RESPONSE_REGISTER_LOOP_MAX	200
++#define RESPONSE_REGISTER_LOOP_MAX	20000
+ 
+ #define SOC_SUBSYSTEM_IP_MAX	12
+ #define DELAY_MIN_US		2000
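
Raising RESPONSE_REGISTER_LOOP_MAX from 200 to 20000 stretches the worst-case wait to iterations times the per-iteration delay (20000 x 100 us = 2 s). The loop shape, with a hypothetical register read standing in for the hardware access:

#include <stdio.h>
#include <unistd.h>

#define DELAY_MIN_US	100
#define LOOP_MAX	20000	/* worst case: 20000 * 100 us = 2 s */

static int read_response(void) { return 1; }	/* hypothetical */

int main(void)
{
	int i;

	for (i = 0; i < LOOP_MAX; i++) {
		if (read_response())
			break;
		usleep(DELAY_MIN_US);
	}
	if (i < LOOP_MAX)
		printf("ready after %d polls\n", i);
	else
		printf("timed out\n");
	return 0;
}
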
+diff --git a/drivers/platform/x86/dell/Kconfig b/drivers/platform/x86/dell/Kconfig
+index 9e7314d90bea8..1e3da9700005e 100644
+--- a/drivers/platform/x86/dell/Kconfig
++++ b/drivers/platform/x86/dell/Kconfig
+@@ -166,8 +166,7 @@ config DELL_WMI
+ 
+ config DELL_WMI_PRIVACY
+ 	bool "Dell WMI Hardware Privacy Support"
+-	depends on DELL_WMI
+-	depends on LEDS_TRIGGER_AUDIO
++	depends on LEDS_TRIGGER_AUDIO = y || DELL_WMI = LEDS_TRIGGER_AUDIO
+ 	help
+ 	  This option adds integration with the "Dell Hardware Privacy"
+ 	  feature of Dell laptops to the dell-wmi driver.
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index f58b8543f6ac5..66bb39fd0ef90 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -8,7 +8,6 @@
+  * which provide mailbox interface for power management usage.
+  */
+ 
+-#include <linux/acpi.h>
+ #include <linux/bitops.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+@@ -319,7 +318,7 @@ static struct platform_driver intel_punit_ipc_driver = {
+ 	.remove = intel_punit_ipc_remove,
+ 	.driver = {
+ 		.name = "intel_punit_ipc",
+-		.acpi_match_table = ACPI_PTR(punit_ipc_acpi_ids),
++		.acpi_match_table = punit_ipc_acpi_ids,
+ 	},
+ };
+ 
+diff --git a/drivers/regulator/max14577-regulator.c b/drivers/regulator/max14577-regulator.c
+index 1d78b455cc48c..e34face736f48 100644
+--- a/drivers/regulator/max14577-regulator.c
++++ b/drivers/regulator/max14577-regulator.c
+@@ -269,5 +269,3 @@ module_exit(max14577_regulator_exit);
+ MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>");
+ MODULE_DESCRIPTION("Maxim 14577/77836 regulator driver");
+ MODULE_LICENSE("GPL");
+-MODULE_ALIAS("platform:max14577-regulator");
+-MODULE_ALIAS("platform:max77836-regulator");
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index 6cca910a76ded..7f458d510483f 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -991,7 +991,7 @@ static const struct rpmh_vreg_init_data pm8009_1_vreg_data[] = {
+ 	RPMH_VREG("ldo4",   "ldo%s4",  &pmic5_nldo,      "vdd-l4"),
+ 	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_pldo,      "vdd-l5-l6"),
+ 	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_pldo,      "vdd-l5-l6"),
+-	RPMH_VREG("ldo7",   "ldo%s6",  &pmic5_pldo_lv,   "vdd-l7"),
++	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_pldo_lv,   "vdd-l7"),
+ 	{}
+ };
+ 
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 62f88ccbd03f8..51f7f4e680c34 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -207,6 +207,9 @@ static void qeth_clear_working_pool_list(struct qeth_card *card)
+ 				 &card->qdio.in_buf_pool.entry_list, list)
+ 		list_del(&pool_entry->list);
+ 
++	if (!queue)
++		return;
++
+ 	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
+ 		queue->bufs[i].pool_entry = NULL;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index eb88aaaf36eb3..c34a7f7446013 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -6022,7 +6022,8 @@ lpfc_sg_seg_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
+ 		       phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);
+ 
+-	len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
++	len += scnprintf(buf + len, PAGE_SIZE - len,
++			"Cfg: %d  SCSI: %d  NVME: %d\n",
+ 			phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
+ 			phba->cfg_nvme_seg_cnt);
+ 	return len;
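
The lpfc fix passes the remaining space, PAGE_SIZE - len, to the second scnprintf() instead of the full buffer size again. The same pattern in userspace (note snprintf() can return more than it wrote when truncating, unlike the kernel's scnprintf(), so a production accumulator would clamp the return value):

#include <stdio.h>

#define BUFSZ 64

int main(void)
{
	char buf[BUFSZ];
	int len = 0;

	len += snprintf(buf + len, BUFSZ - len, "SGL sz: %d\n", 4096);
	/* remaining space shrinks as len grows - never pass BUFSZ again */
	len += snprintf(buf + len, BUFSZ - len, "Cfg: %d SCSI: %d\n", 64, 64);
	printf("%s", buf);
	return 0;
}
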
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index f8f471157109e..70b507d177f14 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -7014,7 +7014,8 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 				return 0;
+ 			break;
+ 		case QLA2XXX_INI_MODE_DUAL:
+-			if (!qla_dual_mode_enabled(vha))
++			if (!qla_dual_mode_enabled(vha) &&
++			    !qla_ini_mode_enabled(vha))
+ 				return 0;
+ 			break;
+ 		case QLA2XXX_INI_MODE_ENABLED:
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index d8b05d8b54708..922e4c7bd88e4 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -441,9 +441,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	struct iscsi_transport *t = iface->transport;
+ 	int param = -1;
+ 
+-	if (attr == &dev_attr_iface_enabled.attr)
+-		param = ISCSI_NET_PARAM_IFACE_ENABLE;
+-	else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
++	if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
+ 		param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
+ 	else if (attr == &dev_attr_iface_header_digest.attr)
+ 		param = ISCSI_IFACE_PARAM_HDRDGST_EN;
+@@ -483,7 +481,9 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	if (param != -1)
+ 		return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
+ 
+-	if (attr == &dev_attr_iface_vlan_id.attr)
++	if (attr == &dev_attr_iface_enabled.attr)
++		param = ISCSI_NET_PARAM_IFACE_ENABLE;
++	else if (attr == &dev_attr_iface_vlan_id.attr)
+ 		param = ISCSI_NET_PARAM_VLAN_ID;
+ 	else if (attr == &dev_attr_iface_vlan_priority.attr)
+ 		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 186b5ff52c3ab..06ee1f045e976 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -154,8 +154,8 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ 
+ 	/*
+ 	 * Report zone buffer size should be at most 64B times the number of
+-	 * zones requested plus the 64B reply header, but should be at least
+-	 * SECTOR_SIZE for ATA devices.
++	 * zones requested plus the 64B reply header, but should be aligned
++	 * to SECTOR_SIZE for ATA devices.
+ 	 * Make sure that this size does not exceed the hardware capabilities.
+ 	 * Furthermore, since the report zone command cannot be split, make
+ 	 * sure that the allocated buffer can always be mapped by limiting the
+@@ -174,7 +174,7 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ 			*buflen = bufsize;
+ 			return buf;
+ 		}
+-		bufsize >>= 1;
++		bufsize = rounddown(bufsize >> 1, SECTOR_SIZE);
+ 	}
+ 
+ 	return NULL;
+@@ -280,7 +280,7 @@ static void sd_zbc_update_wp_offset_workfn(struct work_struct *work)
+ {
+ 	struct scsi_disk *sdkp;
+ 	unsigned long flags;
+-	unsigned int zno;
++	sector_t zno;
+ 	int ret;
+ 
+ 	sdkp = container_of(work, struct scsi_disk, zone_wp_offset_work);
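
The sd_zbc change keeps the report buffer sector-aligned while halving it on allocation failure, instead of a bare right shift that could yield an unaligned size. A sketch of the shrink loop, with a made-up starting size:

#include <stdio.h>

#define SECTOR_SIZE	512u
#define rounddown(x, y)	(((x) / (y)) * (y))

int main(void)
{
	unsigned int bufsize = 9728;	/* hypothetical starting size */

	while (bufsize >= SECTOR_SIZE) {
		printf("try %u bytes\n", bufsize);
		/* halve, but keep the result sector-aligned */
		bufsize = rounddown(bufsize >> 1, SECTOR_SIZE);
	}
	return 0;
}
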
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 15ac5fa148058..3a204324151a8 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -2112,6 +2112,7 @@ static inline
+ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
+ {
+ 	struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
++	unsigned long flags;
+ 
+ 	lrbp->issue_time_stamp = ktime_get();
+ 	lrbp->compl_time_stamp = ktime_set(0, 0);
+@@ -2120,19 +2121,10 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
+ 	ufshcd_clk_scaling_start_busy(hba);
+ 	if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+ 		ufshcd_start_monitor(hba, lrbp);
+-	if (ufshcd_has_utrlcnr(hba)) {
+-		set_bit(task_tag, &hba->outstanding_reqs);
+-		ufshcd_writel(hba, 1 << task_tag,
+-			      REG_UTP_TRANSFER_REQ_DOOR_BELL);
+-	} else {
+-		unsigned long flags;
+-
+-		spin_lock_irqsave(hba->host->host_lock, flags);
+-		set_bit(task_tag, &hba->outstanding_reqs);
+-		ufshcd_writel(hba, 1 << task_tag,
+-			      REG_UTP_TRANSFER_REQ_DOOR_BELL);
+-		spin_unlock_irqrestore(hba->host->host_lock, flags);
+-	}
++	spin_lock_irqsave(hba->host->host_lock, flags);
++	set_bit(task_tag, &hba->outstanding_reqs);
++	ufshcd_writel(hba, 1 << task_tag, REG_UTP_TRANSFER_REQ_DOOR_BELL);
++	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 	/* Make sure that doorbell is committed immediately */
+ 	wmb();
+ }
+@@ -5237,10 +5229,12 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
+ /**
+  * __ufshcd_transfer_req_compl - handle SCSI and query command completion
+  * @hba: per adapter instance
+- * @completed_reqs: requests to complete
++ * @completed_reqs: bitmask that indicates which requests to complete
++ * @retry_requests: whether to ask the SCSI core to retry completed requests
+  */
+ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
+-					unsigned long completed_reqs)
++					unsigned long completed_reqs,
++					bool retry_requests)
+ {
+ 	struct ufshcd_lrb *lrbp;
+ 	struct scsi_cmnd *cmd;
+@@ -5258,7 +5252,8 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
+ 			if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+ 				ufshcd_update_monitor(hba, lrbp);
+ 			ufshcd_add_command_trace(hba, index, UFS_CMD_COMP);
+-			result = ufshcd_transfer_rsp_status(hba, lrbp);
++			result = retry_requests ? DID_BUS_BUSY << 16 :
++				ufshcd_transfer_rsp_status(hba, lrbp);
+ 			scsi_dma_unmap(cmd);
+ 			cmd->result = result;
+ 			/* Mark completed command as NULL in LRB */
+@@ -5282,17 +5277,19 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
+ }
+ 
+ /**
+- * ufshcd_trc_handler - handle transfer requests completion
++ * ufshcd_transfer_req_compl - handle SCSI and query command completion
+  * @hba: per adapter instance
+- * @use_utrlcnr: get completed requests from UTRLCNR
++ * @retry_requests: whether or not to ask to retry requests
+  *
+  * Returns
+  *  IRQ_HANDLED - If interrupt is valid
+  *  IRQ_NONE    - If invalid interrupt
+  */
+-static irqreturn_t ufshcd_trc_handler(struct ufs_hba *hba, bool use_utrlcnr)
++static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba,
++					     bool retry_requests)
+ {
+-	unsigned long completed_reqs = 0;
++	unsigned long completed_reqs, flags;
++	u32 tr_doorbell;
+ 
+ 	/* Resetting interrupt aggregation counters first and reading the
+ 	 * DOOR_BELL afterward allows us to handle all the completed requests.
+@@ -5305,27 +5302,14 @@ static irqreturn_t ufshcd_trc_handler(struct ufs_hba *hba, bool use_utrlcnr)
+ 	    !(hba->quirks & UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR))
+ 		ufshcd_reset_intr_aggr(hba);
+ 
+-	if (use_utrlcnr) {
+-		u32 utrlcnr;
+-
+-		utrlcnr = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_LIST_COMPL);
+-		if (utrlcnr) {
+-			ufshcd_writel(hba, utrlcnr,
+-				      REG_UTP_TRANSFER_REQ_LIST_COMPL);
+-			completed_reqs = utrlcnr;
+-		}
+-	} else {
+-		unsigned long flags;
+-		u32 tr_doorbell;
+-
+-		spin_lock_irqsave(hba->host->host_lock, flags);
+-		tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
+-		completed_reqs = tr_doorbell ^ hba->outstanding_reqs;
+-		spin_unlock_irqrestore(hba->host->host_lock, flags);
+-	}
++	spin_lock_irqsave(hba->host->host_lock, flags);
++	tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
++	completed_reqs = tr_doorbell ^ hba->outstanding_reqs;
++	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 
+ 	if (completed_reqs) {
+-		__ufshcd_transfer_req_compl(hba, completed_reqs);
++		__ufshcd_transfer_req_compl(hba, completed_reqs,
++					    retry_requests);
+ 		return IRQ_HANDLED;
+ 	} else {
+ 		return IRQ_NONE;
+@@ -5804,7 +5788,13 @@ out:
+ /* Complete requests that have door-bell cleared */
+ static void ufshcd_complete_requests(struct ufs_hba *hba)
+ {
+-	ufshcd_trc_handler(hba, false);
++	ufshcd_transfer_req_compl(hba, /*retry_requests=*/false);
++	ufshcd_tmc_handler(hba);
++}
++
++static void ufshcd_retry_aborted_requests(struct ufs_hba *hba)
++{
++	ufshcd_transfer_req_compl(hba, /*retry_requests=*/true);
+ 	ufshcd_tmc_handler(hba);
+ }
+ 
+@@ -6146,8 +6136,7 @@ static void ufshcd_err_handler(struct work_struct *work)
+ 	}
+ 
+ lock_skip_pending_xfer_clear:
+-	/* Complete the requests that are cleared by s/w */
+-	ufshcd_complete_requests(hba);
++	ufshcd_retry_aborted_requests(hba);
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+ 	hba->silence_err_logs = false;
+@@ -6445,7 +6434,7 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
+ 		retval |= ufshcd_tmc_handler(hba);
+ 
+ 	if (intr_status & UTP_TRANSFER_REQ_COMPL)
+-		retval |= ufshcd_trc_handler(hba, ufshcd_has_utrlcnr(hba));
++		retval |= ufshcd_transfer_req_compl(hba, /*retry_requests=*/false);
+ 
+ 	return retval;
+ }
+@@ -6869,7 +6858,7 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)
+ 			err = ufshcd_clear_cmd(hba, pos);
+ 			if (err)
+ 				break;
+-			__ufshcd_transfer_req_compl(hba, pos);
++			__ufshcd_transfer_req_compl(hba, 1U << pos, false);
+ 		}
+ 	}
+ 
+@@ -7040,7 +7029,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
+ 		dev_err(hba->dev,
+ 		"%s: cmd was completed, but without a notifying intr, tag = %d",
+ 		__func__, tag);
+-		__ufshcd_transfer_req_compl(hba, 1UL << tag);
++		__ufshcd_transfer_req_compl(hba, 1UL << tag, /*retry_requests=*/false);
+ 		goto release;
+ 	}
+ 
+@@ -7105,7 +7094,7 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
+ 	 */
+ 	ufshcd_hba_stop(hba);
+ 	hba->silence_err_logs = true;
+-	ufshcd_complete_requests(hba);
++	ufshcd_retry_aborted_requests(hba);
+ 	hba->silence_err_logs = false;
+ 
+ 	/* scale up clocks to max frequency before full reinitialization */
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 194755c9ddfeb..86d4765a17b83 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -1160,11 +1160,6 @@ static inline u32 ufshcd_vops_get_ufs_hci_version(struct ufs_hba *hba)
+ 	return ufshcd_readl(hba, REG_UFS_VERSION);
+ }
+ 
+-static inline bool ufshcd_has_utrlcnr(struct ufs_hba *hba)
+-{
+-	return (hba->ufs_version >= ufshci_version(3, 0));
+-}
+-
+ static inline int ufshcd_vops_clk_scale_notify(struct ufs_hba *hba,
+ 			bool up, enum ufs_notify_change_status status)
+ {
+diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
+index 5affb1fce5ad0..de95be5d11d4e 100644
+--- a/drivers/scsi/ufs/ufshci.h
++++ b/drivers/scsi/ufs/ufshci.h
+@@ -39,7 +39,6 @@ enum {
+ 	REG_UTP_TRANSFER_REQ_DOOR_BELL		= 0x58,
+ 	REG_UTP_TRANSFER_REQ_LIST_CLEAR		= 0x5C,
+ 	REG_UTP_TRANSFER_REQ_LIST_RUN_STOP	= 0x60,
+-	REG_UTP_TRANSFER_REQ_LIST_COMPL		= 0x64,
+ 	REG_UTP_TASK_REQ_LIST_BASE_L		= 0x70,
+ 	REG_UTP_TASK_REQ_LIST_BASE_H		= 0x74,
+ 	REG_UTP_TASK_REQ_DOOR_BELL		= 0x78,
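
With the UTRLCNR register path removed across these three ufs files, completions are derived by XORing a doorbell snapshot with the driver's outstanding-request bitmap: a bit set in outstanding but already clear in the doorbell is a finished tag. Illustrated with plain bitmasks:

#include <stdio.h>

int main(void)
{
	unsigned long outstanding = 0x0b;	/* tags 0, 1, 3 issued */
	unsigned long doorbell    = 0x02;	/* only tag 1 still running */
	unsigned long completed   = doorbell ^ outstanding;

	for (int tag = 0; tag < 4; tag++)
		if (completed & (1UL << tag))
			printf("tag %d completed\n", tag);	/* 0 and 3 */
	return 0;
}
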
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 6a726c95ac7a8..dc1a6899ba3b2 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1206,7 +1206,7 @@ static int tegra_slink_resume(struct device *dev)
+ }
+ #endif
+ 
+-static int tegra_slink_runtime_suspend(struct device *dev)
++static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev)
+ {
+ 	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct tegra_slink_data *tspi = spi_master_get_devdata(master);
+@@ -1218,7 +1218,7 @@ static int tegra_slink_runtime_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int tegra_slink_runtime_resume(struct device *dev)
++static int __maybe_unused tegra_slink_runtime_resume(struct device *dev)
+ {
+ 	struct spi_master *master = dev_get_drvdata(dev);
+ 	struct tegra_slink_data *tspi = spi_master_get_devdata(master);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index e4dc593b1f32a..f95f7666cb5b7 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -58,10 +58,6 @@ modalias_show(struct device *dev, struct device_attribute *a, char *buf)
+ 	const struct spi_device	*spi = to_spi_device(dev);
+ 	int len;
+ 
+-	len = of_device_modalias(dev, buf, PAGE_SIZE);
+-	if (len != -ENODEV)
+-		return len;
+-
+ 	len = acpi_device_modalias(dev, buf, PAGE_SIZE - 1);
+ 	if (len != -ENODEV)
+ 		return len;
+@@ -367,10 +363,6 @@ static int spi_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 	const struct spi_device		*spi = to_spi_device(dev);
+ 	int rc;
+ 
+-	rc = of_device_uevent_modalias(dev, env);
+-	if (rc != -ENODEV)
+-		return rc;
+-
+ 	rc = acpi_device_uevent_modalias(dev, env);
+ 	if (rc != -ENODEV)
+ 		return rc;
+diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
+index 73f01ed1e5b72..a943fce322be8 100644
+--- a/drivers/staging/greybus/uart.c
++++ b/drivers/staging/greybus/uart.c
+@@ -761,6 +761,17 @@ out:
+ 	gbphy_runtime_put_autosuspend(gb_tty->gbphy_dev);
+ }
+ 
++static void gb_tty_port_destruct(struct tty_port *port)
++{
++	struct gb_tty *gb_tty = container_of(port, struct gb_tty, port);
++
++	if (gb_tty->minor != GB_NUM_MINORS)
++		release_minor(gb_tty);
++	kfifo_free(&gb_tty->write_fifo);
++	kfree(gb_tty->buffer);
++	kfree(gb_tty);
++}
++
+ static const struct tty_operations gb_ops = {
+ 	.install =		gb_tty_install,
+ 	.open =			gb_tty_open,
+@@ -786,6 +797,7 @@ static const struct tty_port_operations gb_port_ops = {
+ 	.dtr_rts =		gb_tty_dtr_rts,
+ 	.activate =		gb_tty_port_activate,
+ 	.shutdown =		gb_tty_port_shutdown,
++	.destruct =		gb_tty_port_destruct,
+ };
+ 
+ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+@@ -798,17 +810,11 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	int retval;
+ 	int minor;
+ 
+-	gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
+-	if (!gb_tty)
+-		return -ENOMEM;
+-
+ 	connection = gb_connection_create(gbphy_dev->bundle,
+ 					  le16_to_cpu(gbphy_dev->cport_desc->id),
+ 					  gb_uart_request_handler);
+-	if (IS_ERR(connection)) {
+-		retval = PTR_ERR(connection);
+-		goto exit_tty_free;
+-	}
++	if (IS_ERR(connection))
++		return PTR_ERR(connection);
+ 
+ 	max_payload = gb_operation_get_payload_size_max(connection);
+ 	if (max_payload < sizeof(struct gb_uart_send_data_request)) {
+@@ -816,13 +822,23 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 		goto exit_connection_destroy;
+ 	}
+ 
++	gb_tty = kzalloc(sizeof(*gb_tty), GFP_KERNEL);
++	if (!gb_tty) {
++		retval = -ENOMEM;
++		goto exit_connection_destroy;
++	}
++
++	tty_port_init(&gb_tty->port);
++	gb_tty->port.ops = &gb_port_ops;
++	gb_tty->minor = GB_NUM_MINORS;
++
+ 	gb_tty->buffer_payload_max = max_payload -
+ 			sizeof(struct gb_uart_send_data_request);
+ 
+ 	gb_tty->buffer = kzalloc(gb_tty->buffer_payload_max, GFP_KERNEL);
+ 	if (!gb_tty->buffer) {
+ 		retval = -ENOMEM;
+-		goto exit_connection_destroy;
++		goto exit_put_port;
+ 	}
+ 
+ 	INIT_WORK(&gb_tty->tx_work, gb_uart_tx_write_work);
+@@ -830,7 +846,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	retval = kfifo_alloc(&gb_tty->write_fifo, GB_UART_WRITE_FIFO_SIZE,
+ 			     GFP_KERNEL);
+ 	if (retval)
+-		goto exit_buf_free;
++		goto exit_put_port;
+ 
+ 	gb_tty->credits = GB_UART_FIRMWARE_CREDITS;
+ 	init_completion(&gb_tty->credits_complete);
+@@ -844,7 +860,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 		} else {
+ 			retval = minor;
+ 		}
+-		goto exit_kfifo_free;
++		goto exit_put_port;
+ 	}
+ 
+ 	gb_tty->minor = minor;
+@@ -853,9 +869,6 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 	init_waitqueue_head(&gb_tty->wioctl);
+ 	mutex_init(&gb_tty->mutex);
+ 
+-	tty_port_init(&gb_tty->port);
+-	gb_tty->port.ops = &gb_port_ops;
+-
+ 	gb_tty->connection = connection;
+ 	gb_tty->gbphy_dev = gbphy_dev;
+ 	gb_connection_set_data(connection, gb_tty);
+@@ -863,7 +876,7 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 
+ 	retval = gb_connection_enable_tx(connection);
+ 	if (retval)
+-		goto exit_release_minor;
++		goto exit_put_port;
+ 
+ 	send_control(gb_tty, gb_tty->ctrlout);
+ 
+@@ -890,16 +903,10 @@ static int gb_uart_probe(struct gbphy_device *gbphy_dev,
+ 
+ exit_connection_disable:
+ 	gb_connection_disable(connection);
+-exit_release_minor:
+-	release_minor(gb_tty);
+-exit_kfifo_free:
+-	kfifo_free(&gb_tty->write_fifo);
+-exit_buf_free:
+-	kfree(gb_tty->buffer);
++exit_put_port:
++	tty_port_put(&gb_tty->port);
+ exit_connection_destroy:
+ 	gb_connection_destroy(connection);
+-exit_tty_free:
+-	kfree(gb_tty);
+ 
+ 	return retval;
+ }
+@@ -930,15 +937,10 @@ static void gb_uart_remove(struct gbphy_device *gbphy_dev)
+ 	gb_connection_disable_rx(connection);
+ 	tty_unregister_device(gb_tty_driver, gb_tty->minor);
+ 
+-	/* FIXME - free transmit / receive buffers */
+-
+ 	gb_connection_disable(connection);
+-	tty_port_destroy(&gb_tty->port);
+ 	gb_connection_destroy(connection);
+-	release_minor(gb_tty);
+-	kfifo_free(&gb_tty->write_fifo);
+-	kfree(gb_tty->buffer);
+-	kfree(gb_tty);
++
++	tty_port_put(&gb_tty->port);
+ }
+ 
+ static int gb_tty_init(void)
+diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
+index 102ec644bc8a0..023bd4516a681 100644
+--- a/drivers/target/target_core_configfs.c
++++ b/drivers/target/target_core_configfs.c
+@@ -1110,20 +1110,24 @@ static ssize_t alua_support_store(struct config_item *item,
+ {
+ 	struct se_dev_attrib *da = to_attrib(item);
+ 	struct se_device *dev = da->da_dev;
+-	bool flag;
++	bool flag, oldflag;
+ 	int ret;
+ 
++	ret = strtobool(page, &flag);
++	if (ret < 0)
++		return ret;
++
++	oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_ALUA);
++	if (flag == oldflag)
++		return count;
++
+ 	if (!(dev->transport->transport_flags_changeable &
+ 	      TRANSPORT_FLAG_PASSTHROUGH_ALUA)) {
+ 		pr_err("dev[%p]: Unable to change SE Device alua_support:"
+ 			" alua_support has fixed value\n", dev);
+-		return -EINVAL;
++		return -ENOSYS;
+ 	}
+ 
+-	ret = strtobool(page, &flag);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (flag)
+ 		dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_ALUA;
+ 	else
+@@ -1145,20 +1149,24 @@ static ssize_t pgr_support_store(struct config_item *item,
+ {
+ 	struct se_dev_attrib *da = to_attrib(item);
+ 	struct se_device *dev = da->da_dev;
+-	bool flag;
++	bool flag, oldflag;
+ 	int ret;
+ 
++	ret = strtobool(page, &flag);
++	if (ret < 0)
++		return ret;
++
++	oldflag = !(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR);
++	if (flag == oldflag)
++		return count;
++
+ 	if (!(dev->transport->transport_flags_changeable &
+ 	      TRANSPORT_FLAG_PASSTHROUGH_PGR)) {
+ 		pr_err("dev[%p]: Unable to change SE Device pgr_support:"
+ 			" pgr_support has fixed value\n", dev);
+-		return -EINVAL;
++		return -ENOSYS;
+ 	}
+ 
+-	ret = strtobool(page, &flag);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (flag)
+ 		dev->transport_flags &= ~TRANSPORT_FLAG_PASSTHROUGH_PGR;
+ 	else
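
Both configfs stores above now parse the input first, treat a write of the current value as a successful no-op, and only then reject immutable attributes, returning -ENOSYS rather than -EINVAL. A sketch of that ordering (the flag name and the changeable capability are stand-ins; note "enabled" corresponds to the passthrough bit being clear, hence the negation):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define FLAG_ALUA 0x1

static unsigned int flags;			/* current state */
static const bool changeable = false;		/* hypothetical capability */

static int alua_store(bool want)
{
	bool cur = !(flags & FLAG_ALUA);	/* enabled == bit clear */

	if (want == cur)		/* no-op writes always succeed */
		return 0;
	if (!changeable)		/* only a real change can fail */
		return -ENOSYS;
	if (want)
		flags &= ~FLAG_ALUA;
	else
		flags |= FLAG_ALUA;
	return 0;
}

int main(void)
{
	printf("%d\n", alua_store(true));	/* 0: matches current value */
	printf("%d\n", alua_store(false));	/* -ENOSYS: change refused */
	return 0;
}
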
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 0f0038af2ad48..fb64acfd5e07d 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -107,7 +107,7 @@ static int tcc_offset_update(unsigned int tcc)
+ 	return 0;
+ }
+ 
+-static unsigned int tcc_offset_save;
++static int tcc_offset_save = -1;
+ 
+ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 				struct device_attribute *attr, const char *buf,
+@@ -352,7 +352,8 @@ int proc_thermal_resume(struct device *dev)
+ 	proc_dev = dev_get_drvdata(dev);
+ 	proc_thermal_read_ppcc(proc_dev);
+ 
+-	tcc_offset_update(tcc_offset_save);
++	if (tcc_offset_save >= 0)
++		tcc_offset_update(tcc_offset_save);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 97ef9b040b84a..51374f4e1ccaf 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -222,15 +222,14 @@ int thermal_build_list_of_policies(char *buf)
+ {
+ 	struct thermal_governor *pos;
+ 	ssize_t count = 0;
+-	ssize_t size = PAGE_SIZE;
+ 
+ 	mutex_lock(&thermal_governor_lock);
+ 
+ 	list_for_each_entry(pos, &thermal_governor_list, governor_list) {
+-		size = PAGE_SIZE - count;
+-		count += scnprintf(buf + count, size, "%s ", pos->name);
++		count += scnprintf(buf + count, PAGE_SIZE - count, "%s ",
++				   pos->name);
+ 	}
+-	count += scnprintf(buf + count, size, "\n");
++	count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ 
+ 	mutex_unlock(&thermal_governor_lock);
+ 
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index b6c731a267d26..7223e22c4b886 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -106,7 +106,7 @@
+ #define UART_OMAP_EFR2_TIMEOUT_BEHAVE	BIT(6)
+ 
+ /* RX FIFO occupancy indicator */
+-#define UART_OMAP_RX_LVL		0x64
++#define UART_OMAP_RX_LVL		0x19
+ 
+ struct omap8250_priv {
+ 	int line;
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 231de29a64521..ab226da75f7ba 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -163,7 +163,7 @@ static unsigned int mvebu_uart_tx_empty(struct uart_port *port)
+ 	st = readl(port->membase + UART_STAT);
+ 	spin_unlock_irqrestore(&port->lock, flags);
+ 
+-	return (st & STAT_TX_FIFO_EMP) ? TIOCSER_TEMT : 0;
++	return (st & STAT_TX_EMP) ? TIOCSER_TEMT : 0;
+ }
+ 
+ static unsigned int mvebu_uart_get_mctrl(struct uart_port *port)
+diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c
+index 5bb928b7873e7..2f5fbd7db7cac 100644
+--- a/drivers/tty/synclink_gt.c
++++ b/drivers/tty/synclink_gt.c
+@@ -438,8 +438,8 @@ static void reset_tbufs(struct slgt_info *info);
+ static void tdma_reset(struct slgt_info *info);
+ static bool tx_load(struct slgt_info *info, const char *buf, unsigned int count);
+ 
+-static void get_signals(struct slgt_info *info);
+-static void set_signals(struct slgt_info *info);
++static void get_gtsignals(struct slgt_info *info);
++static void set_gtsignals(struct slgt_info *info);
+ static void set_rate(struct slgt_info *info, u32 data_rate);
+ 
+ static void bh_transmit(struct slgt_info *info);
+@@ -720,7 +720,7 @@ static void set_termios(struct tty_struct *tty, struct ktermios *old_termios)
+ 	if ((old_termios->c_cflag & CBAUD) && !C_BAUD(tty)) {
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+ 		spin_lock_irqsave(&info->lock,flags);
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ 
+@@ -730,7 +730,7 @@ static void set_termios(struct tty_struct *tty, struct ktermios *old_termios)
+ 		if (!C_CRTSCTS(tty) || !tty_throttled(tty))
+ 			info->signals |= SerialSignal_RTS;
+ 		spin_lock_irqsave(&info->lock,flags);
+-	 	set_signals(info);
++	 	set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ 
+@@ -1181,7 +1181,7 @@ static inline void line_info(struct seq_file *m, struct slgt_info *info)
+ 
+ 	/* output current serial signal states */
+ 	spin_lock_irqsave(&info->lock,flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 
+ 	stat_buf[0] = 0;
+@@ -1281,7 +1281,7 @@ static void throttle(struct tty_struct * tty)
+ 	if (C_CRTSCTS(tty)) {
+ 		spin_lock_irqsave(&info->lock,flags);
+ 		info->signals &= ~SerialSignal_RTS;
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ }
+@@ -1306,7 +1306,7 @@ static void unthrottle(struct tty_struct * tty)
+ 	if (C_CRTSCTS(tty)) {
+ 		spin_lock_irqsave(&info->lock,flags);
+ 		info->signals |= SerialSignal_RTS;
+-		set_signals(info);
++		set_gtsignals(info);
+ 		spin_unlock_irqrestore(&info->lock,flags);
+ 	}
+ }
+@@ -1477,7 +1477,7 @@ static int hdlcdev_open(struct net_device *dev)
+ 
+ 	/* inform generic HDLC layer of current DCD status */
+ 	spin_lock_irqsave(&info->lock, flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock, flags);
+ 	if (info->signals & SerialSignal_DCD)
+ 		netif_carrier_on(dev);
+@@ -2232,7 +2232,7 @@ static void isr_txeom(struct slgt_info *info, unsigned short status)
+ 		if (info->params.mode != MGSL_MODE_ASYNC && info->drop_rts_on_tx_done) {
+ 			info->signals &= ~SerialSignal_RTS;
+ 			info->drop_rts_on_tx_done = false;
+-			set_signals(info);
++			set_gtsignals(info);
+ 		}
+ 
+ #if SYNCLINK_GENERIC_HDLC
+@@ -2397,7 +2397,7 @@ static void shutdown(struct slgt_info *info)
+ 
+  	if (!info->port.tty || info->port.tty->termios.c_cflag & HUPCL) {
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-		set_signals(info);
++		set_gtsignals(info);
+ 	}
+ 
+ 	flush_cond_wait(&info->gpio_wait_q);
+@@ -2425,7 +2425,7 @@ static void program_hw(struct slgt_info *info)
+ 	else
+ 		async_mode(info);
+ 
+-	set_signals(info);
++	set_gtsignals(info);
+ 
+ 	info->dcd_chkcount = 0;
+ 	info->cts_chkcount = 0;
+@@ -2433,7 +2433,7 @@ static void program_hw(struct slgt_info *info)
+ 	info->dsr_chkcount = 0;
+ 
+ 	slgt_irq_on(info, IRQ_DCD | IRQ_CTS | IRQ_DSR | IRQ_RI);
+-	get_signals(info);
++	get_gtsignals(info);
+ 
+ 	if (info->netcount ||
+ 	    (info->port.tty && info->port.tty->termios.c_cflag & CREAD))
+@@ -2670,7 +2670,7 @@ static int wait_mgsl_event(struct slgt_info *info, int __user *mask_ptr)
+ 	spin_lock_irqsave(&info->lock,flags);
+ 
+ 	/* return immediately if state matches requested events */
+-	get_signals(info);
++	get_gtsignals(info);
+ 	s = info->signals;
+ 
+ 	events = mask &
+@@ -3088,7 +3088,7 @@ static int tiocmget(struct tty_struct *tty)
+  	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+- 	get_signals(info);
++ 	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 
+ 	result = ((info->signals & SerialSignal_RTS) ? TIOCM_RTS:0) +
+@@ -3127,7 +3127,7 @@ static int tiocmset(struct tty_struct *tty,
+ 		info->signals &= ~SerialSignal_DTR;
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+-	set_signals(info);
++	set_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 	return 0;
+ }
+@@ -3138,7 +3138,7 @@ static int carrier_raised(struct tty_port *port)
+ 	struct slgt_info *info = container_of(port, struct slgt_info, port);
+ 
+ 	spin_lock_irqsave(&info->lock,flags);
+-	get_signals(info);
++	get_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ 	return (info->signals & SerialSignal_DCD) ? 1 : 0;
+ }
+@@ -3153,7 +3153,7 @@ static void dtr_rts(struct tty_port *port, int on)
+ 		info->signals |= SerialSignal_RTS | SerialSignal_DTR;
+ 	else
+ 		info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-	set_signals(info);
++	set_gtsignals(info);
+ 	spin_unlock_irqrestore(&info->lock,flags);
+ }
+ 
+@@ -3951,10 +3951,10 @@ static void tx_start(struct slgt_info *info)
+ 
+ 		if (info->params.mode != MGSL_MODE_ASYNC) {
+ 			if (info->params.flags & HDLC_FLAG_AUTO_RTS) {
+-				get_signals(info);
++				get_gtsignals(info);
+ 				if (!(info->signals & SerialSignal_RTS)) {
+ 					info->signals |= SerialSignal_RTS;
+-					set_signals(info);
++					set_gtsignals(info);
+ 					info->drop_rts_on_tx_done = true;
+ 				}
+ 			}
+@@ -4008,7 +4008,7 @@ static void reset_port(struct slgt_info *info)
+ 	rx_stop(info);
+ 
+ 	info->signals &= ~(SerialSignal_RTS | SerialSignal_DTR);
+-	set_signals(info);
++	set_gtsignals(info);
+ 
+ 	slgt_irq_off(info, IRQ_ALL | IRQ_MASTER);
+ }
+@@ -4430,7 +4430,7 @@ static void tx_set_idle(struct slgt_info *info)
+ /*
+  * get state of V24 status (input) signals
+  */
+-static void get_signals(struct slgt_info *info)
++static void get_gtsignals(struct slgt_info *info)
+ {
+ 	unsigned short status = rd_reg16(info, SSR);
+ 
+@@ -4492,7 +4492,7 @@ static void msc_set_vcr(struct slgt_info *info)
+ /*
+  * set state of V24 control (output) signals
+  */
+-static void set_signals(struct slgt_info *info)
++static void set_gtsignals(struct slgt_info *info)
+ {
+ 	unsigned char val = rd_reg8(info, VCR);
+ 	if (info->signals & SerialSignal_DTR)
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 5d8c982019afc..1f3b4a1422126 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -1100,6 +1100,19 @@ static int cdns3_ep_run_stream_transfer(struct cdns3_endpoint *priv_ep,
+ 	return 0;
+ }
+ 
++static void cdns3_rearm_drdy_if_needed(struct cdns3_endpoint *priv_ep)
++{
++	struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
++
++	if (priv_dev->dev_ver < DEV_VER_V3)
++		return;
++
++	if (readl(&priv_dev->regs->ep_sts) & EP_STS_TRBERR) {
++		writel(EP_STS_TRBERR, &priv_dev->regs->ep_sts);
++		writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
++	}
++}
++
+ /**
+  * cdns3_ep_run_transfer - start transfer on no-default endpoint hardware
+  * @priv_ep: endpoint object
+@@ -1351,6 +1364,7 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 		/*clearing TRBERR and EP_STS_DESCMIS before seting DRDY*/
+ 		/* clearing TRBERR and EP_STS_DESCMIS before setting DRDY */
+ 		writel(EP_STS_TRBERR | EP_STS_DESCMIS, &priv_dev->regs->ep_sts);
+ 		writel(EP_CMD_DRDY, &priv_dev->regs->ep_cmd);
++		cdns3_rearm_drdy_if_needed(priv_ep);
+ 		trace_cdns3_doorbell_epx(priv_ep->name,
+ 					 readl(&priv_dev->regs->ep_traddr));
+ 	}
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 4895325b16a46..5b90d0979c607 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -726,7 +726,8 @@ static void acm_port_destruct(struct tty_port *port)
+ {
+ 	struct acm *acm = container_of(port, struct acm, port);
+ 
+-	acm_release_minor(acm);
++	if (acm->minor != ACM_MINOR_INVALID)
++		acm_release_minor(acm);
+ 	usb_put_intf(acm->control);
+ 	kfree(acm->country_codes);
+ 	kfree(acm);
+@@ -1323,8 +1324,10 @@ made_compressed_probe:
+ 	usb_get_intf(acm->control); /* undone in destruct() */
+ 
+ 	minor = acm_alloc_minor(acm);
+-	if (minor < 0)
++	if (minor < 0) {
++		acm->minor = ACM_MINOR_INVALID;
+ 		goto err_put_port;
++	}
+ 
+ 	acm->minor = minor;
+ 	acm->dev = usb_dev;
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index 8aef5eb769a0d..3aa7f0a3ad71e 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -22,6 +22,8 @@
+ #define ACM_TTY_MAJOR		166
+ #define ACM_TTY_MINORS		256
+ 
++#define ACM_MINOR_INVALID	ACM_TTY_MINORS
++
+ /*
+  * Requests.
+  */
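
The cdc-acm fix presets the minor to an invalid sentinel so the tty_port destructor can run safely on a partially initialized device and skip releasing a minor that was never allocated. The shape of the idiom:

#include <stdio.h>
#include <stdlib.h>

#define MINOR_INVALID 256	/* one past the last valid minor */

struct acm { int minor; };

static void destruct(struct acm *a)
{
	if (a->minor != MINOR_INVALID)
		printf("release minor %d\n", a->minor);
	free(a);
}

int main(void)
{
	struct acm *a = malloc(sizeof(*a));

	if (!a)
		return 1;
	a->minor = MINOR_INVALID;	/* safe to destruct from now on */
	/* if allocating a real minor fails later, destruct() skips release */
	destruct(a);
	return 0;
}
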
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 0f8b7c93310ea..99ff2d23be05e 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2775,6 +2775,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
++	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2935,13 +2936,26 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
++	/* starting here, usbcore will pay attention to the shared HCD roothub */
++	shared_hcd = hcd->shared_hcd;
++	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
++		retval = register_root_hub(shared_hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
++
++		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
++			usb_hcd_poll_rh_status(shared_hcd);
++	}
++
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	retval = register_root_hub(hcd);
+-	if (retval != 0)
+-		goto err_register_root_hub;
++	if (!HCD_DEFER_RH_REGISTER(hcd)) {
++		retval = register_root_hub(hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
+ 
+-	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-		usb_hcd_poll_rh_status(hcd);
++		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++			usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ 
+@@ -2985,6 +2999,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
++	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -2995,6 +3010,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
++	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -3004,7 +3020,8 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	if (rh_registered)
++		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
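
usb_remove_hcd() now snapshots rh_registered under the spinlock before clearing it and disconnects the root hub only if the snapshot was set, which keeps removal safe when registration was deferred and never happened. A sketch with a pthread mutex standing in for the spinlock:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool rh_registered;	/* set by a (hypothetical) register path */

static void remove_hcd(void)
{
	bool was_registered;

	pthread_mutex_lock(&lock);
	was_registered = rh_registered;		/* snapshot under the lock */
	rh_registered = false;
	pthread_mutex_unlock(&lock);

	if (was_registered)
		printf("disconnect root hub\n");
	else
		printf("never registered, nothing to disconnect\n");
}

int main(void)
{
	remove_hcd();
	return 0;
}
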
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 3146df6e6510d..8f7ee70f5bdcf 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -115,10 +115,16 @@ static inline bool using_desc_dma(struct dwc2_hsotg *hsotg)
+  */
+ static inline void dwc2_gadget_incr_frame_num(struct dwc2_hsotg_ep *hs_ep)
+ {
++	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
++
+ 	hs_ep->target_frame += hs_ep->interval;
+-	if (hs_ep->target_frame > DSTS_SOFFN_LIMIT) {
++	if (hs_ep->target_frame > limit) {
+ 		hs_ep->frame_overrun = true;
+-		hs_ep->target_frame &= DSTS_SOFFN_LIMIT;
++		hs_ep->target_frame &= limit;
+ 	} else {
+ 		hs_ep->frame_overrun = false;
+ 	}
+@@ -136,10 +142,16 @@ static inline void dwc2_gadget_incr_frame_num(struct dwc2_hsotg_ep *hs_ep)
+  */
+ static inline void dwc2_gadget_dec_frame_num_by_one(struct dwc2_hsotg_ep *hs_ep)
+ {
++	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
++
+ 	if (hs_ep->target_frame)
+ 		hs_ep->target_frame -= 1;
+ 	else
+-		hs_ep->target_frame = DSTS_SOFFN_LIMIT;
++		hs_ep->target_frame = limit;
+ }
+ 
+ /**
+@@ -1018,6 +1030,12 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 	dwc2_writel(hsotg, ctrl, depctl);
+ }
+ 
++static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep);
++static void dwc2_hsotg_complete_request(struct dwc2_hsotg *hsotg,
++					struct dwc2_hsotg_ep *hs_ep,
++				       struct dwc2_hsotg_req *hs_req,
++				       int result);
++
+ /**
+  * dwc2_hsotg_start_req - start a USB request from an endpoint's queue
+  * @hsotg: The controller state.
+@@ -1170,14 +1188,19 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
+ 		}
+ 	}
+ 
+-	if (hs_ep->isochronous && hs_ep->interval == 1) {
+-		hs_ep->target_frame = dwc2_hsotg_read_frameno(hsotg);
+-		dwc2_gadget_incr_frame_num(hs_ep);
+-
+-		if (hs_ep->target_frame & 0x1)
+-			ctrl |= DXEPCTL_SETODDFR;
+-		else
+-			ctrl |= DXEPCTL_SETEVENFR;
++	if (hs_ep->isochronous) {
++		if (!dwc2_gadget_target_frame_elapsed(hs_ep)) {
++			if (hs_ep->interval == 1) {
++				if (hs_ep->target_frame & 0x1)
++					ctrl |= DXEPCTL_SETODDFR;
++				else
++					ctrl |= DXEPCTL_SETEVENFR;
++			}
++			ctrl |= DXEPCTL_CNAK;
++		} else {
++			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
++			return;
++		}
+ 	}
+ 
+ 	ctrl |= DXEPCTL_EPENA;	/* ensure ep enabled */
+@@ -1325,12 +1348,16 @@ static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep)
+ 	u32 target_frame = hs_ep->target_frame;
+ 	u32 current_frame = hsotg->frame_number;
+ 	bool frame_overrun = hs_ep->frame_overrun;
++	u16 limit = DSTS_SOFFN_LIMIT;
++
++	if (hsotg->gadget.speed != USB_SPEED_HIGH)
++		limit >>= 3;
+ 
+ 	if (!frame_overrun && current_frame >= target_frame)
+ 		return true;
+ 
+ 	if (frame_overrun && current_frame >= target_frame &&
+-	    ((current_frame - target_frame) < DSTS_SOFFN_LIMIT / 2))
++	    ((current_frame - target_frame) < limit / 2))
+ 		return true;
+ 
+ 	return false;
+@@ -1713,11 +1740,9 @@ static struct dwc2_hsotg_req *get_ep_head(struct dwc2_hsotg_ep *hs_ep)
+  */
+ static void dwc2_gadget_start_next_request(struct dwc2_hsotg_ep *hs_ep)
+ {
+-	u32 mask;
+ 	struct dwc2_hsotg *hsotg = hs_ep->parent;
+ 	int dir_in = hs_ep->dir_in;
+ 	struct dwc2_hsotg_req *hs_req;
+-	u32 epmsk_reg = dir_in ? DIEPMSK : DOEPMSK;
+ 
+ 	if (!list_empty(&hs_ep->queue)) {
+ 		hs_req = get_ep_head(hs_ep);
+@@ -1733,9 +1758,6 @@ static void dwc2_gadget_start_next_request(struct dwc2_hsotg_ep *hs_ep)
+ 	} else {
+ 		dev_dbg(hsotg->dev, "%s: No more ISOC-OUT requests\n",
+ 			__func__);
+-		mask = dwc2_readl(hsotg, epmsk_reg);
+-		mask |= DOEPMSK_OUTTKNEPDISMSK;
+-		dwc2_writel(hsotg, mask, epmsk_reg);
+ 	}
+ }
+ 
+@@ -2305,19 +2327,6 @@ static void dwc2_hsotg_ep0_zlp(struct dwc2_hsotg *hsotg, bool dir_in)
+ 	dwc2_hsotg_program_zlp(hsotg, hsotg->eps_out[0]);
+ }
+ 
+-static void dwc2_hsotg_change_ep_iso_parity(struct dwc2_hsotg *hsotg,
+-					    u32 epctl_reg)
+-{
+-	u32 ctrl;
+-
+-	ctrl = dwc2_readl(hsotg, epctl_reg);
+-	if (ctrl & DXEPCTL_EOFRNUM)
+-		ctrl |= DXEPCTL_SETEVENFR;
+-	else
+-		ctrl |= DXEPCTL_SETODDFR;
+-	dwc2_writel(hsotg, ctrl, epctl_reg);
+-}
+-
+ /*
+  * dwc2_gadget_get_xfersize_ddma - get transferred bytes amount from desc
+  * @hs_ep - The endpoint on which transfer went
+@@ -2438,20 +2447,11 @@ static void dwc2_hsotg_handle_outdone(struct dwc2_hsotg *hsotg, int epnum)
+ 			dwc2_hsotg_ep0_zlp(hsotg, true);
+ 	}
+ 
+-	/*
+-	 * Slave mode OUT transfers do not go through XferComplete so
+-	 * adjust the ISOC parity here.
+-	 */
+-	if (!using_dma(hsotg)) {
+-		if (hs_ep->isochronous && hs_ep->interval == 1)
+-			dwc2_hsotg_change_ep_iso_parity(hsotg, DOEPCTL(epnum));
+-		else if (hs_ep->isochronous && hs_ep->interval > 1)
+-			dwc2_gadget_incr_frame_num(hs_ep);
+-	}
+-
+ 	/* Set actual frame number for completed transfers */
+-	if (!using_desc_dma(hsotg) && hs_ep->isochronous)
+-		req->frame_number = hsotg->frame_number;
++	if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
++		req->frame_number = hs_ep->target_frame;
++		dwc2_gadget_incr_frame_num(hs_ep);
++	}
+ 
+ 	dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, result);
+ }
+@@ -2765,6 +2765,12 @@ static void dwc2_hsotg_complete_in(struct dwc2_hsotg *hsotg,
+ 		return;
+ 	}
+ 
++	/* Set actual frame number for completed transfers */
++	if (!using_desc_dma(hsotg) && hs_ep->isochronous) {
++		hs_req->req.frame_number = hs_ep->target_frame;
++		dwc2_gadget_incr_frame_num(hs_ep);
++	}
++
+ 	dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+ }
+ 
+@@ -2825,23 +2831,18 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 		dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
+ 
+-		if (hs_ep->isochronous) {
+-			dwc2_hsotg_complete_in(hsotg, hs_ep);
+-			return;
+-		}
+-
+ 		if ((epctl & DXEPCTL_STALL) && (epctl & DXEPCTL_EPTYPE_BULK)) {
+ 			int dctl = dwc2_readl(hsotg, DCTL);
+ 
+ 			dctl |= DCTL_CGNPINNAK;
+ 			dwc2_writel(hsotg, dctl, DCTL);
+ 		}
+-		return;
+-	}
++	} else {
+ 
+-	if (dctl & DCTL_GOUTNAKSTS) {
+-		dctl |= DCTL_CGOUTNAK;
+-		dwc2_writel(hsotg, dctl, DCTL);
++		if (dctl & DCTL_GOUTNAKSTS) {
++			dctl |= DCTL_CGOUTNAK;
++			dwc2_writel(hsotg, dctl, DCTL);
++		}
+ 	}
+ 
+ 	if (!hs_ep->isochronous)
+@@ -2862,8 +2863,6 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ 		/* Update current frame number value. */
+ 		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+ 	} while (dwc2_gadget_target_frame_elapsed(hs_ep));
+-
+-	dwc2_gadget_start_next_request(hs_ep);
+ }
+ 
+ /**
+@@ -2880,8 +2879,8 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
+ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ {
+ 	struct dwc2_hsotg *hsotg = ep->parent;
++	struct dwc2_hsotg_req *hs_req;
+ 	int dir_in = ep->dir_in;
+-	u32 doepmsk;
+ 
+ 	if (dir_in || !ep->isochronous)
+ 		return;
+@@ -2895,28 +2894,39 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ 		return;
+ 	}
+ 
+-	if (ep->interval > 1 &&
+-	    ep->target_frame == TARGET_FRAME_INITIAL) {
++	if (ep->target_frame == TARGET_FRAME_INITIAL) {
+ 		u32 ctrl;
+ 
+ 		ep->target_frame = hsotg->frame_number;
+-		dwc2_gadget_incr_frame_num(ep);
++		if (ep->interval > 1) {
++			ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
++			if (ep->target_frame & 0x1)
++				ctrl |= DXEPCTL_SETODDFR;
++			else
++				ctrl |= DXEPCTL_SETEVENFR;
+ 
+-		ctrl = dwc2_readl(hsotg, DOEPCTL(ep->index));
+-		if (ep->target_frame & 0x1)
+-			ctrl |= DXEPCTL_SETODDFR;
+-		else
+-			ctrl |= DXEPCTL_SETEVENFR;
++			dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
++		}
++	}
++
++	while (dwc2_gadget_target_frame_elapsed(ep)) {
++		hs_req = get_ep_head(ep);
++		if (hs_req)
++			dwc2_hsotg_complete_request(hsotg, ep, hs_req, -ENODATA);
+ 
+-		dwc2_writel(hsotg, ctrl, DOEPCTL(ep->index));
++		dwc2_gadget_incr_frame_num(ep);
++		/* Update current frame number value. */
++		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
+ 	}
+ 
+-	dwc2_gadget_start_next_request(ep);
+-	doepmsk = dwc2_readl(hsotg, DOEPMSK);
+-	doepmsk &= ~DOEPMSK_OUTTKNEPDISMSK;
+-	dwc2_writel(hsotg, doepmsk, DOEPMSK);
++	if (!ep->req)
++		dwc2_gadget_start_next_request(ep);
++
+ }
+ 
++static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
++				   struct dwc2_hsotg_ep *hs_ep);
++
+ /**
+  * dwc2_gadget_handle_nak - handle NAK interrupt
+  * @hs_ep: The endpoint on which interrupt is asserted.
+@@ -2934,7 +2944,9 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
+ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
+ {
+ 	struct dwc2_hsotg *hsotg = hs_ep->parent;
++	struct dwc2_hsotg_req *hs_req;
+ 	int dir_in = hs_ep->dir_in;
++	u32 ctrl;
+ 
+ 	if (!dir_in || !hs_ep->isochronous)
+ 		return;
+@@ -2976,13 +2988,29 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
+ 
+ 			dwc2_writel(hsotg, ctrl, DIEPCTL(hs_ep->index));
+ 		}
+-
+-		dwc2_hsotg_complete_request(hsotg, hs_ep,
+-					    get_ep_head(hs_ep), 0);
+ 	}
+ 
+-	if (!using_desc_dma(hsotg))
++	if (using_desc_dma(hsotg))
++		return;
++
++	ctrl = dwc2_readl(hsotg, DIEPCTL(hs_ep->index));
++	if (ctrl & DXEPCTL_EPENA)
++		dwc2_hsotg_ep_stop_xfr(hsotg, hs_ep);
++	else
++		dwc2_hsotg_txfifo_flush(hsotg, hs_ep->fifo_index);
++
++	while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
++		hs_req = get_ep_head(hs_ep);
++		if (hs_req)
++			dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, -ENODATA);
++
+ 		dwc2_gadget_incr_frame_num(hs_ep);
++		/* Update current frame number value. */
++		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
++	}
++
++	if (!hs_ep->req)
++		dwc2_gadget_start_next_request(hs_ep);
+ }
+ 
+ /**
+@@ -3038,21 +3066,15 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
+ 
+ 		/* In DDMA handle isochronous requests separately */
+ 		if (using_desc_dma(hsotg) && hs_ep->isochronous) {
+-			/* XferCompl set along with BNA */
+-			if (!(ints & DXEPINT_BNAINTR))
+-				dwc2_gadget_complete_isoc_request_ddma(hs_ep);
++			dwc2_gadget_complete_isoc_request_ddma(hs_ep);
+ 		} else if (dir_in) {
+ 			/*
+ 			 * We get OutDone from the FIFO, so we only
+ 			 * need to look at completing IN requests here
+ 			 * if operating slave mode
+ 			 */
+-			if (hs_ep->isochronous && hs_ep->interval > 1)
+-				dwc2_gadget_incr_frame_num(hs_ep);
+-
+-			dwc2_hsotg_complete_in(hsotg, hs_ep);
+-			if (ints & DXEPINT_NAKINTRPT)
+-				ints &= ~DXEPINT_NAKINTRPT;
++			if (!hs_ep->isochronous || !(ints & DXEPINT_NAKINTRPT))
++				dwc2_hsotg_complete_in(hsotg, hs_ep);
+ 
+ 			if (idx == 0 && !hs_ep->req)
+ 				dwc2_hsotg_enqueue_setup(hsotg);
+@@ -3061,10 +3083,8 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
+ 			 * We're using DMA, we need to fire an OutDone here
+ 			 * as we ignore the RXFIFO.
+ 			 */
+-			if (hs_ep->isochronous && hs_ep->interval > 1)
+-				dwc2_gadget_incr_frame_num(hs_ep);
+-
+-			dwc2_hsotg_handle_outdone(hsotg, idx);
++			if (!hs_ep->isochronous || !(ints & DXEPINT_OUTTKNEPDIS))
++				dwc2_hsotg_handle_outdone(hsotg, idx);
+ 		}
+ 	}
+ 
+@@ -4083,6 +4103,7 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 			mask |= DIEPMSK_NAKMSK;
+ 			dwc2_writel(hsotg, mask, DIEPMSK);
+ 		} else {
++			epctrl |= DXEPCTL_SNAK;
+ 			mask = dwc2_readl(hsotg, DOEPMSK);
+ 			mask |= DOEPMSK_OUTTKNEPDISMSK;
+ 			dwc2_writel(hsotg, mask, DOEPMSK);
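
Several dwc2 hunks above replace the fixed DSTS_SOFFN_LIMIT with a limit shifted right by 3 below high speed, since the frame counter is eight times smaller there; increments then mask against that limit so the counter wraps correctly. A sketch of the masked wraparound (the 14-bit width is an assumption for illustration):

#include <stdio.h>

#define SOFFN_LIMIT 0x3FFF	/* hypothetical 14-bit frame counter */

static unsigned int incr_frame(unsigned int frame, unsigned int interval,
			       unsigned int limit)
{
	frame += interval;
	if (frame > limit)
		frame &= limit;		/* wrap, keeping only valid bits */
	return frame;
}

int main(void)
{
	unsigned int limit_hs = SOFFN_LIMIT;		/* high speed */
	unsigned int limit_fs = SOFFN_LIMIT >> 3;	/* 8x smaller */

	printf("hs: %#x\n", incr_frame(0x3FFE, 4, limit_hs));	/* wraps */
	printf("fs: %#x\n", incr_frame(0x7FE, 4, limit_fs));	/* wraps */
	return 0;
}
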
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index ba74ad7f6995e..2522d15c42447 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -264,19 +264,6 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ {
+ 	u32		reg;
+ 	int		retries = 1000;
+-	int		ret;
+-
+-	usb_phy_init(dwc->usb2_phy);
+-	usb_phy_init(dwc->usb3_phy);
+-	ret = phy_init(dwc->usb2_generic_phy);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = phy_init(dwc->usb3_generic_phy);
+-	if (ret < 0) {
+-		phy_exit(dwc->usb2_generic_phy);
+-		return ret;
+-	}
+ 
+ 	/*
+ 	 * We're resetting only the device side because, if we're in host mode,
+@@ -310,9 +297,6 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 			udelay(1);
+ 	} while (--retries);
+ 
+-	phy_exit(dwc->usb3_generic_phy);
+-	phy_exit(dwc->usb2_generic_phy);
+-
+ 	return -ETIMEDOUT;
+ 
+ done:
+@@ -982,9 +966,21 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 		dwc->phys_ready = true;
+ 	}
+ 
++	usb_phy_init(dwc->usb2_phy);
++	usb_phy_init(dwc->usb3_phy);
++	ret = phy_init(dwc->usb2_generic_phy);
++	if (ret < 0)
++		goto err0a;
++
++	ret = phy_init(dwc->usb3_generic_phy);
++	if (ret < 0) {
++		phy_exit(dwc->usb2_generic_phy);
++		goto err0a;
++	}
++
+ 	ret = dwc3_core_soft_reset(dwc);
+ 	if (ret)
+-		goto err0a;
++		goto err1;
+ 
+ 	if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD &&
+ 	    !DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) {
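The dwc3 hunks above move PHY initialization out of dwc3_core_soft_reset() and into dwc3_core_init(), adding a proper unwind path (err0a/err1). The shape of that error handling is the usual kernel goto-unwind idiom; below is a minimal standalone sketch of the same structure, with all demo_* names invented for illustration:

	static int demo_init(struct demo *d)
	{
		int ret;

		ret = demo_phy_init(d);		/* was hidden inside the soft reset */
		if (ret)
			return ret;

		ret = demo_soft_reset(d);
		if (ret)
			goto err_phy_exit;	/* PHYs are already up: unwind them */

		return 0;

	err_phy_exit:
		demo_phy_exit(d);
		return ret;
	}

Each successfully initialized layer gets a label of its own, and the labels tear things down in reverse order of setup.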
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index ae29ff2b2b686..37c94031af1ed 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -348,6 +348,14 @@ static struct usb_endpoint_descriptor ss_epin_fback_desc = {
+ 	.bInterval = 4,
+ };
+ 
++static struct usb_ss_ep_comp_descriptor ss_epin_fback_desc_comp = {
++	.bLength		= sizeof(ss_epin_fback_desc_comp),
++	.bDescriptorType	= USB_DT_SS_ENDPOINT_COMP,
++	.bMaxBurst		= 0,
++	.bmAttributes		= 0,
++	.wBytesPerInterval	= cpu_to_le16(4),
++};
++
+ 
+ /* Audio Streaming IN Interface - Alt0 */
+ static struct usb_interface_descriptor std_as_in_if0_desc = {
+@@ -527,6 +535,7 @@ static struct usb_descriptor_header *ss_audio_desc[] = {
+ 	(struct usb_descriptor_header *)&ss_epout_desc_comp,
+ 	(struct usb_descriptor_header *)&as_iso_out_desc,
+ 	(struct usb_descriptor_header *)&ss_epin_fback_desc,
++	(struct usb_descriptor_header *)&ss_epin_fback_desc_comp,
+ 
+ 	(struct usb_descriptor_header *)&std_as_in_if0_desc,
+ 	(struct usb_descriptor_header *)&std_as_in_if1_desc,
+@@ -604,6 +613,7 @@ static void setup_headers(struct f_uac2_opts *opts,
+ {
+ 	struct usb_ss_ep_comp_descriptor *epout_desc_comp = NULL;
+ 	struct usb_ss_ep_comp_descriptor *epin_desc_comp = NULL;
++	struct usb_ss_ep_comp_descriptor *epin_fback_desc_comp = NULL;
+ 	struct usb_endpoint_descriptor *epout_desc;
+ 	struct usb_endpoint_descriptor *epin_desc;
+ 	struct usb_endpoint_descriptor *epin_fback_desc;
+@@ -626,6 +636,7 @@ static void setup_headers(struct f_uac2_opts *opts,
+ 		epout_desc_comp = &ss_epout_desc_comp;
+ 		epin_desc_comp = &ss_epin_desc_comp;
+ 		epin_fback_desc = &ss_epin_fback_desc;
++		epin_fback_desc_comp = &ss_epin_fback_desc_comp;
+ 	}
+ 
+ 	i = 0;
+@@ -654,8 +665,11 @@ static void setup_headers(struct f_uac2_opts *opts,
+ 
+ 		headers[i++] = USBDHDR(&as_iso_out_desc);
+ 
+-		if (EPOUT_FBACK_IN_EN(opts))
++		if (EPOUT_FBACK_IN_EN(opts)) {
+ 			headers[i++] = USBDHDR(epin_fback_desc);
++			if (epin_fback_desc_comp)
++				headers[i++] = USBDHDR(epin_fback_desc_comp);
++		}
+ 	}
+ 	if (EPIN_EN(opts)) {
+ 		headers[i++] = USBDHDR(&std_as_in_if0_desc);
+@@ -937,6 +951,9 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ 	agdev->out_ep_maxpsize = max_t(u16, agdev->out_ep_maxpsize,
+ 				le16_to_cpu(ss_epout_desc.wMaxPacketSize));
+ 
++	ss_epin_desc_comp.wBytesPerInterval = ss_epin_desc.wMaxPacketSize;
++	ss_epout_desc_comp.wBytesPerInterval = ss_epout_desc.wMaxPacketSize;
++
+ 	hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
+ 	hs_epin_fback_desc.bEndpointAddress = fs_epin_fback_desc.bEndpointAddress;
+ 	hs_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 9e5c950612d06..b1aef892bfa38 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -76,11 +76,13 @@ static const struct snd_pcm_hardware uac_pcm_hardware = {
+ };
+ 
+ static void u_audio_set_fback_frequency(enum usb_device_speed speed,
++					struct usb_ep *out_ep,
+ 					unsigned long long freq,
+ 					unsigned int pitch,
+ 					void *buf)
+ {
+ 	u32 ff = 0;
++	const struct usb_endpoint_descriptor *ep_desc;
+ 
+ 	/*
+ 	 * Because the pitch base is 1000000, the final divider here
+@@ -108,8 +110,13 @@ static void u_audio_set_fback_frequency(enum usb_device_speed speed,
+ 		 * byte format (that is Q16.16)
+ 		 *
+ 		 * ff = (freq << 16) / 8000
++		 *
++		 * Win10 and OSX UAC2 drivers require the number of samples per packet
++		 * in order to honor the feedback value.
++		 * Linux snd-usb-audio detects the applied bit-shift automatically.
+ 		 */
+-		freq <<= 4;
++		ep_desc = out_ep->desc;
++		freq <<= 4 + (ep_desc->bInterval - 1);
+ 	}
+ 
+ 	ff = DIV_ROUND_CLOSEST_ULL((freq * pitch), 1953125);
+@@ -247,7 +254,7 @@ static void u_audio_iso_fback_complete(struct usb_ep *ep,
+ 		pr_debug("%s: iso_complete status(%d) %d/%d\n",
+ 			__func__, status, req->actual, req->length);
+ 
+-	u_audio_set_fback_frequency(audio_dev->gadget->speed,
++	u_audio_set_fback_frequency(audio_dev->gadget->speed, audio_dev->out_ep,
+ 				    params->c_srate, prm->pitch,
+ 				    req->buf);
+ 
+@@ -506,7 +513,7 @@ int u_audio_start_capture(struct g_audio *audio_dev)
+ 	 * be measured at start of playback
+ 	 */
+ 	prm->pitch = 1000000;
+-	u_audio_set_fback_frequency(audio_dev->gadget->speed,
++	u_audio_set_fback_frequency(audio_dev->gadget->speed, ep,
+ 				    params->c_srate, prm->pitch,
+ 				    req_fback->buf);
+ 
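The comment in the u_audio.c hunk above compresses a fair bit of arithmetic: the feedback value is Q16.16 samples per packet, the 1953125 divider is (8000 * 1000000) >> 12, and the extra (bInterval - 1) shift converts samples-per-microframe into samples-per-packet for hosts (Win10, macOS) that expect the latter. A self-contained user-space sketch of the same computation, with hypothetical sample values:

	#include <stdint.h>
	#include <stdio.h>

	/* Q16.16 feedback as computed above:
	 * ff = ((freq << (4 + bInterval - 1)) * pitch) / 1953125,
	 * where pitch has base 1000000 and 1953125 == (8000 * 1000000) >> 12. */
	static uint32_t uac2_feedback(uint64_t freq_hz, uint64_t pitch, uint8_t bInterval)
	{
		uint64_t f = freq_hz << (4 + (bInterval - 1));

		/* rounds like DIV_ROUND_CLOSEST_ULL() */
		return (uint32_t)((f * pitch + 1953125 / 2) / 1953125);
	}

	int main(void)
	{
		/* 48 kHz, nominal pitch, bInterval = 4 (one packet per 2^3 microframes) */
		printf("0x%08x\n", uac2_feedback(48000, 1000000, 4)); /* 0x00300000 == 48.0 */
		return 0;
	}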
+diff --git a/drivers/usb/gadget/udc/r8a66597-udc.c b/drivers/usb/gadget/udc/r8a66597-udc.c
+index 65cae48834545..38e4d6b505a05 100644
+--- a/drivers/usb/gadget/udc/r8a66597-udc.c
++++ b/drivers/usb/gadget/udc/r8a66597-udc.c
+@@ -1250,7 +1250,7 @@ static void set_feature(struct r8a66597 *r8a66597, struct usb_ctrlrequest *ctrl)
+ 			do {
+ 				tmp = r8a66597_read(r8a66597, INTSTS0) & CTSQ;
+ 				udelay(1);
+-			} while (tmp != CS_IDST || timeout-- > 0);
++			} while (tmp != CS_IDST && timeout-- > 0);
+ 
+ 			if (tmp == CS_IDST)
+ 				r8a66597_bset(r8a66597,
+diff --git a/drivers/usb/host/bcma-hcd.c b/drivers/usb/host/bcma-hcd.c
+index 337b425dd4b04..2df52f75f6b3c 100644
+--- a/drivers/usb/host/bcma-hcd.c
++++ b/drivers/usb/host/bcma-hcd.c
+@@ -406,12 +406,9 @@ static int bcma_hcd_probe(struct bcma_device *core)
+ 		return -ENOMEM;
+ 	usb_dev->core = core;
+ 
+-	if (core->dev.of_node) {
++	if (core->dev.of_node)
+ 		usb_dev->gpio_desc = devm_gpiod_get(&core->dev, "vcc",
+ 						    GPIOD_OUT_HIGH);
+-		if (IS_ERR(usb_dev->gpio_desc))
+-			return PTR_ERR(usb_dev->gpio_desc);
+-	}
+ 
+ 	switch (core->id.id) {
+ 	case BCMA_CORE_USB20_HOST:
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 18a203c9011eb..4a1346e3de1b2 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -692,6 +692,7 @@ int xhci_run(struct usb_hcd *hcd)
+ 		if (ret)
+ 			xhci_free_command(xhci, command);
+ 	}
++	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
+diff --git a/drivers/usb/isp1760/isp1760-hcd.c b/drivers/usb/isp1760/isp1760-hcd.c
+index e517376c32917..cf13db3d1695d 100644
+--- a/drivers/usb/isp1760/isp1760-hcd.c
++++ b/drivers/usb/isp1760/isp1760-hcd.c
+@@ -251,7 +251,7 @@ static int isp1760_hcd_set_and_wait(struct usb_hcd *hcd, u32 field,
+ 	isp1760_hcd_set(hcd, field);
+ 
+ 	return regmap_field_read_poll_timeout(priv->fields[field], val,
+-					      val, 10, timeout_us);
++					      val, 0, timeout_us);
+ }
+ 
+ static int isp1760_hcd_set_and_wait_swap(struct usb_hcd *hcd, u32 field,
+@@ -263,7 +263,7 @@ static int isp1760_hcd_set_and_wait_swap(struct usb_hcd *hcd, u32 field,
+ 	isp1760_hcd_set(hcd, field);
+ 
+ 	return regmap_field_read_poll_timeout(priv->fields[field], val,
+-					      !val, 10, timeout_us);
++					      !val, 0, timeout_us);
+ }
+ 
+ static int isp1760_hcd_clear_and_wait(struct usb_hcd *hcd, u32 field,
+@@ -275,7 +275,7 @@ static int isp1760_hcd_clear_and_wait(struct usb_hcd *hcd, u32 field,
+ 	isp1760_hcd_clear(hcd, field);
+ 
+ 	return regmap_field_read_poll_timeout(priv->fields[field], val,
+-					      !val, 10, timeout_us);
++					      !val, 0, timeout_us);
+ }
+ 
+ static bool isp1760_hcd_is_set(struct usb_hcd *hcd, u32 field)
+diff --git a/drivers/usb/musb/tusb6010.c b/drivers/usb/musb/tusb6010.c
+index c429376922079..c968ecda42aa8 100644
+--- a/drivers/usb/musb/tusb6010.c
++++ b/drivers/usb/musb/tusb6010.c
+@@ -190,6 +190,7 @@ tusb_fifo_write_unaligned(void __iomem *fifo, const u8 *buf, u16 len)
+ 	}
+ 	if (len > 0) {
+ 		/* Write the rest 1 - 3 bytes to FIFO */
++		val = 0;
+ 		memcpy(&val, buf, len);
+ 		musb_writel(fifo, 0, val);
+ 	}
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index d48bed5782a5c..3aaf52d9985bd 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -233,6 +233,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */
+ 	{ USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */
+ 	{ USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */
++	{ USB_DEVICE(0x2184, 0x0030) }, /* GW Instek GDM-834x Digital Multimeter */
+ 	{ USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */
+ 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
+ 	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
+@@ -258,6 +259,7 @@ struct cp210x_serial_private {
+ 	speed_t			max_speed;
+ 	bool			use_actual_rate;
+ 	bool			no_flow_control;
++	bool			no_event_mode;
+ };
+ 
+ enum cp210x_event_state {
+@@ -1112,12 +1114,16 @@ static void cp210x_change_speed(struct tty_struct *tty,
+ 
+ static void cp210x_enable_event_mode(struct usb_serial_port *port)
+ {
++	struct cp210x_serial_private *priv = usb_get_serial_data(port->serial);
+ 	struct cp210x_port_private *port_priv = usb_get_serial_port_data(port);
+ 	int ret;
+ 
+ 	if (port_priv->event_mode)
+ 		return;
+ 
++	if (priv->no_event_mode)
++		return;
++
+ 	port_priv->event_state = ES_DATA;
+ 	port_priv->event_mode = true;
+ 
+@@ -2097,6 +2103,33 @@ static void cp210x_init_max_speed(struct usb_serial *serial)
+ 	priv->use_actual_rate = use_actual_rate;
+ }
+ 
++static void cp2102_determine_quirks(struct usb_serial *serial)
++{
++	struct cp210x_serial_private *priv = usb_get_serial_data(serial);
++	u8 *buf;
++	int ret;
++
++	buf = kmalloc(2, GFP_KERNEL);
++	if (!buf)
++		return;
++	/*
++	 * Some (possibly counterfeit) CP2102s do not support event-insertion
++	 * mode and respond differently to malformed vendor requests.
++	 * Specifically, they return one instead of two bytes when sent a
++	 * two-byte part-number request.
++	 */
++	ret = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
++			CP210X_VENDOR_SPECIFIC, REQTYPE_DEVICE_TO_HOST,
++			CP210X_GET_PARTNUM, 0, buf, 2, USB_CTRL_GET_TIMEOUT);
++	if (ret == 1) {
++		dev_dbg(&serial->interface->dev,
++				"device does not support event-insertion mode\n");
++		priv->no_event_mode = true;
++	}
++
++	kfree(buf);
++}
++
+ static int cp210x_get_fw_version(struct usb_serial *serial, u16 value)
+ {
+ 	struct cp210x_serial_private *priv = usb_get_serial_data(serial);
+@@ -2122,6 +2155,9 @@ static void cp210x_determine_quirks(struct usb_serial *serial)
+ 	int ret;
+ 
+ 	switch (priv->partnum) {
++	case CP210X_PARTNUM_CP2102:
++		cp2102_determine_quirks(serial);
++		break;
+ 	case CP210X_PARTNUM_CP2102N_QFN28:
+ 	case CP210X_PARTNUM_CP2102N_QFN24:
+ 	case CP210X_PARTNUM_CP2102N_QFN20:
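cp2102_determine_quirks() above fingerprints suspect CP2102s by the reply length to a two-byte part-number read. If it helps to see the probe outside the kernel, here is a hypothetical libusb equivalent; the 0xff/0x370b request constants mirror the driver's CP210X_VENDOR_SPECIFIC and CP210X_GET_PARTNUM defines and should be double-checked against cp210x.c, and 0x10c4:0xea60 are the stock CP2102 IDs:

	#include <libusb-1.0/libusb.h>
	#include <stdio.h>

	int main(void)
	{
		libusb_device_handle *h;
		unsigned char buf[2];
		int ret;

		if (libusb_init(NULL))
			return 1;
		h = libusb_open_device_with_vid_pid(NULL, 0x10c4, 0xea60);
		if (!h) {
			libusb_exit(NULL);
			return 1;
		}
		libusb_set_auto_detach_kernel_driver(h, 1); /* cp210x may own the port */

		ret = libusb_control_transfer(h,
				LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_VENDOR |
				LIBUSB_RECIPIENT_DEVICE,
				0xff /* vendor-specific */, 0x370b /* get part number */,
				0, buf, sizeof(buf), 1000);
		if (ret == 1)
			printf("one-byte reply: likely no event-insertion mode\n");
		else
			printf("reply length %d, part 0x%02x\n", ret, buf[0]);

		libusb_close(h);
		libusb_exit(NULL);
		return 0;
	}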
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index d7fe33ca73e4c..925067a7978d4 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -107,7 +107,6 @@
+ #define BANDB_DEVICE_ID_USOPTL4_2P       0xBC02
+ #define BANDB_DEVICE_ID_USOPTL4_4        0xAC44
+ #define BANDB_DEVICE_ID_USOPTL4_4P       0xBC03
+-#define BANDB_DEVICE_ID_USOPTL2_4        0xAC24
+ 
+ /* Interrupt Routine Defines    */
+ 
+@@ -186,7 +185,6 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_2P) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4) },
+ 	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL4_4P) },
+-	{ USB_DEVICE(USB_VENDOR_ID_BANDB, BANDB_DEVICE_ID_USOPTL2_4) },
+ 	{}			/* terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 29c765cc84957..6cfb5d33609fb 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1205,6 +1205,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff),	/* Telit FD980 */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1060, 0xff),	/* Telit LN920 (rmnet) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1061, 0xff),	/* Telit LN920 (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1062, 0xff),	/* Telit LN920 (RNDIS) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff),	/* Telit LN920 (ECM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1650,7 +1658,6 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0060, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0070, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0094, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff),
+@@ -2068,6 +2075,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(0x0489, 0xe0b5),						/* Foxconn T77W968 ESIM */
+ 	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe0db, 0xff),			/* Foxconn T99W265 MBIM */
++	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+ 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ 	{ USB_DEVICE(0x2cb7, 0x0104),						/* Fibocom NL678 series */
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index efa972be2ee34..c6b3fcf901805 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -416,9 +416,16 @@ UNUSUAL_DEV(  0x04cb, 0x0100, 0x0000, 0x2210,
+ 		USB_SC_UFI, USB_PR_DEVICE, NULL, US_FL_FIX_INQUIRY | US_FL_SINGLE_LUN),
+ 
+ /*
+- * Reported by Ondrej Zary <linux@rainbow-software.org>
++ * Reported by Ondrej Zary <linux@zary.sk>
+  * The device reports one sector more and breaks when that sector is accessed
++ * Firmware versions older than 2.6c (the latest one, and the only one that
++ * claims Linux support) also have broken tag handling
+  */
++UNUSUAL_DEV(  0x04ce, 0x0002, 0x0000, 0x026b,
++		"ScanLogic",
++		"SL11R-IDE",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_FIX_CAPACITY | US_FL_BULK_IGNORE_TAG),
+ UNUSUAL_DEV(  0x04ce, 0x0002, 0x026c, 0x026c,
+ 		"ScanLogic",
+ 		"SL11R-IDE",
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c35a6db993f1b..4051c8cd0cd8a 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -50,7 +50,7 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ 		"LaCie",
+ 		"Rugged USB3-FW",
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_IGNORE_UAS),
++		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 671c71245a7b2..43ebfe36ac276 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -43,6 +43,8 @@
+ #include <linux/sched.h>
+ #include <linux/cred.h>
+ #include <linux/errno.h>
++#include <linux/freezer.h>
++#include <linux/kthread.h>
+ #include <linux/mm.h>
+ #include <linux/memblock.h>
+ #include <linux/pagemap.h>
+@@ -115,7 +117,7 @@ static struct ctl_table xen_root[] = {
+ #define EXTENT_ORDER (fls(XEN_PFN_PER_PAGE) - 1)
+ 
+ /*
+- * balloon_process() state:
++ * balloon_thread() state:
+  *
+  * BP_DONE: done or nothing to do,
+  * BP_WAIT: wait to be rescheduled,
+@@ -130,6 +132,8 @@ enum bp_state {
+ 	BP_ECANCELED
+ };
+ 
++/* Main waiting point for xen-balloon thread. */
++static DECLARE_WAIT_QUEUE_HEAD(balloon_thread_wq);
+ 
+ static DEFINE_MUTEX(balloon_mutex);
+ 
+@@ -144,10 +148,6 @@ static xen_pfn_t frame_list[PAGE_SIZE / sizeof(xen_pfn_t)];
+ static LIST_HEAD(ballooned_pages);
+ static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
+ 
+-/* Main work function, always executed in process context. */
+-static void balloon_process(struct work_struct *work);
+-static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
+-
+ /* When ballooning out (allocating memory to return to Xen) we don't really
+    want the kernel to try too hard since that can trigger the oom killer. */
+ #define GFP_BALLOON \
+@@ -366,7 +366,7 @@ static void xen_online_page(struct page *page, unsigned int order)
+ static int xen_memory_notifier(struct notifier_block *nb, unsigned long val, void *v)
+ {
+ 	if (val == MEM_ONLINE)
+-		schedule_delayed_work(&balloon_worker, 0);
++		wake_up(&balloon_thread_wq);
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -491,18 +491,43 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+ }
+ 
+ /*
+- * As this is a work item it is guaranteed to run as a single instance only.
++ * Stop waiting if either state is not BP_EAGAIN and ballooning action is
++ * needed, or if the credit has changed while state is BP_EAGAIN.
++ */
++static bool balloon_thread_cond(enum bp_state state, long credit)
++{
++	if (state != BP_EAGAIN)
++		credit = 0;
++
++	return current_credit() != credit || kthread_should_stop();
++}
++
++/*
++ * As this is a kthread it is guaranteed to run as a single instance only.
+  * We may of course race updates of the target counts (which are protected
+  * by the balloon lock), or with changes to the Xen hard limit, but we will
+  * recover from these in time.
+  */
+-static void balloon_process(struct work_struct *work)
++static int balloon_thread(void *unused)
+ {
+ 	enum bp_state state = BP_DONE;
+ 	long credit;
++	unsigned long timeout;
++
++	set_freezable();
++	for (;;) {
++		if (state == BP_EAGAIN)
++			timeout = balloon_stats.schedule_delay * HZ;
++		else
++			timeout = 3600 * HZ;
++		credit = current_credit();
+ 
++		wait_event_freezable_timeout(balloon_thread_wq,
++			balloon_thread_cond(state, credit), timeout);
++
++		if (kthread_should_stop())
++			return 0;
+ 
+-	do {
+ 		mutex_lock(&balloon_mutex);
+ 
+ 		credit = current_credit();
+@@ -529,12 +554,7 @@ static void balloon_process(struct work_struct *work)
+ 		mutex_unlock(&balloon_mutex);
+ 
+ 		cond_resched();
+-
+-	} while (credit && state == BP_DONE);
+-
+-	/* Schedule more work if there is some still to be done. */
+-	if (state == BP_EAGAIN)
+-		schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ);
++	}
+ }
+ 
+ /* Resets the Xen limit, sets new target, and kicks off processing. */
+@@ -542,7 +562,7 @@ void balloon_set_new_target(unsigned long target)
+ {
+ 	/* No need for lock. Not read-modify-write updates. */
+ 	balloon_stats.target_pages = target;
+-	schedule_delayed_work(&balloon_worker, 0);
++	wake_up(&balloon_thread_wq);
+ }
+ EXPORT_SYMBOL_GPL(balloon_set_new_target);
+ 
+@@ -647,7 +667,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
+ 
+ 	/* The balloon may be too large now. Shrink it if needed. */
+ 	if (current_credit())
+-		schedule_delayed_work(&balloon_worker, 0);
++		wake_up(&balloon_thread_wq);
+ 
+ 	mutex_unlock(&balloon_mutex);
+ }
+@@ -679,6 +699,8 @@ static void __init balloon_add_region(unsigned long start_pfn,
+ 
+ static int __init balloon_init(void)
+ {
++	struct task_struct *task;
++
+ 	if (!xen_domain())
+ 		return -ENODEV;
+ 
+@@ -722,6 +744,12 @@ static int __init balloon_init(void)
+ 	}
+ #endif
+ 
++	task = kthread_run(balloon_thread, NULL, "xen-balloon");
++	if (IS_ERR(task)) {
++		pr_err("xen-balloon thread could not be started, ballooning will not work!\n");
++		return PTR_ERR(task);
++	}
++
+ 	/* Init the xen-balloon driver. */
+ 	xen_balloon_init();
+ 
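The balloon.c rework above replaces a self-rearming delayed work item with a dedicated freezable kthread parked on balloon_thread_wq. Stripped of the ballooning specifics, the wait pattern it adopts looks like this (a generic sketch, demo_* names invented):

	#include <linux/kthread.h>
	#include <linux/freezer.h>
	#include <linux/wait.h>

	static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
	static bool demo_work_pending;

	static int demo_thread(void *unused)
	{
		set_freezable();	/* let the freezer park us across suspend */

		for (;;) {
			/* Sleep until woken, a timeout elapses, or stop is
			 * requested; the condition is re-checked on each wakeup. */
			wait_event_freezable_timeout(demo_wq,
					demo_work_pending || kthread_should_stop(),
					3600 * HZ);

			if (kthread_should_stop())
				return 0;

			demo_work_pending = false;
			/* ... process one batch, then loop back to sleep ... */
		}
	}

Producers then call wake_up(&demo_wq) instead of schedule_delayed_work(), exactly as the hunks above convert balloon_set_new_target() and friends.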
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index ac829e63c5704..54ee54ae36bc8 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1077,9 +1077,9 @@ static struct dentry *afs_lookup(struct inode *dir, struct dentry *dentry,
+  */
+ static int afs_d_revalidate_rcu(struct dentry *dentry)
+ {
+-	struct afs_vnode *dvnode, *vnode;
++	struct afs_vnode *dvnode;
+ 	struct dentry *parent;
+-	struct inode *dir, *inode;
++	struct inode *dir;
+ 	long dir_version, de_version;
+ 
+ 	_enter("%p", dentry);
+@@ -1109,18 +1109,6 @@ static int afs_d_revalidate_rcu(struct dentry *dentry)
+ 			return -ECHILD;
+ 	}
+ 
+-	/* Check to see if the vnode referred to by the dentry still
+-	 * has a callback.
+-	 */
+-	if (d_really_is_positive(dentry)) {
+-		inode = d_inode_rcu(dentry);
+-		if (inode) {
+-			vnode = AFS_FS_I(inode);
+-			if (!afs_check_validity(vnode))
+-				return -ECHILD;
+-		}
+-	}
+-
+ 	return 1; /* Still valid */
+ }
+ 
+@@ -1156,17 +1144,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	if (IS_ERR(key))
+ 		key = NULL;
+ 
+-	if (d_really_is_positive(dentry)) {
+-		inode = d_inode(dentry);
+-		if (inode) {
+-			vnode = AFS_FS_I(inode);
+-			afs_validate(vnode, key);
+-			if (test_bit(AFS_VNODE_DELETED, &vnode->flags))
+-				goto out_bad;
+-		}
+-	}
+-
+-	/* lock down the parent dentry so we can peer at it */
++	/* Hold the parent dentry so we can peer at it */
+ 	parent = dget_parent(dentry);
+ 	dir = AFS_FS_I(d_inode(parent));
+ 
+@@ -1175,7 +1153,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 
+ 	if (test_bit(AFS_VNODE_DELETED, &dir->flags)) {
+ 		_debug("%pd: parent dir deleted", dentry);
+-		goto out_bad_parent;
++		goto not_found;
+ 	}
+ 
+ 	/* We only need to invalidate a dentry if the server's copy changed
+@@ -1201,12 +1179,12 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	case 0:
+ 		/* the filename maps to something */
+ 		if (d_really_is_negative(dentry))
+-			goto out_bad_parent;
++			goto not_found;
+ 		inode = d_inode(dentry);
+ 		if (is_bad_inode(inode)) {
+ 			printk("kAFS: afs_d_revalidate: %pd2 has bad inode\n",
+ 			       dentry);
+-			goto out_bad_parent;
++			goto not_found;
+ 		}
+ 
+ 		vnode = AFS_FS_I(inode);
+@@ -1228,9 +1206,6 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 			       dentry, fid.unique,
+ 			       vnode->fid.unique,
+ 			       vnode->vfs_inode.i_generation);
+-			write_seqlock(&vnode->cb_lock);
+-			set_bit(AFS_VNODE_DELETED, &vnode->flags);
+-			write_sequnlock(&vnode->cb_lock);
+ 			goto not_found;
+ 		}
+ 		goto out_valid;
+@@ -1245,7 +1220,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	default:
+ 		_debug("failed to iterate dir %pd: %d",
+ 		       parent, ret);
+-		goto out_bad_parent;
++		goto not_found;
+ 	}
+ 
+ out_valid:
+@@ -1256,16 +1231,9 @@ out_valid_noupdate:
+ 	_leave(" = 1 [valid]");
+ 	return 1;
+ 
+-	/* the dirent, if it exists, now points to a different vnode */
+ not_found:
+-	spin_lock(&dentry->d_lock);
+-	dentry->d_flags |= DCACHE_NFSFS_RENAMED;
+-	spin_unlock(&dentry->d_lock);
+-
+-out_bad_parent:
+ 	_debug("dropping dentry %pd2", dentry);
+ 	dput(parent);
+-out_bad:
+ 	key_put(key);
+ 
+ 	_leave(" = 0 [bad]");
+diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
+index f4600c1353adf..540b9fc96824a 100644
+--- a/fs/afs/dir_edit.c
++++ b/fs/afs/dir_edit.c
+@@ -263,7 +263,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
+ 		if (b == nr_blocks) {
+ 			_debug("init %u", b);
+ 			afs_edit_init_block(meta, block, b);
+-			i_size_write(&vnode->vfs_inode, (b + 1) * AFS_DIR_BLOCK_SIZE);
++			afs_set_i_size(vnode, (b + 1) * AFS_DIR_BLOCK_SIZE);
+ 		}
+ 
+ 		/* Only lower dir pages have a counter in the header. */
+@@ -296,7 +296,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
+ new_directory:
+ 	afs_edit_init_block(meta, meta, 0);
+ 	i_size = AFS_DIR_BLOCK_SIZE;
+-	i_size_write(&vnode->vfs_inode, i_size);
++	afs_set_i_size(vnode, i_size);
+ 	slot = AFS_DIR_RESV_BLOCKS0;
+ 	page = page0;
+ 	block = meta;
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index e7e98ad63a91a..c0031a3ab42f5 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include "afs_fs.h"
+ #include "internal.h"
++#include "protocol_afs.h"
+ #include "protocol_yfs.h"
+ 
+ static unsigned int afs_fs_probe_fast_poll_interval = 30 * HZ;
+@@ -102,7 +103,7 @@ void afs_fileserver_probe_result(struct afs_call *call)
+ 	struct afs_addr_list *alist = call->alist;
+ 	struct afs_server *server = call->server;
+ 	unsigned int index = call->addr_ix;
+-	unsigned int rtt_us = 0;
++	unsigned int rtt_us = 0, cap0;
+ 	int ret = call->error;
+ 
+ 	_enter("%pU,%u", &server->uuid, index);
+@@ -159,6 +160,11 @@ responded:
+ 			clear_bit(AFS_SERVER_FL_IS_YFS, &server->flags);
+ 			alist->addrs[index].srx_service = call->service_id;
+ 		}
++		cap0 = ntohl(call->tmp);
++		if (cap0 & AFS3_VICED_CAPABILITY_64BITFILES)
++			set_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
++		else
++			clear_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
+ 	}
+ 
+ 	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index dd3f45d906d23..4943413d9c5f7 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -456,9 +456,7 @@ void afs_fs_fetch_data(struct afs_operation *op)
+ 	struct afs_read *req = op->fetch.req;
+ 	__be32 *bp;
+ 
+-	if (upper_32_bits(req->pos) ||
+-	    upper_32_bits(req->len) ||
+-	    upper_32_bits(req->pos + req->len))
++	if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
+ 		return afs_fs_fetch_data64(op);
+ 
+ 	_enter("");
+@@ -1113,9 +1111,7 @@ void afs_fs_store_data(struct afs_operation *op)
+ 	       (unsigned long long)op->store.pos,
+ 	       (unsigned long long)op->store.i_size);
+ 
+-	if (upper_32_bits(op->store.pos) ||
+-	    upper_32_bits(op->store.size) ||
+-	    upper_32_bits(op->store.i_size))
++	if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
+ 		return afs_fs_store_data64(op);
+ 
+ 	call = afs_alloc_flat_call(op->net, &afs_RXFSStoreData,
+@@ -1229,7 +1225,7 @@ static void afs_fs_setattr_size(struct afs_operation *op)
+ 	       key_serial(op->key), vp->fid.vid, vp->fid.vnode);
+ 
+ 	ASSERT(attr->ia_valid & ATTR_SIZE);
+-	if (upper_32_bits(attr->ia_size))
++	if (test_bit(AFS_SERVER_FL_HAS_FS64, &op->server->flags))
+ 		return afs_fs_setattr_size64(op);
+ 
+ 	call = afs_alloc_flat_call(op->net, &afs_RXFSStoreData_as_Status,
+@@ -1657,20 +1653,33 @@ static int afs_deliver_fs_get_capabilities(struct afs_call *call)
+ 			return ret;
+ 
+ 		count = ntohl(call->tmp);
+-
+ 		call->count = count;
+ 		call->count2 = count;
+-		afs_extract_discard(call, count * sizeof(__be32));
++		if (count == 0) {
++			call->unmarshall = 4;
++			call->tmp = 0;
++			break;
++		}
++
++		/* Extract the first word of the capabilities to call->tmp */
++		afs_extract_to_tmp(call);
+ 		call->unmarshall++;
+ 		fallthrough;
+ 
+-		/* Extract capabilities words */
+ 	case 2:
+ 		ret = afs_extract_data(call, false);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		/* TODO: Examine capabilities */
++		afs_extract_discard(call, (count - 1) * sizeof(__be32));
++		call->unmarshall++;
++		fallthrough;
++
++		/* Extract remaining capabilities words */
++	case 3:
++		ret = afs_extract_data(call, false);
++		if (ret < 0)
++			return ret;
+ 
+ 		call->unmarshall++;
+ 		break;
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 80b6c8d967d5c..c18cbc69fa582 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -53,16 +53,6 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren
+ 		dump_stack();
+ }
+ 
+-/*
+- * Set the file size and block count.  Estimate the number of 512 bytes blocks
+- * used, rounded up to nearest 1K for consistency with other AFS clients.
+- */
+-static void afs_set_i_size(struct afs_vnode *vnode, u64 size)
+-{
+-	i_size_write(&vnode->vfs_inode, size);
+-	vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
+-}
+-
+ /*
+  * Initialise an inode from the vnode status.
+  */
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 5ed416f4ff335..345494881f655 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -516,6 +516,7 @@ struct afs_server {
+ #define AFS_SERVER_FL_IS_YFS	16		/* Server is YFS not AFS */
+ #define AFS_SERVER_FL_NO_IBULK	17		/* Fileserver doesn't support FS.InlineBulkStatus */
+ #define AFS_SERVER_FL_NO_RM2	18		/* Fileserver doesn't support YFS.RemoveFile2 */
++#define AFS_SERVER_FL_HAS_FS64	19		/* Fileserver supports FS.{Fetch,Store}Data64 */
+ 	atomic_t		ref;		/* Object refcount */
+ 	atomic_t		active;		/* Active user count */
+ 	u32			addr_version;	/* Address list version */
+@@ -1585,6 +1586,16 @@ static inline void afs_update_dentry_version(struct afs_operation *op,
+ 			(void *)(unsigned long)dir_vp->scb.status.data_version;
+ }
+ 
++/*
++ * Set the file size and block count.  Estimate the number of 512-byte blocks
++ * used, rounded up to the nearest 1K for consistency with other AFS clients.
++ */
++static inline void afs_set_i_size(struct afs_vnode *vnode, u64 size)
++{
++	i_size_write(&vnode->vfs_inode, size);
++	vnode->vfs_inode.i_blocks = ((size + 1023) >> 10) << 1;
++}
++
+ /*
+  * Check for a conflicting operation on a directory that we just unlinked from.
+  * If someone managed to sneak a link or an unlink in on the file we just
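The afs_set_i_size() helper moved into internal.h above charges i_blocks in 512-byte sectors but rounds the size up to a whole 1K first, so the count is always even. A worked example of the expression (standalone sketch):

	static unsigned long long afs_blocks_for(unsigned long long size)
	{
		return ((size + 1023) >> 10) << 1;
	}

	/* afs_blocks_for(1)    == 2   -- a 1-byte file is charged 1 KiB  */
	/* afs_blocks_for(1024) == 2   -- exactly one 1K unit             */
	/* afs_blocks_for(1025) == 4   -- spills into a second 1K unit    */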
+diff --git a/fs/afs/protocol_afs.h b/fs/afs/protocol_afs.h
+new file mode 100644
+index 0000000000000..0c39358c8b702
+--- /dev/null
++++ b/fs/afs/protocol_afs.h
+@@ -0,0 +1,15 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/* AFS protocol bits
++ *
++ * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
++ * Written by David Howells (dhowells@redhat.com)
++ */
++
++
++#define AFSCAPABILITIESMAX 196 /* Maximum number of words in a capability set */
++
++/* AFS3 Fileserver capabilities word 0 */
++#define AFS3_VICED_CAPABILITY_ERRORTRANS	0x0001 /* Uses UAE errors */
++#define AFS3_VICED_CAPABILITY_64BITFILES	0x0002 /* FetchData64 & StoreData64 supported */
++#define AFS3_VICED_CAPABILITY_WRITELOCKACL	0x0004 /* Can lock a file even without lock perm */
++#define AFS3_VICED_CAPABILITY_SANEACLS		0x0008 /* ACLs reviewed for sanity - don't use */
+diff --git a/fs/afs/protocol_yfs.h b/fs/afs/protocol_yfs.h
+index b5bd03b1d3c7f..e4cd89c44c465 100644
+--- a/fs/afs/protocol_yfs.h
++++ b/fs/afs/protocol_yfs.h
+@@ -168,3 +168,9 @@ enum yfs_lock_type {
+ 	yfs_LockMandatoryWrite	= 0x101,
+ 	yfs_LockMandatoryExtend	= 0x102,
+ };
++
++/* RXYFS Viced Capability Flags */
++#define YFS_VICED_CAPABILITY_ERRORTRANS		0x0001 /* Deprecated v0.195 */
++#define YFS_VICED_CAPABILITY_64BITFILES		0x0002 /* Deprecated v0.195 */
++#define YFS_VICED_CAPABILITY_WRITELOCKACL	0x0004 /* Can lock a file even without lock perm */
++#define YFS_VICED_CAPABILITY_SANEACLS		0x0008 /* Deprecated v0.195 */
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index c0534697268ef..e86f5a245514d 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -137,7 +137,7 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+ 		write_seqlock(&vnode->cb_lock);
+ 		i_size = i_size_read(&vnode->vfs_inode);
+ 		if (maybe_i_size > i_size)
+-			i_size_write(&vnode->vfs_inode, maybe_i_size);
++			afs_set_i_size(vnode, maybe_i_size);
+ 		write_sequnlock(&vnode->cb_lock);
+ 	}
+ 
+@@ -471,13 +471,18 @@ static void afs_extend_writeback(struct address_space *mapping,
+ 			}
+ 
+ 			/* Has the page moved or been split? */
+-			if (unlikely(page != xas_reload(&xas)))
++			if (unlikely(page != xas_reload(&xas))) {
++				put_page(page);
+ 				break;
++			}
+ 
+-			if (!trylock_page(page))
++			if (!trylock_page(page)) {
++				put_page(page);
+ 				break;
++			}
+ 			if (!PageDirty(page) || PageWriteback(page)) {
+ 				unlock_page(page);
++				put_page(page);
+ 				break;
+ 			}
+ 
+@@ -487,6 +492,7 @@ static void afs_extend_writeback(struct address_space *mapping,
+ 			t = afs_page_dirty_to(page, priv);
+ 			if (f != 0 && !new_content) {
+ 				unlock_page(page);
++				put_page(page);
+ 				break;
+ 			}
+ 
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 46e8415fa2c55..0842efa6f7120 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -414,9 +414,10 @@ static void __btrfs_dump_space_info(struct btrfs_fs_info *fs_info,
+ {
+ 	lockdep_assert_held(&info->lock);
+ 
+-	btrfs_info(fs_info, "space_info %llu has %llu free, is %sfull",
++	/* The free space could be negative in case of overcommit */
++	btrfs_info(fs_info, "space_info %llu has %lld free, is %sfull",
+ 		   info->flags,
+-		   info->total_bytes - btrfs_space_info_used(info, true),
++		   (s64)(info->total_bytes - btrfs_space_info_used(info, true)),
+ 		   info->full ? "" : "not ");
+ 	btrfs_info(fs_info,
+ 		"space_info total=%llu, used=%llu, pinned=%llu, reserved=%llu, may_use=%llu, readonly=%llu zone_unusable=%llu",
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index c6a9542ca281b..cf2141483b37f 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1403,6 +1403,7 @@ struct cifsInodeInfo {
+ #define CIFS_INO_INVALID_MAPPING	  (4) /* pagecache is invalid */
+ #define CIFS_INO_LOCK			  (5) /* lock bit for synchronization */
+ #define CIFS_INO_MODIFIED_ATTR            (6) /* Indicate change in mtime/ctime */
++#define CIFS_INO_CLOSE_ON_LOCK            (7) /* Not to defer the close when lock is set */
+ 	unsigned long flags;
+ 	spinlock_t writers_lock;
+ 	unsigned int writers;		/* Number of writers on this inode */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 3781eee9360af..65d3cf80444bf 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2382,9 +2382,10 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	cifs_sb = CIFS_SB(sb);
+ 	tlink = cifs_get_tlink(cifs_sb_master_tlink(cifs_sb));
+-	if (IS_ERR(tlink)) {
++	if (tlink == NULL) {
++		/* cannot match the superblock if tlink was ever null */
+ 		spin_unlock(&cifs_tcp_ses_lock);
+-		return rc;
++		return 0;
+ 	}
+ 	tcon = tlink_tcon(tlink);
+ 	ses = tcon->ses;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index bb98fbdd22a99..ab2734159c192 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -881,6 +881,7 @@ int cifs_close(struct inode *inode, struct file *file)
+ 		dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL);
+ 		if ((cinode->oplock == CIFS_CACHE_RHW_FLG) &&
+ 		    cinode->lease_granted &&
++		    !test_bit(CIFS_INO_CLOSE_ON_LOCK, &cinode->flags) &&
+ 		    dclose) {
+ 			if (test_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags))
+ 				inode->i_ctime = inode->i_mtime = current_time(inode);
+@@ -1861,6 +1862,7 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock)
+ 	cifs_read_flock(flock, &type, &lock, &unlock, &wait_flag,
+ 			tcon->ses->server);
+ 	cifs_sb = CIFS_FILE_SB(file);
++	set_bit(CIFS_INO_CLOSE_ON_LOCK, &CIFS_I(d_inode(cfile->dentry))->flags);
+ 
+ 	if (cap_unix(tcon->ses) &&
+ 	    (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&
+@@ -3108,7 +3110,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
+ 	struct cifs_tcon *tcon;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct dentry *dentry = ctx->cfile->dentry;
+-	int rc;
++	ssize_t rc;
+ 
+ 	tcon = tlink_tcon(ctx->cfile->tlink);
+ 	cifs_sb = CIFS_SB(dentry->d_sb);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 9469f1cf0b46a..57e695e3c969b 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -736,7 +736,7 @@ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ 			if (cancel_delayed_work(&cfile->deferred)) {
+ 				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ 				if (tmp_list == NULL)
+-					continue;
++					break;
+ 				tmp_list->cfile = cfile;
+ 				list_add_tail(&tmp_list->list, &file_head);
+ 			}
+@@ -767,7 +767,7 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ 			if (cancel_delayed_work(&cfile->deferred)) {
+ 				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ 				if (tmp_list == NULL)
+-					continue;
++					break;
+ 				tmp_list->cfile = cfile;
+ 				list_add_tail(&tmp_list->list, &file_head);
+ 			}
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 754d59f734d84..699a08d724c24 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4043,7 +4043,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 	int i, bid = pbuf->bid;
+ 
+ 	for (i = 0; i < pbuf->nbufs; i++) {
+-		buf = kmalloc(sizeof(*buf), GFP_KERNEL);
++		buf = kmalloc(sizeof(*buf), GFP_KERNEL_ACCOUNT);
+ 		if (!buf)
+ 			break;
+ 
+@@ -4969,7 +4969,7 @@ static bool io_poll_complete(struct io_kiocb *req, __poll_t mask)
+ 	if (req->poll.events & EPOLLONESHOT)
+ 		flags = 0;
+ 	if (!io_cqring_fill_event(ctx, req->user_data, error, flags)) {
+-		req->poll.done = true;
++		req->poll.events |= EPOLLONESHOT;
+ 		flags = 0;
+ 	}
+ 	if (flags & IORING_CQE_F_MORE)
+@@ -4993,6 +4993,7 @@ static void io_poll_task_func(struct io_kiocb *req)
+ 		if (done) {
+ 			io_poll_remove_double(req);
+ 			hash_del(&req->hash_node);
++			req->poll.done = true;
+ 		} else {
+ 			req->result = 0;
+ 			add_wait_queue(req->poll.head, &req->poll.wait);
+@@ -5126,6 +5127,7 @@ static void io_async_task_func(struct io_kiocb *req)
+ 
+ 	hash_del(&req->hash_node);
+ 	io_poll_remove_double(req);
++	apoll->poll.done = true;
+ 	spin_unlock_irq(&ctx->completion_lock);
+ 
+ 	if (!READ_ONCE(apoll->poll.canceled))
+@@ -5917,19 +5919,16 @@ static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
+ 	struct io_uring_rsrc_update2 up;
+ 	int ret;
+ 
+-	if (issue_flags & IO_URING_F_NONBLOCK)
+-		return -EAGAIN;
+-
+ 	up.offset = req->rsrc_update.offset;
+ 	up.data = req->rsrc_update.arg;
+ 	up.nr = 0;
+ 	up.tags = 0;
+ 	up.resv = 0;
+ 
+-	mutex_lock(&ctx->uring_lock);
++	io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ 	ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
+ 					&up, req->rsrc_update.nr_args);
+-	mutex_unlock(&ctx->uring_lock);
++	io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
+ 
+ 	if (ret < 0)
+ 		req_set_fail(req);
+diff --git a/fs/lockd/svcxdr.h b/fs/lockd/svcxdr.h
+index c69a0bb76c940..4f1a451da5ba2 100644
+--- a/fs/lockd/svcxdr.h
++++ b/fs/lockd/svcxdr.h
+@@ -134,18 +134,9 @@ svcxdr_decode_owner(struct xdr_stream *xdr, struct xdr_netobj *obj)
+ static inline bool
+ svcxdr_encode_owner(struct xdr_stream *xdr, const struct xdr_netobj *obj)
+ {
+-	unsigned int quadlen = XDR_QUADLEN(obj->len);
+-	__be32 *p;
+-
+-	if (xdr_stream_encode_u32(xdr, obj->len) < 0)
+-		return false;
+-	p = xdr_reserve_space(xdr, obj->len);
+-	if (!p)
++	if (obj->len > XDR_MAX_NETOBJ)
+ 		return false;
+-	p[quadlen - 1] = 0;	/* XDR pad */
+-	memcpy(p, obj->data, obj->len);
+-
+-	return true;
++	return xdr_stream_encode_opaque(xdr, obj->data, obj->len) > 0;
+ }
+ 
+ #endif /* _LOCKD_SVCXDR_H_ */
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 48fd369c29a4b..a2a2ae37b859a 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3939,7 +3939,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ 		oi = OCFS2_I(inode);
+ 		oi->ip_dir_lock_gen++;
+ 		mlog(0, "generation: %u\n", oi->ip_dir_lock_gen);
+-		goto out;
++		goto out_forget;
+ 	}
+ 
+ 	if (!S_ISREG(inode->i_mode))
+@@ -3970,6 +3970,7 @@ static int ocfs2_data_convert_worker(struct ocfs2_lock_res *lockres,
+ 		filemap_fdatawait(mapping);
+ 	}
+ 
++out_forget:
+ 	forget_all_cached_acls(inode);
+ 
+ out:
+diff --git a/fs/qnx4/dir.c b/fs/qnx4/dir.c
+index a6ee23aadd283..66645a5a35f30 100644
+--- a/fs/qnx4/dir.c
++++ b/fs/qnx4/dir.c
+@@ -15,13 +15,48 @@
+ #include <linux/buffer_head.h>
+ #include "qnx4.h"
+ 
++/*
++ * A qnx4 directory entry is an inode entry or link info
++ * depending on the status field in the last byte. The
++ * first byte is where the name start either way, and a
++ * zero means it's empty.
++ *
++ * Also, due to a bug in gcc, we don't want to use the
++ * real (differently sized) name arrays in the inode and
++ * link entries, but always the 'de_name[]' one in the
++ * fake struct entry.
++ *
++ * See
++ *
++ *   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578#c6
++ *
++ * for details, but basically gcc will take the size of the
++ * 'name' array from one of the used union entries randomly.
++ *
++ * This use of 'de_name[]' (48 bytes) avoids the false positive
++ * warnings that would happen if gcc decides to use 'inode.di_name'
++ * (16 bytes) even when the pointer and size were to come from
++ * 'link.dl_name' (48 bytes).
++ *
++ * In all cases the actual name pointer itself is the same, it's
++ * only the gcc internal 'what is the size of this field' logic
++ * that can get confused.
++ */
++union qnx4_directory_entry {
++	struct {
++		const char de_name[48];
++		u8 de_pad[15];
++		u8 de_status;
++	};
++	struct qnx4_inode_entry inode;
++	struct qnx4_link_info link;
++};
++
+ static int qnx4_readdir(struct file *file, struct dir_context *ctx)
+ {
+ 	struct inode *inode = file_inode(file);
+ 	unsigned int offset;
+ 	struct buffer_head *bh;
+-	struct qnx4_inode_entry *de;
+-	struct qnx4_link_info *le;
+ 	unsigned long blknum;
+ 	int ix, ino;
+ 	int size;
+@@ -38,27 +73,27 @@ static int qnx4_readdir(struct file *file, struct dir_context *ctx)
+ 		}
+ 		ix = (ctx->pos >> QNX4_DIR_ENTRY_SIZE_BITS) % QNX4_INODES_PER_BLOCK;
+ 		for (; ix < QNX4_INODES_PER_BLOCK; ix++, ctx->pos += QNX4_DIR_ENTRY_SIZE) {
++			union qnx4_directory_entry *de;
++
+ 			offset = ix * QNX4_DIR_ENTRY_SIZE;
+-			de = (struct qnx4_inode_entry *) (bh->b_data + offset);
+-			if (!de->di_fname[0])
++			de = (union qnx4_directory_entry *) (bh->b_data + offset);
++
++			if (!de->de_name[0])
+ 				continue;
+-			if (!(de->di_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
++			if (!(de->de_status & (QNX4_FILE_USED|QNX4_FILE_LINK)))
+ 				continue;
+-			if (!(de->di_status & QNX4_FILE_LINK))
+-				size = QNX4_SHORT_NAME_MAX;
+-			else
+-				size = QNX4_NAME_MAX;
+-			size = strnlen(de->di_fname, size);
+-			QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, de->di_fname));
+-			if (!(de->di_status & QNX4_FILE_LINK))
++			if (!(de->de_status & QNX4_FILE_LINK)) {
++				size = sizeof(de->inode.di_fname);
+ 				ino = blknum * QNX4_INODES_PER_BLOCK + ix - 1;
+-			else {
+-				le  = (struct qnx4_link_info*)de;
+-				ino = ( le32_to_cpu(le->dl_inode_blk) - 1 ) *
++			} else {
++				size = sizeof(de->link.dl_fname);
++				ino = ( le32_to_cpu(de->link.dl_inode_blk) - 1 ) *
+ 					QNX4_INODES_PER_BLOCK +
+-					le->dl_inode_ndx;
++					de->link.dl_inode_ndx;
+ 			}
+-			if (!dir_emit(ctx, de->di_fname, size, ino, DT_UNKNOWN)) {
++			size = strnlen(de->de_name, size);
++			QNX4DEBUG((KERN_INFO "qnx4_readdir:%.*s\n", size, de->de_name));
++			if (!dir_emit(ctx, de->de_name, size, ino, DT_UNKNOWN)) {
+ 				brelse(bh);
+ 				return 0;
+ 			}
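The long comment in the qnx4 hunk above describes a gcc quirk that is easier to see reduced to its essentials. A minimal sketch of the workaround, with stand-in types rather than the real qnx4 structures: reads always go through a fixed-size common view of the union, so gcc cannot pick the smaller member's bound and raise -Wstringop-overread:

	#include <stddef.h>
	#include <string.h>

	union entry {
		struct {
			const char de_name[48];	/* common prefix view, fixed 48 bytes */
		};
		struct { char di_fname[16]; } inode;	/* stand-in for qnx4_inode_entry */
		struct { char dl_fname[48]; } link;	/* stand-in for qnx4_link_info   */
	};

	static size_t entry_name_len(const union entry *e)
	{
		/* Measuring through de_name keeps the compiler's notion of the
		 * object size at 48, whichever member actually holds the name. */
		return strnlen(e->de_name, sizeof(e->de_name));
	}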
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index b67261a1e3e9c..3d5af56337bdb 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -188,6 +188,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+     (typeof(ptr)) (__ptr + (off)); })
+ #endif
+ 
++#define absolute_pointer(val)	RELOC_HIDE((void *)(val), 0)
++
+ #ifndef OPTIMIZER_HIDE_VAR
+ /* Make the optimizer believe the variable can be manipulated arbitrarily. */
+ #define OPTIMIZER_HIDE_VAR(var)						\
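absolute_pointer() above wraps RELOC_HIDE() so that a fixed address can be turned into a pointer the optimizer treats as opaque, instead of as a tiny (or NULL-adjacent) object it can warn about. A hypothetical use, sketching the intended pattern:

	#include <linux/compiler.h>
	#include <linux/string.h>

	#define DEMO_ROM_BASE	0xf0000		/* hypothetical legacy ROM window */

	static void demo_copy_rom(void *dst, size_t len)
	{
		/* memcpy(dst, (void *)0xf0000, len) can trip array-bounds and
		 * stringop warnings; absolute_pointer() hides the constant origin. */
		memcpy(dst, absolute_pointer(DEMO_ROM_BASE), len);
	}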
+diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
+index 6beb26b7151d2..86be8bf27b41b 100644
+--- a/include/linux/pkeys.h
++++ b/include/linux/pkeys.h
+@@ -4,6 +4,8 @@
+ 
+ #include <linux/mm.h>
+ 
++#define ARCH_DEFAULT_PKEY	0
++
+ #ifdef CONFIG_ARCH_HAS_PKEYS
+ #include <asm/pkeys.h>
+ #else /* ! CONFIG_ARCH_HAS_PKEYS */
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 548a028f2dabb..2c1fc9212cf28 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,6 +124,7 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
++#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -134,6 +135,7 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index d833f717e8022..004514a21e306 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -575,8 +575,16 @@ struct dsa_switch_ops {
+ 	int	(*change_tag_protocol)(struct dsa_switch *ds, int port,
+ 				       enum dsa_tag_protocol proto);
+ 
++	/* Optional switch-wide initialization and destruction methods */
+ 	int	(*setup)(struct dsa_switch *ds);
+ 	void	(*teardown)(struct dsa_switch *ds);
++
++	/* Per-port initialization and destruction methods. Mandatory if the
++	 * driver registers devlink port regions, optional otherwise.
++	 */
++	int	(*port_setup)(struct dsa_switch *ds, int port);
++	void	(*port_teardown)(struct dsa_switch *ds, int port);
++
+ 	u32	(*get_phy_flags)(struct dsa_switch *ds, int port);
+ 
+ 	/*
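The two hooks added to dsa_switch_ops above give drivers a per-port stage that the core can balance against devlink port re-registration. A hypothetical driver-side sketch (everything except the two ops is invented), registering one devlink region per port:

	static int demo_port_setup(struct dsa_switch *ds, int port)
	{
		struct demo_priv *priv = ds->priv;

		priv->port_region[port] = demo_register_port_region(ds, port);
		return PTR_ERR_OR_ZERO(priv->port_region[port]);
	}

	static void demo_port_teardown(struct dsa_switch *ds, int port)
	{
		struct demo_priv *priv = ds->priv;

		demo_unregister_port_region(priv->port_region[port]);
	}

	static const struct dsa_switch_ops demo_ops = {
		/* ... existing ops ... */
		.port_setup	= demo_port_setup,
		.port_teardown	= demo_port_teardown,
	};

Because the core now calls port_teardown before destroying a devlink port (see the dsa2.c hunks further down), the region never outlives the port it was registered against.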
+diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
+index bf9806fd13065..db4f2cec83606 100644
+--- a/include/trace/events/erofs.h
++++ b/include/trace/events/erofs.h
+@@ -35,20 +35,20 @@ TRACE_EVENT(erofs_lookup,
+ 	TP_STRUCT__entry(
+ 		__field(dev_t,		dev	)
+ 		__field(erofs_nid_t,	nid	)
+-		__field(const char *,	name	)
++		__string(name,		dentry->d_name.name	)
+ 		__field(unsigned int,	flags	)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->dev	= dir->i_sb->s_dev;
+ 		__entry->nid	= EROFS_I(dir)->nid;
+-		__entry->name	= dentry->d_name.name;
++		__assign_str(name, dentry->d_name.name);
+ 		__entry->flags	= flags;
+ 	),
+ 
+ 	TP_printk("dev = (%d,%d), pnid = %llu, name:%s, flags:%x",
+ 		show_dev_nid(__entry),
+-		__entry->name,
++		__get_str(name),
+ 		__entry->flags)
+ );
+ 
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index 20e435fe657a1..3246f2c746969 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -225,7 +225,14 @@ struct binder_freeze_info {
+ 
+ struct binder_frozen_status_info {
+ 	__u32            pid;
++
++	/* process received sync transactions since last frozen
++	 * bit 0: received sync transaction after being frozen
++	 * bit 1: new pending sync transaction during freezing
++	 */
+ 	__u32            sync_recv;
++
++	/* process received async transactions since last frozen */
+ 	__u32            async_recv;
+ };
+ 
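The binder header comments above document the bit layout of sync_recv. From user space the fields are read back with the BINDER_GET_FROZEN_INFO ioctl; a small, purely illustrative consumer:

	#include <stdbool.h>
	#include <sys/ioctl.h>
	#include <linux/android/binder.h>

	static void demo_query_frozen(int binder_fd, __u32 target_pid)
	{
		struct binder_frozen_status_info info = { .pid = target_pid };

		if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &info) == 0) {
			bool got_sync  = info.sync_recv & 1;	/* bit 0: sync txn arrived while frozen */
			bool racy_sync = info.sync_recv & 2;	/* bit 1: sync txn pending during the freeze */
			bool got_async = info.async_recv & 1;

			(void)got_sync; (void)racy_sync; (void)got_async;
		}
	}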
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 9d94ac6ff50c4..592b9b68cbd93 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -9641,6 +9641,8 @@ static int check_btf_line(struct bpf_verifier_env *env,
+ 	nr_linfo = attr->line_info_cnt;
+ 	if (!nr_linfo)
+ 		return 0;
++	if (nr_linfo > INT_MAX / sizeof(struct bpf_line_info))
++		return -EINVAL;
+ 
+ 	rec_size = attr->line_info_rec_size;
+ 	if (rec_size < MIN_BPF_LINEINFO_SIZE ||
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index f2faa13534e57..70519f67556f9 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -567,7 +567,8 @@ static void add_dma_entry(struct dma_debug_entry *entry)
+ 		pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
+ 		global_disable = true;
+ 	} else if (rc == -EEXIST) {
+-		pr_err("cacheline tracking EEXIST, overlapping mappings aren't supported\n");
++		err_printk(entry->dev, entry,
++			"cacheline tracking EEXIST, overlapping mappings aren't supported\n");
+ 	}
+ }
+ 
+diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
+index 49972ee99aff6..049fd06b4c3de 100644
+--- a/kernel/entry/kvm.c
++++ b/kernel/entry/kvm.c
+@@ -19,8 +19,10 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
+ 		if (ti_work & _TIF_NEED_RESCHED)
+ 			schedule();
+ 
+-		if (ti_work & _TIF_NOTIFY_RESUME)
++		if (ti_work & _TIF_NOTIFY_RESUME) {
+ 			tracehook_notify_resume(NULL);
++			rseq_handle_notify_resume(NULL, NULL);
++		}
+ 
+ 		ret = arch_xfer_to_guest_mode_handle_work(vcpu, ti_work);
+ 		if (ret)
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 35f7bd0fced0e..6d45ac3dae7fb 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -282,9 +282,17 @@ void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
+ 
+ 	if (unlikely(t->flags & PF_EXITING))
+ 		return;
+-	ret = rseq_ip_fixup(regs);
+-	if (unlikely(ret < 0))
+-		goto error;
++
++	/*
++	 * regs is NULL if and only if the caller is in a syscall path.  Skip
++	 * fixup and leave rseq_cs as is so that rseq_sycall() will detect and
++	 * kill a misbehaving userspace on debug kernels.
++	 */
++	if (regs) {
++		ret = rseq_ip_fixup(regs);
++		if (unlikely(ret < 0))
++			goto error;
++	}
+ 	if (unlikely(rseq_update_cpu_id(t)))
+ 		goto error;
+ 	return;
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index c221e4c3f625c..fa91f398f28b7 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1605,6 +1605,14 @@ static int blk_trace_remove_queue(struct request_queue *q)
+ 	if (bt == NULL)
+ 		return -EINVAL;
+ 
++	if (bt->trace_state == Blktrace_running) {
++		bt->trace_state = Blktrace_stopped;
++		spin_lock_irq(&running_trace_lock);
++		list_del_init(&bt->running_list);
++		spin_unlock_irq(&running_trace_lock);
++		relay_flush(bt->rchan);
++	}
++
+ 	put_probe_ref();
+ 	synchronize_rcu();
+ 	blk_trace_free(bt);
+diff --git a/mm/debug.c b/mm/debug.c
+index e73fe0a8ec3d2..e61037cded980 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -24,7 +24,8 @@ const char *migrate_reason_names[MR_TYPES] = {
+ 	"syscall_or_cpuset",
+ 	"mempolicy_mbind",
+ 	"numa_misplaced",
+-	"cma",
++	"contig_range",
++	"longterm_pin",
+ };
+ 
+ const struct trace_print_flags pageflag_names[] = {
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 83811c976c0cb..7df9fde18004c 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1127,7 +1127,7 @@ static int page_action(struct page_state *ps, struct page *p,
+  */
+ static inline bool HWPoisonHandlable(struct page *page)
+ {
+-	return PageLRU(page) || __PageMovable(page);
++	return PageLRU(page) || __PageMovable(page) || is_free_buddy_page(page);
+ }
+ 
+ static int __get_hwpoison_page(struct page *page)
+diff --git a/mm/util.c b/mm/util.c
+index 9043d03750a73..c18202b3e659d 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -768,7 +768,7 @@ int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+ 		size_t *lenp, loff_t *ppos)
+ {
+ 	struct ctl_table t;
+-	int new_policy;
++	int new_policy = -1;
+ 	int ret;
+ 
+ 	/*
+@@ -786,7 +786,7 @@ int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+ 		t = *table;
+ 		t.data = &new_policy;
+ 		ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+-		if (ret)
++		if (ret || new_policy == -1)
+ 			return ret;
+ 
+ 		mm_compute_batch(new_policy);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8f1a47ad6781a..693f15a056304 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6988,12 +6988,16 @@ EXPORT_SYMBOL(napi_disable);
+  */
+ void napi_enable(struct napi_struct *n)
+ {
+-	BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));
+-	smp_mb__before_atomic();
+-	clear_bit(NAPI_STATE_SCHED, &n->state);
+-	clear_bit(NAPI_STATE_NPSVC, &n->state);
+-	if (n->dev->threaded && n->thread)
+-		set_bit(NAPI_STATE_THREADED, &n->state);
++	unsigned long val, new;
++
++	do {
++		val = READ_ONCE(n->state);
++		BUG_ON(!test_bit(NAPI_STATE_SCHED, &val));
++
++		new = val & ~(NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC);
++		if (n->dev->threaded && n->thread)
++			new |= NAPIF_STATE_THREADED;
++	} while (cmpxchg(&n->state, val, new) != val);
+ }
+ EXPORT_SYMBOL(napi_enable);
+ 
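The napi_enable() rewrite above is the standard lock-free read/modify/cmpxchg retry loop, which makes the flag update atomic against concurrent state changes (the old clear_bit pair left a window between the two bits). The same shape, reduced to a generic kernel-style sketch:

	static void demo_update_flags(unsigned long *state,
				      unsigned long clear, unsigned long set)
	{
		unsigned long val, new;

		do {
			val = READ_ONCE(*state);
			new = (val & ~clear) | set;
			/* retry if *state changed between the read and the cmpxchg */
		} while (cmpxchg(state, val, new) != val);
	}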
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 79267b00af68f..76ed5ef0e36a8 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -342,6 +342,7 @@ static int dsa_port_setup(struct dsa_port *dp)
+ {
+ 	struct devlink_port *dlp = &dp->devlink_port;
+ 	bool dsa_port_link_registered = false;
++	struct dsa_switch *ds = dp->ds;
+ 	bool dsa_port_enabled = false;
+ 	int err = 0;
+ 
+@@ -351,6 +352,12 @@ static int dsa_port_setup(struct dsa_port *dp)
+ 	INIT_LIST_HEAD(&dp->fdbs);
+ 	INIT_LIST_HEAD(&dp->mdbs);
+ 
++	if (ds->ops->port_setup) {
++		err = ds->ops->port_setup(ds, dp->index);
++		if (err)
++			return err;
++	}
++
+ 	switch (dp->type) {
+ 	case DSA_PORT_TYPE_UNUSED:
+ 		dsa_port_disable(dp);
+@@ -393,8 +400,11 @@ static int dsa_port_setup(struct dsa_port *dp)
+ 		dsa_port_disable(dp);
+ 	if (err && dsa_port_link_registered)
+ 		dsa_port_link_unregister_of(dp);
+-	if (err)
++	if (err) {
++		if (ds->ops->port_teardown)
++			ds->ops->port_teardown(ds, dp->index);
+ 		return err;
++	}
+ 
+ 	dp->setup = true;
+ 
+@@ -446,11 +456,15 @@ static int dsa_port_devlink_setup(struct dsa_port *dp)
+ static void dsa_port_teardown(struct dsa_port *dp)
+ {
+ 	struct devlink_port *dlp = &dp->devlink_port;
++	struct dsa_switch *ds = dp->ds;
+ 	struct dsa_mac_addr *a, *tmp;
+ 
+ 	if (!dp->setup)
+ 		return;
+ 
++	if (ds->ops->port_teardown)
++		ds->ops->port_teardown(ds, dp->index);
++
+ 	devlink_port_type_clear(dlp);
+ 
+ 	switch (dp->type) {
+@@ -494,6 +508,36 @@ static void dsa_port_devlink_teardown(struct dsa_port *dp)
+ 	dp->devlink_port_setup = false;
+ }
+ 
++/* Destroy the current devlink port, and create a new one which has the UNUSED
++ * flavour. At this point, any call to ds->ops->port_setup has been already
++ * balanced out by a call to ds->ops->port_teardown, so we know that any
++ * devlink port regions the driver had are now unregistered. We then call its
++ * ds->ops->port_setup again, in order for the driver to re-create them on the
++ * new devlink port.
++ */
++static int dsa_port_reinit_as_unused(struct dsa_port *dp)
++{
++	struct dsa_switch *ds = dp->ds;
++	int err;
++
++	dsa_port_devlink_teardown(dp);
++	dp->type = DSA_PORT_TYPE_UNUSED;
++	err = dsa_port_devlink_setup(dp);
++	if (err)
++		return err;
++
++	if (ds->ops->port_setup) {
++		/* On error, leave the devlink port registered,
++		 * dsa_switch_teardown will clean it up later.
++		 */
++		err = ds->ops->port_setup(ds, dp->index);
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ static int dsa_devlink_info_get(struct devlink *dl,
+ 				struct devlink_info_req *req,
+ 				struct netlink_ext_ack *extack)
+@@ -748,7 +792,7 @@ static int dsa_switch_setup(struct dsa_switch *ds)
+ 	devlink_params_publish(ds->devlink);
+ 
+ 	if (!ds->slave_mii_bus && ds->ops->phy_read) {
+-		ds->slave_mii_bus = devm_mdiobus_alloc(ds->dev);
++		ds->slave_mii_bus = mdiobus_alloc();
+ 		if (!ds->slave_mii_bus) {
+ 			err = -ENOMEM;
+ 			goto teardown;
+@@ -758,13 +802,16 @@ static int dsa_switch_setup(struct dsa_switch *ds)
+ 
+ 		err = mdiobus_register(ds->slave_mii_bus);
+ 		if (err < 0)
+-			goto teardown;
++			goto free_slave_mii_bus;
+ 	}
+ 
+ 	ds->setup = true;
+ 
+ 	return 0;
+ 
++free_slave_mii_bus:
++	if (ds->slave_mii_bus && ds->ops->phy_read)
++		mdiobus_free(ds->slave_mii_bus);
+ teardown:
+ 	if (ds->ops->teardown)
+ 		ds->ops->teardown(ds);
+@@ -789,8 +836,11 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
+ 	if (!ds->setup)
+ 		return;
+ 
+-	if (ds->slave_mii_bus && ds->ops->phy_read)
++	if (ds->slave_mii_bus && ds->ops->phy_read) {
+ 		mdiobus_unregister(ds->slave_mii_bus);
++		mdiobus_free(ds->slave_mii_bus);
++		ds->slave_mii_bus = NULL;
++	}
+ 
+ 	dsa_switch_unregister_notifier(ds);
+ 
+@@ -850,12 +900,9 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
+ 	list_for_each_entry(dp, &dst->ports, list) {
+ 		err = dsa_port_setup(dp);
+ 		if (err) {
+-			dsa_port_devlink_teardown(dp);
+-			dp->type = DSA_PORT_TYPE_UNUSED;
+-			err = dsa_port_devlink_setup(dp);
++			err = dsa_port_reinit_as_unused(dp);
+ 			if (err)
+ 				goto teardown;
+-			continue;
+ 		}
+ 	}
+ 
+@@ -960,6 +1007,7 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ teardown_master:
+ 	dsa_tree_teardown_master(dst);
+ teardown_switches:
++	dsa_tree_teardown_ports(dst);
+ 	dsa_tree_teardown_switches(dst);
+ teardown_default_cpu:
+ 	dsa_tree_teardown_default_cpu(dst);
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 75ca4b6e484f4..9e8100728d464 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -1982,6 +1982,8 @@ static int replace_nexthop_grp(struct net *net, struct nexthop *old,
+ 	rcu_assign_pointer(old->nh_grp, newg);
+ 
+ 	if (newg->resilient) {
++		/* Make sure concurrent readers are not using 'oldg' anymore. */
++		synchronize_net();
+ 		rcu_assign_pointer(oldg->res_table, tmp_table);
+ 		rcu_assign_pointer(oldg->spare->res_table, tmp_table);
+ 	}
+@@ -3565,6 +3567,7 @@ static struct notifier_block nh_netdev_notifier = {
+ };
+ 
+ static int nexthops_dump(struct net *net, struct notifier_block *nb,
++			 enum nexthop_event_type event_type,
+ 			 struct netlink_ext_ack *extack)
+ {
+ 	struct rb_root *root = &net->nexthop.rb_root;
+@@ -3575,8 +3578,7 @@ static int nexthops_dump(struct net *net, struct notifier_block *nb,
+ 		struct nexthop *nh;
+ 
+ 		nh = rb_entry(node, struct nexthop, rb_node);
+-		err = call_nexthop_notifier(nb, net, NEXTHOP_EVENT_REPLACE, nh,
+-					    extack);
++		err = call_nexthop_notifier(nb, net, event_type, nh, extack);
+ 		if (err)
+ 			break;
+ 	}
+@@ -3590,7 +3592,7 @@ int register_nexthop_notifier(struct net *net, struct notifier_block *nb,
+ 	int err;
+ 
+ 	rtnl_lock();
+-	err = nexthops_dump(net, nb, extack);
++	err = nexthops_dump(net, nb, NEXTHOP_EVENT_REPLACE, extack);
+ 	if (err)
+ 		goto unlock;
+ 	err = blocking_notifier_chain_register(&net->nexthop.notifier_chain,
+@@ -3603,8 +3605,17 @@ EXPORT_SYMBOL(register_nexthop_notifier);
+ 
+ int unregister_nexthop_notifier(struct net *net, struct notifier_block *nb)
+ {
+-	return blocking_notifier_chain_unregister(&net->nexthop.notifier_chain,
+-						  nb);
++	int err;
++
++	rtnl_lock();
++	err = blocking_notifier_chain_unregister(&net->nexthop.notifier_chain,
++						 nb);
++	if (err)
++		goto unlock;
++	nexthops_dump(net, nb, NEXTHOP_EVENT_DEL, NULL);
++unlock:
++	rtnl_unlock();
++	return err;
+ }
+ EXPORT_SYMBOL(unregister_nexthop_notifier);
+ 
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index ef75c9b05f17e..68e94e9f5089a 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1378,7 +1378,6 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 	int err = -ENOMEM;
+ 	int allow_create = 1;
+ 	int replace_required = 0;
+-	int sernum = fib6_new_sernum(info->nl_net);
+ 
+ 	if (info->nlh) {
+ 		if (!(info->nlh->nlmsg_flags & NLM_F_CREATE))
+@@ -1478,7 +1477,7 @@ int fib6_add(struct fib6_node *root, struct fib6_info *rt,
+ 	if (!err) {
+ 		if (rt->nh)
+ 			list_add(&rt->nh_list, &rt->nh->f6i_list);
+-		__fib6_update_sernum_upto_root(rt, sernum);
++		__fib6_update_sernum_upto_root(rt, fib6_new_sernum(info->nl_net));
+ 		fib6_start_gc(info->nl_net, rt);
+ 	}
+ 
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index acbead7cf50f0..4d2abdd3cd3b1 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1291,7 +1291,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 			goto alloc_skb;
+ 		}
+ 
+-		must_collapse = (info->size_goal - skb->len > 0) &&
++		must_collapse = (info->size_goal > skb->len) &&
+ 				(skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags);
+ 		if (must_collapse) {
+ 			size_bias = skb->len;
+@@ -1300,7 +1300,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	}
+ 
+ alloc_skb:
+-	if (!must_collapse && !ssk->sk_tx_skb_cache &&
++	if (!must_collapse &&
+ 	    !mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held))
+ 		return 0;
+ 
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index e286dafd6e886..6ec1ebe878ae0 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -230,7 +230,8 @@ static int smc_clc_prfx_set(struct socket *clcsock,
+ 		goto out_rel;
+ 	}
+ 	/* get address to which the internal TCP socket is bound */
+-	kernel_getsockname(clcsock, (struct sockaddr *)&addrs);
++	if (kernel_getsockname(clcsock, (struct sockaddr *)&addrs) < 0)
++		goto out_rel;
+ 	/* analyze IP specific data of net_device belonging to TCP socket */
+ 	addr6 = (struct sockaddr_in6 *)&addrs;
+ 	rcu_read_lock();
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index c160ff50c053a..116cfd6fac1ff 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1474,7 +1474,9 @@ static void smc_conn_abort_work(struct work_struct *work)
+ 						   abort_work);
+ 	struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ 
++	lock_sock(&smc->sk);
+ 	smc_conn_kill(conn, true);
++	release_sock(&smc->sk);
+ 	sock_put(&smc->sk); /* sock_hold done by schedulers of abort_work */
+ }
+ 
+diff --git a/tools/lib/perf/evsel.c b/tools/lib/perf/evsel.c
+index d8886720e83d8..8441e3e1aaac3 100644
+--- a/tools/lib/perf/evsel.c
++++ b/tools/lib/perf/evsel.c
+@@ -43,7 +43,7 @@ void perf_evsel__delete(struct perf_evsel *evsel)
+ 	free(evsel);
+ }
+ 
+-#define FD(e, x, y) (*(int *) xyarray__entry(e->fd, x, y))
++#define FD(e, x, y) ((int *) xyarray__entry(e->fd, x, y))
+ #define MMAP(e, x, y) (e->mmap ? ((struct perf_mmap *) xyarray__entry(e->mmap, x, y)) : NULL)
+ 
+ int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
+@@ -54,7 +54,10 @@ int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
+ 		int cpu, thread;
+ 		for (cpu = 0; cpu < ncpus; cpu++) {
+ 			for (thread = 0; thread < nthreads; thread++) {
+-				FD(evsel, cpu, thread) = -1;
++				int *fd = FD(evsel, cpu, thread);
++
++				if (fd)
++					*fd = -1;
+ 			}
+ 		}
+ 	}
+@@ -80,7 +83,7 @@ sys_perf_event_open(struct perf_event_attr *attr,
+ static int get_group_fd(struct perf_evsel *evsel, int cpu, int thread, int *group_fd)
+ {
+ 	struct perf_evsel *leader = evsel->leader;
+-	int fd;
++	int *fd;
+ 
+ 	if (evsel == leader) {
+ 		*group_fd = -1;
+@@ -95,10 +98,10 @@ static int get_group_fd(struct perf_evsel *evsel, int cpu, int thread, int *grou
+ 		return -ENOTCONN;
+ 
+ 	fd = FD(leader, cpu, thread);
+-	if (fd == -1)
++	if (fd == NULL || *fd == -1)
+ 		return -EBADF;
+ 
+-	*group_fd = fd;
++	*group_fd = *fd;
+ 
+ 	return 0;
+ }
+@@ -138,7 +141,11 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
+ 
+ 	for (cpu = 0; cpu < cpus->nr; cpu++) {
+ 		for (thread = 0; thread < threads->nr; thread++) {
+-			int fd, group_fd;
++			int fd, group_fd, *evsel_fd;
++
++			evsel_fd = FD(evsel, cpu, thread);
++			if (evsel_fd == NULL)
++				return -EINVAL;
+ 
+ 			err = get_group_fd(evsel, cpu, thread, &group_fd);
+ 			if (err < 0)
+@@ -151,7 +158,7 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
+ 			if (fd < 0)
+ 				return -errno;
+ 
+-			FD(evsel, cpu, thread) = fd;
++			*evsel_fd = fd;
+ 		}
+ 	}
+ 
+@@ -163,9 +170,12 @@ static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
+ 	int thread;
+ 
+ 	for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
+-		if (FD(evsel, cpu, thread) >= 0)
+-			close(FD(evsel, cpu, thread));
+-		FD(evsel, cpu, thread) = -1;
++		int *fd = FD(evsel, cpu, thread);
++
++		if (fd && *fd >= 0) {
++			close(*fd);
++			*fd = -1;
++		}
+ 	}
+ }
+ 
+@@ -209,13 +219,12 @@ void perf_evsel__munmap(struct perf_evsel *evsel)
+ 
+ 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
+ 		for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
+-			int fd = FD(evsel, cpu, thread);
+-			struct perf_mmap *map = MMAP(evsel, cpu, thread);
++			int *fd = FD(evsel, cpu, thread);
+ 
+-			if (fd < 0)
++			if (fd == NULL || *fd < 0)
+ 				continue;
+ 
+-			perf_mmap__munmap(map);
++			perf_mmap__munmap(MMAP(evsel, cpu, thread));
+ 		}
+ 	}
+ 
+@@ -239,15 +248,16 @@ int perf_evsel__mmap(struct perf_evsel *evsel, int pages)
+ 
+ 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
+ 		for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
+-			int fd = FD(evsel, cpu, thread);
+-			struct perf_mmap *map = MMAP(evsel, cpu, thread);
++			int *fd = FD(evsel, cpu, thread);
++			struct perf_mmap *map;
+ 
+-			if (fd < 0)
++			if (fd == NULL || *fd < 0)
+ 				continue;
+ 
++			map = MMAP(evsel, cpu, thread);
+ 			perf_mmap__init(map, NULL, false, NULL);
+ 
+-			ret = perf_mmap__mmap(map, &mp, fd, cpu);
++			ret = perf_mmap__mmap(map, &mp, *fd, cpu);
+ 			if (ret) {
+ 				perf_evsel__munmap(evsel);
+ 				return ret;
+@@ -260,7 +270,9 @@ int perf_evsel__mmap(struct perf_evsel *evsel, int pages)
+ 
+ void *perf_evsel__mmap_base(struct perf_evsel *evsel, int cpu, int thread)
+ {
+-	if (FD(evsel, cpu, thread) < 0 || MMAP(evsel, cpu, thread) == NULL)
++	int *fd = FD(evsel, cpu, thread);
++
++	if (fd == NULL || *fd < 0 || MMAP(evsel, cpu, thread) == NULL)
+ 		return NULL;
+ 
+ 	return MMAP(evsel, cpu, thread)->base;
+@@ -295,17 +307,18 @@ int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
+ 		     struct perf_counts_values *count)
+ {
+ 	size_t size = perf_evsel__read_size(evsel);
++	int *fd = FD(evsel, cpu, thread);
+ 
+ 	memset(count, 0, sizeof(*count));
+ 
+-	if (FD(evsel, cpu, thread) < 0)
++	if (fd == NULL || *fd < 0)
+ 		return -EINVAL;
+ 
+ 	if (MMAP(evsel, cpu, thread) &&
+ 	    !perf_mmap__read_self(MMAP(evsel, cpu, thread), count))
+ 		return 0;
+ 
+-	if (readn(FD(evsel, cpu, thread), count->values, size) <= 0)
++	if (readn(*fd, count->values, size) <= 0)
+ 		return -errno;
+ 
+ 	return 0;
+@@ -318,8 +331,13 @@ static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
+ 	int thread;
+ 
+ 	for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
+-		int fd = FD(evsel, cpu, thread),
+-		    err = ioctl(fd, ioc, arg);
++		int err;
++		int *fd = FD(evsel, cpu, thread);
++
++		if (fd == NULL || *fd < 0)
++			return -1;
++
++		err = ioctl(*fd, ioc, arg);
+ 
+ 		if (err)
+ 			return err;
+diff --git a/tools/testing/selftests/arm64/signal/test_signals.h b/tools/testing/selftests/arm64/signal/test_signals.h
+index f96baf1cef1a9..ebe8694dbef0f 100644
+--- a/tools/testing/selftests/arm64/signal/test_signals.h
++++ b/tools/testing/selftests/arm64/signal/test_signals.h
+@@ -33,10 +33,12 @@
+  */
+ enum {
+ 	FSSBS_BIT,
++	FSVE_BIT,
+ 	FMAX_END
+ };
+ 
+ #define FEAT_SSBS		(1UL << FSSBS_BIT)
++#define FEAT_SVE		(1UL << FSVE_BIT)
+ 
+ /*
+  * A descriptor used to describe and configure a test case.
+diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c
+index 2de6e5ed5e258..22722abc9dfa9 100644
+--- a/tools/testing/selftests/arm64/signal/test_signals_utils.c
++++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c
+@@ -26,6 +26,7 @@ static int sig_copyctx = SIGTRAP;
+ 
+ static char const *const feats_names[FMAX_END] = {
+ 	" SSBS ",
++	" SVE ",
+ };
+ 
+ #define MAX_FEATS_SZ	128
+@@ -263,16 +264,21 @@ int test_init(struct tdescr *td)
+ 		 */
+ 		if (getauxval(AT_HWCAP) & HWCAP_SSBS)
+ 			td->feats_supported |= FEAT_SSBS;
+-		if (feats_ok(td))
++		if (getauxval(AT_HWCAP) & HWCAP_SVE)
++			td->feats_supported |= FEAT_SVE;
++		if (feats_ok(td)) {
+ 			fprintf(stderr,
+ 				"Required Features: [%s] supported\n",
+ 				feats_to_string(td->feats_required &
+ 						td->feats_supported));
+-		else
++		} else {
+ 			fprintf(stderr,
+ 				"Required Features: [%s] NOT supported\n",
+ 				feats_to_string(td->feats_required &
+ 						~td->feats_supported));
++			td->result = KSFT_SKIP;
++			return 0;
++		}
+ 	}
+ 
+ 	/* Perform test specific additional initialization */


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-03 19:14 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-03 19:14 UTC (permalink / raw
  To: gentoo-commits

commit:     3840a675683c2df1aea2f9efed23617ce7eb9e01
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct  3 19:14:20 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct  3 19:14:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3840a675

Upgrade BMQ and PDS io scheduler to version v5.14-r3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   2 +-
 ...=> 5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch | 284 ++++++++++-----------
 2 files changed, 142 insertions(+), 144 deletions(-)

diff --git a/0000_README b/0000_README
index 21444f8..2d15afd 100644
--- a/0000_README
+++ b/0000_README
@@ -115,7 +115,7 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
-Patch:  5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
+Patch:  5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
 From:   https://gitlab.com/alfredchen/linux-prjc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
 

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
similarity index 98%
rename from 5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
index 4c6f75c..99adff7 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v5.14-r1.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
@@ -341,6 +341,20 @@ index e5af028c08b4..0a7565d0d3cf 100644
  	return false;
  }
  
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 8f0f778b7c91..991f2280475b 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -225,7 +225,8 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
 diff --git a/init/Kconfig b/init/Kconfig
 index 55f9f7738ebb..9a9b244d3ca3 100644
 --- a/init/Kconfig
@@ -659,10 +673,10 @@ index 978fcfca5871..0425ee149b4d 100644
  obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
 diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
 new file mode 100644
-index 000000000000..900889c838ea
+index 000000000000..56aed2b1e42c
 --- /dev/null
 +++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7248 @@
+@@ -0,0 +1,7341 @@
 +/*
 + *  kernel/sched/alt_core.c
 + *
@@ -732,7 +746,7 @@ index 000000000000..900889c838ea
 +#define sched_feat(x)	(0)
 +#endif /* CONFIG_SCHED_DEBUG */
 +
-+#define ALT_SCHED_VERSION "v5.14-r1"
++#define ALT_SCHED_VERSION "v5.14-r3"
 +
 +/* rt_prio(prio) defined in include/linux/sched/rt.h */
 +#define rt_task(p)		rt_prio((p)->prio)
@@ -1249,6 +1263,101 @@ index 000000000000..900889c838ea
 +	update_rq_clock_task(rq, delta);
 +}
 +
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
++			RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!cpu_rq(rq->cpu)->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu, unsigned long max)
++{
++	return rq_load_util(cpu_rq(cpu), max);
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
 +#ifdef CONFIG_NO_HZ_FULL
 +/*
 + * Tick may be needed by tasks in the runqueue depending on their policy and
@@ -4038,6 +4147,7 @@ index 000000000000..900889c838ea
 +	s64 ns = rq->clock_task - p->last_ran;
 +
 +	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
 +	account_group_exec_runtime(p, ns);
 +
 +	p->time_slice -= ns;
@@ -4600,6 +4710,7 @@ index 000000000000..900889c838ea
 +		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
 +			__SCHED_DEQUEUE_TASK(p, rq, 0, );
 +			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
 +			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
 +			nr_migrated++;
 +		}
@@ -5753,11 +5864,7 @@ index 000000000000..900889c838ea
 +		 * the runqueue. This will be done when the task deboost
 +		 * itself.
 +		 */
-+		if (rt_effective_prio(p, newprio) == p->prio) {
-+			__setscheduler_params(p, attr);
-+			retval = 0;
-+			goto unlock;
-+		}
++		newprio = rt_effective_prio(p, newprio);
 +	}
 +
 +	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
@@ -6969,7 +7076,6 @@ index 000000000000..900889c838ea
 +	struct task_struct *push_task = rq->curr;
 +
 +	lockdep_assert_held(&rq->lock);
-+	SCHED_WARN_ON(rq->cpu != smp_processor_id());
 +
 +	/*
 +	 * Ensure the thing is persistent until balance_push_set(.on = false);
@@ -6977,9 +7083,10 @@ index 000000000000..900889c838ea
 +	rq->balance_callback = &balance_push_callback;
 +
 +	/*
-+	 * Only active while going offline.
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
 +	 */
-+	if (!cpu_dying(rq->cpu))
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
 +		return;
 +
 +	/*
@@ -7950,10 +8057,10 @@ index 000000000000..1212a031700e
 +{}
 diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
 new file mode 100644
-index 000000000000..f03af9ab9123
+index 000000000000..289058a09bd5
 --- /dev/null
 +++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,692 @@
+@@ -0,0 +1,666 @@
 +#ifndef ALT_SCHED_H
 +#define ALT_SCHED_H
 +
@@ -8153,6 +8260,7 @@ index 000000000000..f03af9ab9123
 +	struct rcuwait		hotplug_wait;
 +#endif
 +	unsigned int		nr_pinned;
++
 +#endif /* CONFIG_SMP */
 +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
 +	u64 prev_irq_time;
@@ -8164,6 +8272,11 @@ index 000000000000..f03af9ab9123
 +	u64 prev_steal_time_rq;
 +#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
 +
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
 +	/* calc_load related fields */
 +	unsigned long calc_load_update;
 +	long calc_load_active;
@@ -8216,6 +8329,8 @@ index 000000000000..f03af9ab9123
 +#endif /* CONFIG_NO_HZ_COMMON */
 +};
 +
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
 +extern unsigned long calc_load_update;
 +extern atomic_long_t calc_load_tasks;
 +
@@ -8528,40 +8643,6 @@ index 000000000000..f03af9ab9123
 +
 +#ifdef CONFIG_CPU_FREQ
 +DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because they may not be coming in if only RT tasks are
-+ * active all the time (or there are RT tasks only).
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid.  Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+	struct update_util_data *data;
-+
-+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
-+						  cpu_of(rq)));
-+	if (data)
-+		data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 +#endif /* CONFIG_CPU_FREQ */
 +
 +#ifdef CONFIG_NO_HZ_FULL
@@ -8764,88 +8845,25 @@ index 000000000000..be3ee4a553ca
 +
 +static inline void update_rq_time_edge(struct rq *rq) {}
 diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
-index 57124614363d..4057e51cef45 100644
+index 57124614363d..f0e9c7543542 100644
 --- a/kernel/sched/cpufreq_schedutil.c
 +++ b/kernel/sched/cpufreq_schedutil.c
-@@ -57,6 +57,13 @@ struct sugov_cpu {
- 	unsigned long		bw_dl;
- 	unsigned long		max;
- 
-+#ifdef CONFIG_SCHED_ALT
-+	/* For genenal cpu load util */
-+	s32			load_history;
-+	u64			load_block;
-+	u64			load_stamp;
-+#endif
-+
- 	/* The field below is for single-CPU policies only: */
- #ifdef CONFIG_NO_HZ_COMMON
- 	unsigned long		saved_idle_calls;
-@@ -161,6 +168,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
- 	return cpufreq_driver_resolve_freq(policy, freq);
- }
+@@ -167,9 +167,14 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ 	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
  
+ 	sg_cpu->max = max;
 +#ifndef CONFIG_SCHED_ALT
- static void sugov_get_util(struct sugov_cpu *sg_cpu)
- {
- 	struct rq *rq = cpu_rq(sg_cpu->cpu);
-@@ -172,6 +180,55 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ 	sg_cpu->bw_dl = cpu_bw_dl(rq);
+ 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(rq), max,
  					  FREQUENCY_UTIL, NULL);
- }
- 
-+#else /* CONFIG_SCHED_ALT */
-+
-+#define SG_CPU_LOAD_HISTORY_BITS	(sizeof(s32) * 8ULL)
-+#define SG_CPU_UTIL_SHIFT		(8)
-+#define SG_CPU_LOAD_HISTORY_SHIFT	(SG_CPU_LOAD_HISTORY_BITS - 1 - SG_CPU_UTIL_SHIFT)
-+#define SG_CPU_LOAD_HISTORY_TO_UTIL(l)	(((l) >> SG_CPU_LOAD_HISTORY_SHIFT) & 0xff)
-+
-+#define LOAD_BLOCK(t)		((t) >> 17)
-+#define LOAD_HALF_BLOCK(t)	((t) >> 16)
-+#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b)	(1UL << (SG_CPU_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
-+
-+static void sugov_get_util(struct sugov_cpu *sg_cpu)
-+{
-+	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
-+
-+	sg_cpu->max = max;
++#else
 +	sg_cpu->bw_dl = 0;
-+	sg_cpu->util = SG_CPU_LOAD_HISTORY_TO_UTIL(sg_cpu->load_history) *
-+		(max >> SG_CPU_UTIL_SHIFT);
-+}
-+
-+static inline void sugov_cpu_load_update(struct sugov_cpu *sg_cpu, u64 time)
-+{
-+	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(sg_cpu->load_stamp),
-+			SG_CPU_LOAD_HISTORY_BITS - 1);
-+	u64 prev = !!(sg_cpu->load_history & CURRENT_LOAD_BIT);
-+	u64 curr = !!cpu_rq(sg_cpu->cpu)->nr_running;
-+
-+	if (delta) {
-+		sg_cpu->load_history = sg_cpu->load_history >> delta;
-+
-+		if (delta <= SG_CPU_UTIL_SHIFT) {
-+			sg_cpu->load_block += (~BLOCK_MASK(sg_cpu->load_stamp)) * prev;
-+			if (!!LOAD_HALF_BLOCK(sg_cpu->load_block) ^ curr)
-+				sg_cpu->load_history ^= LOAD_BLOCK_BIT(delta);
-+		}
-+
-+		sg_cpu->load_block = BLOCK_MASK(time) * prev;
-+	} else {
-+		sg_cpu->load_block += (time - sg_cpu->load_stamp) * prev;
-+	}
-+	if (prev ^ curr)
-+		sg_cpu->load_history ^= CURRENT_LOAD_BIT;
-+	sg_cpu->load_stamp = time;
-+}
++	sg_cpu->util = rq_load_util(rq, max);
 +#endif /* CONFIG_SCHED_ALT */
-+
+ }
+ 
  /**
-  * sugov_iowait_reset() - Reset the IO boost status of a CPU.
-  * @sg_cpu: the sugov data for the CPU to boost
-@@ -312,13 +369,19 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+@@ -312,8 +317,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
   */
  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
  {
@@ -8856,27 +8874,7 @@ index 57124614363d..4057e51cef45 100644
  }
  
  static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
- 					      u64 time, unsigned int flags)
- {
-+#ifdef CONFIG_SCHED_ALT
-+	sugov_cpu_load_update(sg_cpu, time);
-+#endif /* CONFIG_SCHED_ALT */
-+
- 	sugov_iowait_boost(sg_cpu, time, flags);
- 	sg_cpu->last_update = time;
- 
-@@ -439,6 +502,10 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
- 
- 	raw_spin_lock(&sg_policy->update_lock);
- 
-+#ifdef CONFIG_SCHED_ALT
-+	sugov_cpu_load_update(sg_cpu, time);
-+#endif /* CONFIG_SCHED_ALT */
-+
- 	sugov_iowait_boost(sg_cpu, time, flags);
- 	sg_cpu->last_update = time;
- 
-@@ -599,6 +666,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+@@ -599,6 +606,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
  	}
  
  	ret = sched_setattr_nocheck(thread, &attr);
@@ -8884,7 +8882,7 @@ index 57124614363d..4057e51cef45 100644
  	if (ret) {
  		kthread_stop(thread);
  		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
-@@ -833,7 +901,9 @@ cpufreq_governor_init(schedutil_gov);
+@@ -833,7 +841,9 @@ cpufreq_governor_init(schedutil_gov);
  #ifdef CONFIG_ENERGY_MODEL
  static void rebuild_sd_workfn(struct work_struct *work)
  {


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-07 10:36 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-07 10:36 UTC (permalink / raw
  To: gentoo-commits

commit:     efd47c3e12b1d6d48aee11e5dd709dd719a3a0e5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  7 10:36:26 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct  7 10:36:26 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=efd47c3e

Linux patch 5.14.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-5.14.10.patch | 6835 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6839 insertions(+)

diff --git a/0000_README b/0000_README
index 2d15afd..11074a3 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1008_linux-5.14.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.9
 
+Patch:  1009_linux-5.14.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.14.10.patch b/1009_linux-5.14.10.patch
new file mode 100644
index 0000000..3a2fa0e
--- /dev/null
+++ b/1009_linux-5.14.10.patch
@@ -0,0 +1,6835 @@
+diff --git a/Makefile b/Makefile
+index 50c17e63c54ef..9f99a61d2589b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/m68k/kernel/entry.S b/arch/m68k/kernel/entry.S
+index 9dd76fbb7c6b2..ff9e842cec0fb 100644
+--- a/arch/m68k/kernel/entry.S
++++ b/arch/m68k/kernel/entry.S
+@@ -186,6 +186,8 @@ ENTRY(ret_from_signal)
+ 	movel	%curptr@(TASK_STACK),%a1
+ 	tstb	%a1@(TINFO_FLAGS+2)
+ 	jge	1f
++	lea	%sp@(SWITCH_STACK_SIZE),%a1
++	movel	%a1,%curptr@(TASK_THREAD+THREAD_ESP0)
+ 	jbsr	syscall_trace
+ 1:	RESTORE_SWITCH_STACK
+ 	addql	#4,%sp
+diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
+index 0af88622c6192..cb6d22439f71b 100644
+--- a/arch/mips/net/bpf_jit.c
++++ b/arch/mips/net/bpf_jit.c
+@@ -662,6 +662,11 @@ static void build_epilogue(struct jit_ctx *ctx)
+ 	((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative : func) : \
+ 	 func##_positive)
+ 
++static bool is_bad_offset(int b_off)
++{
++	return b_off > 0x1ffff || b_off < -0x20000;
++}
++
+ static int build_body(struct jit_ctx *ctx)
+ {
+ 	const struct bpf_prog *prog = ctx->skf;
+@@ -728,7 +733,10 @@ load_common:
+ 			/* Load return register on DS for failures */
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			/* Return with error */
+-			emit_b(b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_b(b_off, ctx);
+ 			emit_nop(ctx);
+ 			break;
+ 		case BPF_LD | BPF_W | BPF_IND:
+@@ -775,8 +783,10 @@ load_ind:
+ 			emit_jalr(MIPS_R_RA, r_s0, ctx);
+ 			emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */
+ 			/* Check the error value */
+-			emit_bcond(MIPS_COND_NE, r_ret, 0,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_NE, r_ret, 0, b_off, ctx);
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			/* We are good */
+ 			/* X <- P[1:K] & 0xf */
+@@ -855,8 +865,10 @@ load_ind:
+ 			/* A /= X */
+ 			ctx->flags |= SEEN_X | SEEN_A;
+ 			/* Check if r_X is zero */
+-			emit_bcond(MIPS_COND_EQ, r_X, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
+ 			emit_load_imm(r_ret, 0, ctx); /* delay slot */
+ 			emit_div(r_A, r_X, ctx);
+ 			break;
+@@ -864,8 +876,10 @@ load_ind:
+ 			/* A %= X */
+ 			ctx->flags |= SEEN_X | SEEN_A;
+ 			/* Check if r_X is zero */
+-			emit_bcond(MIPS_COND_EQ, r_X, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
+ 			emit_load_imm(r_ret, 0, ctx); /* delay slot */
+ 			emit_mod(r_A, r_X, ctx);
+ 			break;
+@@ -926,7 +940,10 @@ load_ind:
+ 			break;
+ 		case BPF_JMP | BPF_JA:
+ 			/* pc += K */
+-			emit_b(b_imm(i + k + 1, ctx), ctx);
++			b_off = b_imm(i + k + 1, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_b(b_off, ctx);
+ 			emit_nop(ctx);
+ 			break;
+ 		case BPF_JMP | BPF_JEQ | BPF_K:
+@@ -1056,12 +1073,16 @@ jmp_cmp:
+ 			break;
+ 		case BPF_RET | BPF_A:
+ 			ctx->flags |= SEEN_A;
+-			if (i != prog->len - 1)
++			if (i != prog->len - 1) {
+ 				/*
+ 				 * If this is not the last instruction
+ 				 * then jump to the epilogue
+ 				 */
+-				emit_b(b_imm(prog->len, ctx), ctx);
++				b_off = b_imm(prog->len, ctx);
++				if (is_bad_offset(b_off))
++					return -E2BIG;
++				emit_b(b_off, ctx);
++			}
+ 			emit_reg_move(r_ret, r_A, ctx); /* delay slot */
+ 			break;
+ 		case BPF_RET | BPF_K:
+@@ -1075,7 +1096,10 @@ jmp_cmp:
+ 				 * If this is not the last instruction
+ 				 * then jump to the epilogue
+ 				 */
+-				emit_b(b_imm(prog->len, ctx), ctx);
++				b_off = b_imm(prog->len, ctx);
++				if (is_bad_offset(b_off))
++					return -E2BIG;
++				emit_b(b_off, ctx);
+ 				emit_nop(ctx);
+ 			}
+ 			break;
+@@ -1133,8 +1157,10 @@ jmp_cmp:
+ 			/* Load *dev pointer */
+ 			emit_load_ptr(r_s0, r_skb, off, ctx);
+ 			/* error (0) in the delay slot */
+-			emit_bcond(MIPS_COND_EQ, r_s0, r_zero,
+-				   b_imm(prog->len, ctx), ctx);
++			b_off = b_imm(prog->len, ctx);
++			if (is_bad_offset(b_off))
++				return -E2BIG;
++			emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off, ctx);
+ 			emit_reg_move(r_ret, r_zero, ctx);
+ 			if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
+ 				BUILD_BUG_ON(sizeof_field(struct net_device, ifindex) != 4);
+@@ -1244,7 +1270,10 @@ void bpf_jit_compile(struct bpf_prog *fp)
+ 
+ 	/* Generate the actual JIT code */
+ 	build_prologue(&ctx);
+-	build_body(&ctx);
++	if (build_body(&ctx)) {
++		module_memfree(ctx.target);
++		goto out;
++	}
+ 	build_epilogue(&ctx);
+ 
+ 	/* Update the icache */
+diff --git a/arch/nios2/Kconfig.debug b/arch/nios2/Kconfig.debug
+index a8bc06e96ef58..ca1beb87f987c 100644
+--- a/arch/nios2/Kconfig.debug
++++ b/arch/nios2/Kconfig.debug
+@@ -3,9 +3,10 @@
+ config EARLY_PRINTK
+ 	bool "Activate early kernel debugging"
+ 	default y
++	depends on TTY
+ 	select SERIAL_CORE_CONSOLE
+ 	help
+-	  Enable early printk on console
++	  Enable early printk on console.
+ 	  This is useful for kernel debugging when your machine crashes very
+ 	  early before the console code is initialized.
+ 	  You should normally say N here, unless you want to debug such a crash.
+diff --git a/arch/nios2/kernel/setup.c b/arch/nios2/kernel/setup.c
+index cf8d687a2644a..40bc8fb75e0b5 100644
+--- a/arch/nios2/kernel/setup.c
++++ b/arch/nios2/kernel/setup.c
+@@ -149,8 +149,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
+ 
+ void __init setup_arch(char **cmdline_p)
+ {
+-	int dram_start;
+-
+ 	console_verbose();
+ 
+ 	memory_start = memblock_start_of_DRAM();
+diff --git a/arch/s390/include/asm/ccwgroup.h b/arch/s390/include/asm/ccwgroup.h
+index 20f169b6db4ec..d97301d9d0b8c 100644
+--- a/arch/s390/include/asm/ccwgroup.h
++++ b/arch/s390/include/asm/ccwgroup.h
+@@ -57,7 +57,7 @@ struct ccwgroup_device *get_ccwgroupdev_by_busid(struct ccwgroup_driver *gdrv,
+ 						 char *bus_id);
+ 
+ extern int ccwgroup_set_online(struct ccwgroup_device *gdev);
+-extern int ccwgroup_set_offline(struct ccwgroup_device *gdev);
++int ccwgroup_set_offline(struct ccwgroup_device *gdev, bool call_gdrv);
+ 
+ extern int ccwgroup_probe_ccwdev(struct ccw_device *cdev);
+ extern void ccwgroup_remove_ccwdev(struct ccw_device *cdev);
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 388643ca2177e..0fc961bef299c 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -849,7 +849,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		return -EINVAL;
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
+-	if (err)
++	if (!walk.nbytes)
+ 		return err;
+ 
+ 	if (unlikely(tail > 0 && walk.nbytes < walk.total)) {
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index ac6fd2dabf6a2..482224444a1ee 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -263,6 +263,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xa8, 0xb0, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xb7, 0xbd, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xe6, 0xf),
++	INTEL_EVENT_CONSTRAINT(0xef, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0xf0, 0xf4, 0xf),
+ 	EVENT_CONSTRAINT_END
+ };
+diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
+index 87bd6025d91d4..6a5f3acf2b331 100644
+--- a/arch/x86/include/asm/kvm_page_track.h
++++ b/arch/x86/include/asm/kvm_page_track.h
+@@ -46,7 +46,7 @@ struct kvm_page_track_notifier_node {
+ 			    struct kvm_page_track_notifier_node *node);
+ };
+ 
+-void kvm_page_track_init(struct kvm *kvm);
++int kvm_page_track_init(struct kvm *kvm);
+ void kvm_page_track_cleanup(struct kvm *kvm);
+ 
+ void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
+diff --git a/arch/x86/include/asm/kvmclock.h b/arch/x86/include/asm/kvmclock.h
+index eceea92990974..6c57651921028 100644
+--- a/arch/x86/include/asm/kvmclock.h
++++ b/arch/x86/include/asm/kvmclock.h
+@@ -2,6 +2,20 @@
+ #ifndef _ASM_X86_KVM_CLOCK_H
+ #define _ASM_X86_KVM_CLOCK_H
+ 
++#include <linux/percpu.h>
++
+ extern struct clocksource kvm_clock;
+ 
++DECLARE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
++
++static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
++{
++	return &this_cpu_read(hv_clock_per_cpu)->pvti;
++}
++
++static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
++{
++	return this_cpu_read(hv_clock_per_cpu);
++}
++
+ #endif /* _ASM_X86_KVM_CLOCK_H */
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index ad273e5861c1b..73c74b961d0fd 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -49,18 +49,9 @@ early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
+ static struct pvclock_vsyscall_time_info
+ 			hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
+ static struct pvclock_wall_clock wall_clock __bss_decrypted;
+-static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+ static struct pvclock_vsyscall_time_info *hvclock_mem;
+-
+-static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
+-{
+-	return &this_cpu_read(hv_clock_per_cpu)->pvti;
+-}
+-
+-static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
+-{
+-	return this_cpu_read(hv_clock_per_cpu);
+-}
++DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
++EXPORT_PER_CPU_SYMBOL_GPL(hv_clock_per_cpu);
+ 
+ /*
+  * The wallclock is the time of day when we booted. Since then, some time may
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index fe03bd978761e..751aa85a30012 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -65,8 +65,8 @@ static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
+ 	for (i = 0; i < nent; i++) {
+ 		e = &entries[i];
+ 
+-		if (e->function == function && (e->index == index ||
+-		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
++		if (e->function == function &&
++		    (!(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) || e->index == index))
+ 			return e;
+ 	}
+ 
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 2837110e66eda..50050d06672b8 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -435,7 +435,6 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ 	__FOP_RET(#op)
+ 
+ asm(".pushsection .fixup, \"ax\"\n"
+-    ".global kvm_fastop_exception \n"
+     "kvm_fastop_exception: xor %esi, %esi; ret\n"
+     ".popsection");
+ 
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index ff005fe738a4c..8c065da73f8e5 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -319,8 +319,8 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 	unsigned index;
+ 	bool mask_before, mask_after;
+ 	union kvm_ioapic_redirect_entry *e;
+-	unsigned long vcpu_bitmap;
+ 	int old_remote_irr, old_delivery_status, old_dest_id, old_dest_mode;
++	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);
+ 
+ 	switch (ioapic->ioregsel) {
+ 	case IOAPIC_REG_VERSION:
+@@ -384,9 +384,9 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 			irq.shorthand = APIC_DEST_NOSHORT;
+ 			irq.dest_id = e->fields.dest_id;
+ 			irq.msi_redir_hint = false;
+-			bitmap_zero(&vcpu_bitmap, 16);
++			bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
+ 			kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
+-						 &vcpu_bitmap);
++						 vcpu_bitmap);
+ 			if (old_dest_mode != e->fields.dest_mode ||
+ 			    old_dest_id != e->fields.dest_id) {
+ 				/*
+@@ -399,10 +399,10 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
+ 				    kvm_lapic_irq_dest_mode(
+ 					!!e->fields.dest_mode);
+ 				kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
+-							 &vcpu_bitmap);
++							 vcpu_bitmap);
+ 			}
+ 			kvm_make_scan_ioapic_request_mask(ioapic->kvm,
+-							  &vcpu_bitmap);
++							  vcpu_bitmap);
+ 		} else {
+ 			kvm_make_scan_ioapic_request(ioapic->kvm);
+ 		}
+diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
+index 91a9f7e0fd914..68e67228101de 100644
+--- a/arch/x86/kvm/mmu/page_track.c
++++ b/arch/x86/kvm/mmu/page_track.c
+@@ -163,13 +163,13 @@ void kvm_page_track_cleanup(struct kvm *kvm)
+ 	cleanup_srcu_struct(&head->track_srcu);
+ }
+ 
+-void kvm_page_track_init(struct kvm *kvm)
++int kvm_page_track_init(struct kvm *kvm)
+ {
+ 	struct kvm_page_track_notifier_head *head;
+ 
+ 	head = &kvm->arch.track_notifier_head;
+-	init_srcu_struct(&head->track_srcu);
+ 	INIT_HLIST_HEAD(&head->track_notifier_list);
++	return init_srcu_struct(&head->track_srcu);
+ }
+ 
+ /*
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index e5515477c30a6..700bc241cee18 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -545,7 +545,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
+ 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
+ 		(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);
+ 
+-	svm->vmcb->control.virt_ext            = svm->nested.ctl.virt_ext;
+ 	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
+ 	svm->vmcb->control.int_state           = svm->nested.ctl.int_state;
+ 	svm->vmcb->control.event_inj           = svm->nested.ctl.event_inj;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 7fbce342eec47..cb166bde449bd 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -596,43 +596,50 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
+ 	return 0;
+ }
+ 
+-static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
++static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
++				    int *error)
+ {
+-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 	struct sev_data_launch_update_vmsa vmsa;
++	struct vcpu_svm *svm = to_svm(vcpu);
++	int ret;
++
++	/* Perform some pre-encryption checks against the VMSA */
++	ret = sev_es_sync_vmsa(svm);
++	if (ret)
++		return ret;
++
++	/*
++	 * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
++	 * the VMSA memory content (i.e it will write the same memory region
++	 * with the guest's key), so invalidate it first.
++	 */
++	clflush_cache_range(svm->vmsa, PAGE_SIZE);
++
++	vmsa.reserved = 0;
++	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
++	vmsa.address = __sme_pa(svm->vmsa);
++	vmsa.len = PAGE_SIZE;
++	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
++}
++
++static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
++{
+ 	struct kvm_vcpu *vcpu;
+ 	int i, ret;
+ 
+ 	if (!sev_es_guest(kvm))
+ 		return -ENOTTY;
+ 
+-	vmsa.reserved = 0;
+-
+ 	kvm_for_each_vcpu(i, vcpu, kvm) {
+-		struct vcpu_svm *svm = to_svm(vcpu);
+-
+-		/* Perform some pre-encryption checks against the VMSA */
+-		ret = sev_es_sync_vmsa(svm);
++		ret = mutex_lock_killable(&vcpu->mutex);
+ 		if (ret)
+ 			return ret;
+ 
+-		/*
+-		 * The LAUNCH_UPDATE_VMSA command will perform in-place
+-		 * encryption of the VMSA memory content (i.e it will write
+-		 * the same memory region with the guest's key), so invalidate
+-		 * it first.
+-		 */
+-		clflush_cache_range(svm->vmsa, PAGE_SIZE);
++		ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
+ 
+-		vmsa.handle = sev->handle;
+-		vmsa.address = __sme_pa(svm->vmsa);
+-		vmsa.len = PAGE_SIZE;
+-		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
+-				    &argp->error);
++		mutex_unlock(&vcpu->mutex);
+ 		if (ret)
+ 			return ret;
+-
+-		svm->vcpu.arch.guest_state_protected = true;
+ 	}
+ 
+ 	return 0;
+@@ -1398,8 +1405,10 @@ static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 
+ 	/* Bind ASID to this guest */
+ 	ret = sev_bind_asid(kvm, start.handle, error);
+-	if (ret)
++	if (ret) {
++		sev_decommission(start.handle);
+ 		goto e_free_session;
++	}
+ 
+ 	params.handle = start.handle;
+ 	if (copy_to_user((void __user *)(uintptr_t)argp->data,
+@@ -1465,7 +1474,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 
+ 	/* Pin guest memory */
+ 	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+-				    PAGE_SIZE, &n, 0);
++				    PAGE_SIZE, &n, 1);
+ 	if (IS_ERR(guest_page)) {
+ 		ret = PTR_ERR(guest_page);
+ 		goto e_free_trans;
+@@ -1502,6 +1511,20 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
+ }
+ 
++static bool cmd_allowed_from_miror(u32 cmd_id)
++{
++	/*
++	 * Allow mirrors VM to call KVM_SEV_LAUNCH_UPDATE_VMSA to enable SEV-ES
++	 * active mirror VMs. Also allow the debugging and status commands.
++	 */
++	if (cmd_id == KVM_SEV_LAUNCH_UPDATE_VMSA ||
++	    cmd_id == KVM_SEV_GUEST_STATUS || cmd_id == KVM_SEV_DBG_DECRYPT ||
++	    cmd_id == KVM_SEV_DBG_ENCRYPT)
++		return true;
++
++	return false;
++}
++
+ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
+ {
+ 	struct kvm_sev_cmd sev_cmd;
+@@ -1518,8 +1541,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
+ 
+ 	mutex_lock(&kvm->lock);
+ 
+-	/* enc_context_owner handles all memory enc operations */
+-	if (is_mirroring_enc_context(kvm)) {
++	/* Only the enc_context_owner handles some memory enc operations. */
++	if (is_mirroring_enc_context(kvm) &&
++	    !cmd_allowed_from_miror(sev_cmd.id)) {
+ 		r = -EINVAL;
+ 		goto out;
+ 	}
+@@ -1716,8 +1740,7 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
+ {
+ 	struct file *source_kvm_file;
+ 	struct kvm *source_kvm;
+-	struct kvm_sev_info *mirror_sev;
+-	unsigned int asid;
++	struct kvm_sev_info source_sev, *mirror_sev;
+ 	int ret;
+ 
+ 	source_kvm_file = fget(source_fd);
+@@ -1740,7 +1763,8 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
+ 		goto e_source_unlock;
+ 	}
+ 
+-	asid = to_kvm_svm(source_kvm)->sev_info.asid;
++	memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info,
++	       sizeof(source_sev));
+ 
+ 	/*
+ 	 * The mirror kvm holds an enc_context_owner ref so its asid can't
+@@ -1760,8 +1784,16 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
+ 	/* Set enc_context_owner and copy its encryption context over */
+ 	mirror_sev = &to_kvm_svm(kvm)->sev_info;
+ 	mirror_sev->enc_context_owner = source_kvm;
+-	mirror_sev->asid = asid;
+ 	mirror_sev->active = true;
++	mirror_sev->asid = source_sev.asid;
++	mirror_sev->fd = source_sev.fd;
++	mirror_sev->es_active = source_sev.es_active;
++	mirror_sev->handle = source_sev.handle;
++	/*
++	 * Do not copy ap_jump_table. Since the mirror does not share the same
++	 * KVM contexts as the original, and they may have different
++	 * memory-views.
++	 */
+ 
+ 	mutex_unlock(&kvm->lock);
+ 	return 0;
+diff --git a/arch/x86/kvm/vmx/evmcs.c b/arch/x86/kvm/vmx/evmcs.c
+index 896b2a50b4aae..a44e2734ff9b7 100644
+--- a/arch/x86/kvm/vmx/evmcs.c
++++ b/arch/x86/kvm/vmx/evmcs.c
+@@ -354,14 +354,20 @@ void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata)
+ 	switch (msr_index) {
+ 	case MSR_IA32_VMX_EXIT_CTLS:
+ 	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+-		ctl_high &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL;
+ 		break;
+ 	case MSR_IA32_VMX_ENTRY_CTLS:
+ 	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+-		ctl_high &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
+ 		break;
+ 	case MSR_IA32_VMX_PROCBASED_CTLS2:
+-		ctl_high &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
++		ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
++		break;
++	case MSR_IA32_VMX_PINBASED_CTLS:
++		ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
++		break;
++	case MSR_IA32_VMX_VMFUNC:
++		ctl_low &= ~EVMCS1_UNSUPPORTED_VMFUNC;
+ 		break;
+ 	}
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index ac1803dac4357..ce30503f5438f 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5898,6 +5898,12 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ 	case EXIT_REASON_VMFUNC:
+ 		/* VM functions are emulated through L2->L0 vmexits. */
+ 		return true;
++	case EXIT_REASON_BUS_LOCK:
++		/*
++		 * At present, bus lock VM exit is never exposed to L1.
++		 * Handle L2's bus locks in L0 directly.
++		 */
++		return true;
+ 	default:
+ 		break;
+ 	}
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 256f8cab4b8b4..55de1eb135f92 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1840,10 +1840,11 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 				    &msr_info->data))
+ 			return 1;
+ 		/*
+-		 * Enlightened VMCS v1 doesn't have certain fields, but buggy
+-		 * Hyper-V versions are still trying to use corresponding
+-		 * features when they are exposed. Filter out the essential
+-		 * minimum.
++		 * Enlightened VMCS v1 doesn't have certain VMCS fields but
++		 * instead of just ignoring the features, different Hyper-V
++		 * versions are either trying to use them and fail or do some
++		 * sanity checking and refuse to boot. Filter all unsupported
++		 * features out.
+ 		 */
+ 		if (!msr_info->host_initiated &&
+ 		    vmx->nested.enlightened_vmcs_enabled)
+@@ -6815,7 +6816,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 		 */
+ 		tsx_ctrl = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
+ 		if (tsx_ctrl)
+-			vmx->guest_uret_msrs[i].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
++			tsx_ctrl->mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
+ 	}
+ 
+ 	err = alloc_loaded_vmcs(&vmx->vmcs01);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7ec7c2dce5065..6d5d6e93f5c41 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10873,6 +10873,9 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ 
+ 	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
+ 
++	vcpu->arch.cr3 = 0;
++	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
++
+ 	/*
+ 	 * Reset the MMU context if paging was enabled prior to INIT (which is
+ 	 * implied if CR0.PG=1 as CR0 will be '0' prior to RESET).  Unlike the
+@@ -11090,9 +11093,15 @@ void kvm_arch_free_vm(struct kvm *kvm)
+ 
+ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ {
++	int ret;
++
+ 	if (type)
+ 		return -EINVAL;
+ 
++	ret = kvm_page_track_init(kvm);
++	if (ret)
++		return ret;
++
+ 	INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
+ 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+ 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
+@@ -11125,7 +11134,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ 
+ 	kvm_apicv_init(kvm);
+ 	kvm_hv_init_vm(kvm);
+-	kvm_page_track_init(kvm);
+ 	kvm_mmu_init_vm(kvm);
+ 
+ 	return static_call(kvm_x86_vm_init)(kvm);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 16d76f814e9b1..ffcc4d29ad506 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1341,9 +1341,10 @@ st:			if (is_imm8(insn->off))
+ 			if (insn->imm == (BPF_AND | BPF_FETCH) ||
+ 			    insn->imm == (BPF_OR | BPF_FETCH) ||
+ 			    insn->imm == (BPF_XOR | BPF_FETCH)) {
+-				u8 *branch_target;
+ 				bool is64 = BPF_SIZE(insn->code) == BPF_DW;
+ 				u32 real_src_reg = src_reg;
++				u32 real_dst_reg = dst_reg;
++				u8 *branch_target;
+ 
+ 				/*
+ 				 * Can't be implemented with a single x86 insn.
+@@ -1354,11 +1355,13 @@ st:			if (is_imm8(insn->off))
+ 				emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
+ 				if (src_reg == BPF_REG_0)
+ 					real_src_reg = BPF_REG_AX;
++				if (dst_reg == BPF_REG_0)
++					real_dst_reg = BPF_REG_AX;
+ 
+ 				branch_target = prog;
+ 				/* Load old value */
+ 				emit_ldx(&prog, BPF_SIZE(insn->code),
+-					 BPF_REG_0, dst_reg, insn->off);
++					 BPF_REG_0, real_dst_reg, insn->off);
+ 				/*
+ 				 * Perform the (commutative) operation locally,
+ 				 * put the result in the AUX_REG.
+@@ -1369,7 +1372,8 @@ st:			if (is_imm8(insn->off))
+ 				      add_2reg(0xC0, AUX_REG, real_src_reg));
+ 				/* Attempt to swap in new value */
+ 				err = emit_atomic(&prog, BPF_CMPXCHG,
+-						  dst_reg, AUX_REG, insn->off,
++						  real_dst_reg, AUX_REG,
++						  insn->off,
+ 						  BPF_SIZE(insn->code));
+ 				if (WARN_ON(err))
+ 					return err;
+@@ -1383,11 +1387,10 @@ st:			if (is_imm8(insn->off))
+ 				/* Restore R0 after clobbering RAX */
+ 				emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
+ 				break;
+-
+ 			}
+ 
+ 			err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
+-						  insn->off, BPF_SIZE(insn->code));
++					  insn->off, BPF_SIZE(insn->code));
+ 			if (err)
+ 				return err;
+ 			break;
+@@ -1744,7 +1747,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
+ }
+ 
+ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
+-			   struct bpf_prog *p, int stack_size, bool mod_ret)
++			   struct bpf_prog *p, int stack_size, bool save_ret)
+ {
+ 	u8 *prog = *pprog;
+ 	u8 *jmp_insn;
+@@ -1777,11 +1780,15 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
+ 	if (emit_call(&prog, p->bpf_func, prog))
+ 		return -EINVAL;
+ 
+-	/* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
++	/*
++	 * BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
+ 	 * of the previous call which is then passed on the stack to
+ 	 * the next BPF program.
++	 *
++	 * BPF_TRAMP_FENTRY trampoline may need to return the return
++	 * value of BPF_PROG_TYPE_STRUCT_OPS prog.
+ 	 */
+-	if (mod_ret)
++	if (save_ret)
+ 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+ 
+ 	/* replace 2 nops with JE insn, since jmp target is known */
+@@ -1828,13 +1835,15 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
+ }
+ 
+ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
+-		      struct bpf_tramp_progs *tp, int stack_size)
++		      struct bpf_tramp_progs *tp, int stack_size,
++		      bool save_ret)
+ {
+ 	int i;
+ 	u8 *prog = *pprog;
+ 
+ 	for (i = 0; i < tp->nr_progs; i++) {
+-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
++		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size,
++				    save_ret))
+ 			return -EINVAL;
+ 	}
+ 	*pprog = prog;
+@@ -1877,6 +1886,23 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+ 	return 0;
+ }
+ 
++static bool is_valid_bpf_tramp_flags(unsigned int flags)
++{
++	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
++	    (flags & BPF_TRAMP_F_SKIP_FRAME))
++		return false;
++
++	/*
++	 * BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
++	 * and it must be used alone.
++	 */
++	if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) &&
++	    (flags & ~BPF_TRAMP_F_RET_FENTRY_RET))
++		return false;
++
++	return true;
++}
++
+ /* Example:
+  * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
+  * its 'struct btf_func_model' will be nr_args=2
+@@ -1949,17 +1975,19 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
+ 	u8 **branches = NULL;
+ 	u8 *prog;
++	bool save_ret;
+ 
+ 	/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
+ 	if (nr_args > 6)
+ 		return -ENOTSUPP;
+ 
+-	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
+-	    (flags & BPF_TRAMP_F_SKIP_FRAME))
++	if (!is_valid_bpf_tramp_flags(flags))
+ 		return -EINVAL;
+ 
+-	if (flags & BPF_TRAMP_F_CALL_ORIG)
+-		stack_size += 8; /* room for return value of orig_call */
++	/* room for return value of orig_call or fentry prog */
++	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
++	if (save_ret)
++		stack_size += 8;
+ 
+ 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ 		/* skip patched call instruction and point orig_call to actual
+@@ -1986,7 +2014,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	}
+ 
+ 	if (fentry->nr_progs)
+-		if (invoke_bpf(m, &prog, fentry, stack_size))
++		if (invoke_bpf(m, &prog, fentry, stack_size,
++			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
+ 			return -EINVAL;
+ 
+ 	if (fmod_ret->nr_progs) {
+@@ -2033,7 +2062,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 	}
+ 
+ 	if (fexit->nr_progs)
+-		if (invoke_bpf(m, &prog, fexit, stack_size)) {
++		if (invoke_bpf(m, &prog, fexit, stack_size, false)) {
+ 			ret = -EINVAL;
+ 			goto cleanup;
+ 		}
+@@ -2053,9 +2082,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
+ 			ret = -EINVAL;
+ 			goto cleanup;
+ 		}
+-		/* restore original return value back into RAX */
+-		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+ 	}
++	/* restore return value of orig_call or fentry prog back into RAX */
++	if (save_ret)
++		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+ 
+ 	EMIT1(0x5B); /* pop rbx */
+ 	EMIT1(0xC9); /* leave */
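
The is_valid_bpf_tramp_flags() helper added above leans on a common bitmask idiom: (flags & F) && (flags & ~F) is true exactly when F is set alongside any other flag. A minimal userspace sketch of that check, with made-up flag values rather than the real BPF_TRAMP_F_* encoding:

#include <stdbool.h>
#include <stdio.h>

#define F_RESTORE_REGS   (1u << 0)
#define F_CALL_ORIG      (1u << 1)
#define F_SKIP_FRAME     (1u << 2)
#define F_RET_FENTRY_RET (1u << 3)

static bool flags_valid(unsigned int flags)
{
	/* the two frame-handling modes are mutually exclusive */
	if ((flags & F_RESTORE_REGS) && (flags & F_SKIP_FRAME))
		return false;

	/* F_RET_FENTRY_RET must be used alone: "flags & ~F" is
	 * non-zero iff any bit outside F is also set */
	if ((flags & F_RET_FENTRY_RET) && (flags & ~F_RET_FENTRY_RET))
		return false;

	return true;
}

int main(void)
{
	printf("%d\n", flags_valid(F_RET_FENTRY_RET));               /* 1 */
	printf("%d\n", flags_valid(F_RET_FENTRY_RET | F_CALL_ORIG)); /* 0 */
	printf("%d\n", flags_valid(F_RESTORE_REGS | F_SKIP_FRAME));  /* 0 */
	return 0;
}
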
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 3a1038b6eeb30..9360c65169ff4 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2662,15 +2662,6 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	 * are likely to increase the throughput.
+ 	 */
+ 	bfqq->new_bfqq = new_bfqq;
+-	/*
+-	 * The above assignment schedules the following redirections:
+-	 * each time some I/O for bfqq arrives, the process that
+-	 * generated that I/O is disassociated from bfqq and
+-	 * associated with new_bfqq. Here we increases new_bfqq->ref
+-	 * in advance, adding the number of processes that are
+-	 * expected to be associated with new_bfqq as they happen to
+-	 * issue I/O.
+-	 */
+ 	new_bfqq->ref += process_refs;
+ 	return new_bfqq;
+ }
+@@ -2733,10 +2724,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_queue *in_service_bfqq, *new_bfqq;
+ 
+-	/* if a merge has already been setup, then proceed with that first */
+-	if (bfqq->new_bfqq)
+-		return bfqq->new_bfqq;
+-
+ 	/*
+ 	 * Check delayed stable merge for rotational or non-queueing
+ 	 * devs. For this branch to be executed, bfqq must not be
+@@ -2838,6 +2825,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (bfq_too_late_for_merging(bfqq))
+ 		return NULL;
+ 
++	if (bfqq->new_bfqq)
++		return bfqq->new_bfqq;
++
+ 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ 		return NULL;
+ 
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index a3ef6cce644cc..7dd80acf92c78 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -3007,6 +3007,18 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
+ 		ndr_desc->target_node = NUMA_NO_NODE;
+ 	}
+ 
++	/* Fallback to address based numa information if node lookup failed */
++	if (ndr_desc->numa_node == NUMA_NO_NODE) {
++		ndr_desc->numa_node = memory_add_physaddr_to_nid(spa->address);
++		dev_info(acpi_desc->dev, "changing numa node from %d to %d for nfit region [%pa-%pa]",
++			NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
++	}
++	if (ndr_desc->target_node == NUMA_NO_NODE) {
++		ndr_desc->target_node = phys_to_target_node(spa->address);
++		dev_info(acpi_desc->dev, "changing target node from %d to %d for nfit region [%pa-%pa]",
++			NUMA_NO_NODE, ndr_desc->target_node, &res.start, &res.end);
++	}
++
+ 	/*
+ 	 * Persistence domain bits are hierarchical, if
+ 	 * ACPI_NFIT_CAPABILITY_CACHE_FLUSH is set then
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 8c77e14987d4b..56f54e6eb9874 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -1721,6 +1721,25 @@ static int fw_devlink_create_devlink(struct device *con,
+ 	struct device *sup_dev;
+ 	int ret = 0;
+ 
++	/*
++	 * In some cases, a device P might also be a supplier to its child node
++	 * C. However, this would defer the probe of C until the probe of P
++	 * completes successfully. This is perfectly fine in the device driver
++	 * model. device_add() doesn't guarantee probe completion of the device
++	 * by the time it returns.
++	 *
++	 * However, there are a few drivers that assume C will finish probing
++	 * as soon as it's added and before P finishes probing. So, we provide
++	 * a flag to let fw_devlink know not to delay the probe of C until the
++	 * probe of P completes successfully.
++	 *
++	 * When such a flag is set, we can't create device links where P is the
++	 * supplier of C as that would delay the probe of C.
++	 */
++	if (sup_handle->flags & FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD &&
++	    fwnode_is_ancestor_of(sup_handle, con->fwnode))
++		return -EINVAL;
++
+ 	sup_dev = get_dev_from_fwnode(sup_handle);
+ 	if (sup_dev) {
+ 		/*
+@@ -1771,14 +1790,21 @@ static int fw_devlink_create_devlink(struct device *con,
+ 	 * be broken by applying logic. Check for these types of cycles and
+ 	 * break them so that devices in the cycle probe properly.
+ 	 *
+-	 * If the supplier's parent is dependent on the consumer, then
+-	 * the consumer-supplier dependency is a false dependency. So,
+-	 * treat it as an invalid link.
++	 * If the supplier's parent is dependent on the consumer, then the
++	 * consumer and supplier have a cyclic dependency. Since fw_devlink
++	 * can't tell which of the inferred dependencies are incorrect, don't
++	 * enforce probe ordering between any of the devices in this cyclic
++	 * dependency. Do this by relaxing all the fw_devlink device links in
++	 * this cycle and by treating the fwnode link between the consumer and
++	 * the supplier as an invalid dependency.
+ 	 */
+ 	sup_dev = fwnode_get_next_parent_dev(sup_handle);
+ 	if (sup_dev && device_is_dependent(con, sup_dev)) {
+-		dev_dbg(con, "Not linking to %pfwP - False link\n",
+-			sup_handle);
++		dev_info(con, "Fixing up cyclic dependency with %pfwP (%s)\n",
++			 sup_handle, dev_name(sup_dev));
++		device_links_write_lock();
++		fw_devlink_relax_cycle(con, sup_dev);
++		device_links_write_unlock();
+ 		ret = -EINVAL;
+ 	} else {
+ 		/*
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 93708b1938e80..99ab58b877f8c 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -97,13 +97,18 @@ struct nbd_config {
+ 
+ 	atomic_t recv_threads;
+ 	wait_queue_head_t recv_wq;
+-	loff_t blksize;
++	unsigned int blksize_bits;
+ 	loff_t bytesize;
+ #if IS_ENABLED(CONFIG_DEBUG_FS)
+ 	struct dentry *dbg_dir;
+ #endif
+ };
+ 
++static inline unsigned int nbd_blksize(struct nbd_config *config)
++{
++	return 1u << config->blksize_bits;
++}
++
+ struct nbd_device {
+ 	struct blk_mq_tag_set tag_set;
+ 
+@@ -147,7 +152,7 @@ static struct dentry *nbd_dbg_dir;
+ 
+ #define NBD_MAGIC 0x68797548
+ 
+-#define NBD_DEF_BLKSIZE 1024
++#define NBD_DEF_BLKSIZE_BITS 10
+ 
+ static unsigned int nbds_max = 16;
+ static int max_part = 16;
+@@ -350,12 +355,12 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+ 		loff_t blksize)
+ {
+ 	if (!blksize)
+-		blksize = NBD_DEF_BLKSIZE;
++		blksize = 1u << NBD_DEF_BLKSIZE_BITS;
+ 	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
+ 		return -EINVAL;
+ 
+ 	nbd->config->bytesize = bytesize;
+-	nbd->config->blksize = blksize;
++	nbd->config->blksize_bits = __ffs(blksize);
+ 
+ 	if (!nbd->task_recv)
+ 		return 0;
+@@ -1370,7 +1375,7 @@ static int nbd_start_device(struct nbd_device *nbd)
+ 		args->index = i;
+ 		queue_work(nbd->recv_workq, &args->work);
+ 	}
+-	return nbd_set_size(nbd, config->bytesize, config->blksize);
++	return nbd_set_size(nbd, config->bytesize, nbd_blksize(config));
+ }
+ 
+ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
+@@ -1439,11 +1444,11 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_BLKSIZE:
+ 		return nbd_set_size(nbd, config->bytesize, arg);
+ 	case NBD_SET_SIZE:
+-		return nbd_set_size(nbd, arg, config->blksize);
++		return nbd_set_size(nbd, arg, nbd_blksize(config));
+ 	case NBD_SET_SIZE_BLOCKS:
+-		if (check_mul_overflow((loff_t)arg, config->blksize, &bytesize))
++		if (check_shl_overflow(arg, config->blksize_bits, &bytesize))
+ 			return -EINVAL;
+-		return nbd_set_size(nbd, bytesize, config->blksize);
++		return nbd_set_size(nbd, bytesize, nbd_blksize(config));
+ 	case NBD_SET_TIMEOUT:
+ 		nbd_set_cmd_timeout(nbd, arg);
+ 		return 0;
+@@ -1509,7 +1514,7 @@ static struct nbd_config *nbd_alloc_config(void)
+ 	atomic_set(&config->recv_threads, 0);
+ 	init_waitqueue_head(&config->recv_wq);
+ 	init_waitqueue_head(&config->conn_wait);
+-	config->blksize = NBD_DEF_BLKSIZE;
++	config->blksize_bits = NBD_DEF_BLKSIZE_BITS;
+ 	atomic_set(&config->live_connections, 0);
+ 	try_module_get(THIS_MODULE);
+ 	return config;
+@@ -1637,7 +1642,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
+ 	debugfs_create_file("tasks", 0444, dir, nbd, &nbd_dbg_tasks_fops);
+ 	debugfs_create_u64("size_bytes", 0444, dir, &config->bytesize);
+ 	debugfs_create_u32("timeout", 0444, dir, &nbd->tag_set.timeout);
+-	debugfs_create_u64("blocksize", 0444, dir, &config->blksize);
++	debugfs_create_u32("blocksize_bits", 0444, dir, &config->blksize_bits);
+ 	debugfs_create_file("flags", 0444, dir, nbd, &nbd_dbg_flags_fops);
+ 
+ 	return 0;
+@@ -1841,7 +1846,7 @@ nbd_device_policy[NBD_DEVICE_ATTR_MAX + 1] = {
+ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
+ {
+ 	struct nbd_config *config = nbd->config;
+-	u64 bsize = config->blksize;
++	u64 bsize = nbd_blksize(config);
+ 	u64 bytes = config->bytesize;
+ 
+ 	if (info->attrs[NBD_ATTR_SIZE_BYTES])
+@@ -1850,7 +1855,7 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
+ 	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
+ 		bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
+ 
+-	if (bytes != config->bytesize || bsize != config->blksize)
++	if (bytes != config->bytesize || bsize != nbd_blksize(config))
+ 		return nbd_set_size(nbd, bytes, bsize);
+ 	return 0;
+ }
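
The nbd conversion above stores the block size as a power-of-two exponent (blksize_bits), so byte counts come from shifts and overflow can be checked cheaply instead of via multiplication. A userspace sketch of the same arithmetic, using ffs() in place of the kernel's __ffs() (the sizes are illustrative):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <strings.h>	/* ffs() */

static bool is_power_of_2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	uint64_t blksize = 1024;	/* matches NBD_DEF_BLKSIZE_BITS == 10 */

	assert(blksize >= 512 && is_power_of_2(blksize));
	unsigned int blksize_bits = ffs((int)blksize) - 1;	/* 10 */
	assert((1u << blksize_bits) == blksize);

	/* NBD_SET_SIZE_BLOCKS: bytesize = nr_blocks << blksize_bits,
	 * rejecting shifts that overflow (the check_shl_overflow() idea) */
	uint64_t nr_blocks = 1ULL << 20;
	if (nr_blocks > (UINT64_MAX >> blksize_bits))
		return 1;		/* would overflow: -EINVAL */
	uint64_t bytesize = nr_blocks << blksize_bits;

	assert(bytesize == 1ULL << 30);	/* 1 GiB */
	return 0;
}
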
+diff --git a/drivers/cpufreq/cpufreq_governor_attr_set.c b/drivers/cpufreq/cpufreq_governor_attr_set.c
+index 66b05a326910e..a6f365b9cc1ad 100644
+--- a/drivers/cpufreq/cpufreq_governor_attr_set.c
++++ b/drivers/cpufreq/cpufreq_governor_attr_set.c
+@@ -74,8 +74,8 @@ unsigned int gov_attr_set_put(struct gov_attr_set *attr_set, struct list_head *l
+ 	if (count)
+ 		return count;
+ 
+-	kobject_put(&attr_set->kobj);
+ 	mutex_destroy(&attr_set->update_lock);
++	kobject_put(&attr_set->kobj);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(gov_attr_set_put);
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index bb88198c874e0..aa4e1a5006919 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -778,7 +778,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 				    in_place ? DMA_BIDIRECTIONAL
+ 					     : DMA_TO_DEVICE);
+ 		if (ret)
+-			goto e_ctx;
++			goto e_aad;
+ 
+ 		if (in_place) {
+ 			dst = src;
+@@ -863,7 +863,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 	op.u.aes.size = 0;
+ 	ret = cmd_q->ccp->vdata->perform->aes(&op);
+ 	if (ret)
+-		goto e_dst;
++		goto e_final_wa;
+ 
+ 	if (aes->action == CCP_AES_ACTION_ENCRYPT) {
+ 		/* Put the ciphered tag after the ciphertext. */
+@@ -873,17 +873,19 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
+ 		ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
+ 					   DMA_BIDIRECTIONAL);
+ 		if (ret)
+-			goto e_tag;
++			goto e_final_wa;
+ 		ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
+-		if (ret)
+-			goto e_tag;
++		if (ret) {
++			ccp_dm_free(&tag);
++			goto e_final_wa;
++		}
+ 
+ 		ret = crypto_memneq(tag.address, final_wa.address,
+ 				    authsize) ? -EBADMSG : 0;
+ 		ccp_dm_free(&tag);
+ 	}
+ 
+-e_tag:
++e_final_wa:
+ 	ccp_dm_free(&final_wa);
+ 
+ e_dst:
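
The ccp fix above is about branching to the cleanup label that matches the last successfully allocated resource; renaming e_tag to e_final_wa makes the unwind order explicit. A toy sketch of the goto-unwind idiom (the resource names here are invented):

#include <stdlib.h>

/* Each label frees everything allocated after the resource it is
 * named for, so an error must branch to the label matching the last
 * step that succeeded -- never further down the unwind chain. */
static int demo(void)
{
	int ret = -1;
	char *aad = NULL, *final_wa = NULL;
	char *ctx = malloc(16);

	if (!ctx)
		return -1;

	aad = malloc(16);
	if (!aad)
		goto e_ctx;	/* only ctx exists so far */

	final_wa = malloc(16);
	if (!final_wa)
		goto e_aad;	/* aad and ctx exist, final_wa doesn't */

	ret = 0;		/* success path falls into the unwind */
	free(final_wa);
e_aad:
	free(aad);
e_ctx:
	free(ctx);
	return ret;
}

int main(void)
{
	return demo() ? 1 : 0;
}
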
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index f5cfc0698799a..8ebf369b3ba0f 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -468,15 +468,8 @@ static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
+ 	mutex_lock(&chip->i2c_lock);
+ 	ret = regmap_read(chip->regmap, inreg, &reg_val);
+ 	mutex_unlock(&chip->i2c_lock);
+-	if (ret < 0) {
+-		/*
+-		 * NOTE:
+-		 * diagnostic already emitted; that's all we should
+-		 * do unless gpio_*_value_cansleep() calls become different
+-		 * from their nonsleeping siblings (and report faults).
+-		 */
+-		return 0;
+-	}
++	if (ret < 0)
++		return ret;
+ 
+ 	return !!(reg_val & bit);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 7b42636fc7dc6..d3247a5cceb4c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3602,9 +3602,9 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 
+ fence_driver_init:
+ 	/* Fence driver */
+-	r = amdgpu_fence_driver_init(adev);
++	r = amdgpu_fence_driver_sw_init(adev);
+ 	if (r) {
+-		dev_err(adev->dev, "amdgpu_fence_driver_init failed\n");
++		dev_err(adev->dev, "amdgpu_fence_driver_sw_init failed\n");
+ 		amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_FENCE_INIT_FAIL, 0, 0);
+ 		goto failed;
+ 	}
+@@ -3631,6 +3631,8 @@ fence_driver_init:
+ 		goto release_ras_con;
+ 	}
+ 
++	amdgpu_fence_driver_hw_init(adev);
++
+ 	dev_info(adev->dev,
+ 		"SE %d, SH per SE %d, CU per SH %d, active_cu_number %d\n",
+ 			adev->gfx.config.max_shader_engines,
+@@ -3798,7 +3800,7 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+ 		else
+ 			drm_atomic_helper_shutdown(adev_to_drm(adev));
+ 	}
+-	amdgpu_fence_driver_fini_hw(adev);
++	amdgpu_fence_driver_hw_fini(adev);
+ 
+ 	if (adev->pm_sysfs_en)
+ 		amdgpu_pm_sysfs_fini(adev);
+@@ -3820,7 +3822,7 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ {
+ 	amdgpu_device_ip_fini(adev);
+-	amdgpu_fence_driver_fini_sw(adev);
++	amdgpu_fence_driver_sw_fini(adev);
+ 	release_firmware(adev->firmware.gpu_info_fw);
+ 	adev->firmware.gpu_info_fw = NULL;
+ 	adev->accel_working = false;
+@@ -3895,7 +3897,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+ 	/* evict vram memory */
+ 	amdgpu_bo_evict_vram(adev);
+ 
+-	amdgpu_fence_driver_suspend(adev);
++	amdgpu_fence_driver_hw_fini(adev);
+ 
+ 	amdgpu_device_ip_suspend_phase2(adev);
+ 	/* evict remaining vram memory
+@@ -3940,8 +3942,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
+ 		dev_err(adev->dev, "amdgpu_device_ip_resume failed (%d).\n", r);
+ 		return r;
+ 	}
+-	amdgpu_fence_driver_resume(adev);
+-
++	amdgpu_fence_driver_hw_init(adev);
+ 
+ 	r = amdgpu_device_ip_late_init(adev);
+ 	if (r)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index 7a73167319116..dc50c05f23fc2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -837,6 +837,28 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
+ 	return 0;
+ }
+ 
++/* Mirrors the is_displayable check in radeonsi's gfx6_compute_surface */
++static int check_tiling_flags_gfx6(struct amdgpu_framebuffer *afb)
++{
++	u64 micro_tile_mode;
++
++	/* Zero swizzle mode means linear */
++	if (AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0)
++		return 0;
++
++	micro_tile_mode = AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE);
++	switch (micro_tile_mode) {
++	case 0: /* DISPLAY */
++	case 3: /* RENDER */
++		return 0;
++	default:
++		drm_dbg_kms(afb->base.dev,
++			    "Micro tile mode %llu not supported for scanout\n",
++			    micro_tile_mode);
++		return -EINVAL;
++	}
++}
++
+ static void get_block_dimensions(unsigned int block_log2, unsigned int cpp,
+ 				 unsigned int *width, unsigned int *height)
+ {
+@@ -1103,6 +1125,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
+ 				    const struct drm_mode_fb_cmd2 *mode_cmd,
+ 				    struct drm_gem_object *obj)
+ {
++	struct amdgpu_device *adev = drm_to_adev(dev);
+ 	int ret, i;
+ 
+ 	/*
+@@ -1122,6 +1145,14 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
++	if (!dev->mode_config.allow_fb_modifiers) {
++		drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
++			      "GFX9+ requires FB check based on format modifier\n");
++		ret = check_tiling_flags_gfx6(rfb);
++		if (ret)
++			return ret;
++	}
++
+ 	if (dev->mode_config.allow_fb_modifiers &&
+ 	    !(rfb->base.flags & DRM_MODE_FB_MODIFIERS)) {
+ 		ret = convert_tiling_flags_to_modifier(rfb);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index 72d9b92b17547..49884069226a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -417,9 +417,6 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
+ 	}
+ 	amdgpu_fence_write(ring, atomic_read(&ring->fence_drv.last_seq));
+ 
+-	if (irq_src)
+-		amdgpu_irq_get(adev, irq_src, irq_type);
+-
+ 	ring->fence_drv.irq_src = irq_src;
+ 	ring->fence_drv.irq_type = irq_type;
+ 	ring->fence_drv.initialized = true;
+@@ -501,7 +498,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+ }
+ 
+ /**
+- * amdgpu_fence_driver_init - init the fence driver
++ * amdgpu_fence_driver_sw_init - init the fence driver
+  * for all possible rings.
+  *
+  * @adev: amdgpu device pointer
+@@ -512,20 +509,20 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+  * amdgpu_fence_driver_start_ring().
+  * Returns 0 for success.
+  */
+-int amdgpu_fence_driver_init(struct amdgpu_device *adev)
++int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev)
+ {
+ 	return 0;
+ }
+ 
+ /**
+- * amdgpu_fence_driver_fini - tear down the fence driver
++ * amdgpu_fence_driver_hw_fini - tear down the fence driver
+  * for all possible rings.
+  *
+  * @adev: amdgpu device pointer
+  *
+  * Tear down the fence driver for all possible rings (all asics).
+  */
+-void amdgpu_fence_driver_fini_hw(struct amdgpu_device *adev)
++void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev)
+ {
+ 	int i, r;
+ 
+@@ -534,8 +531,10 @@ void amdgpu_fence_driver_fini_hw(struct amdgpu_device *adev)
+ 
+ 		if (!ring || !ring->fence_drv.initialized)
+ 			continue;
++
+ 		if (!ring->no_scheduler)
+-			drm_sched_fini(&ring->sched);
++			drm_sched_stop(&ring->sched, NULL);
++
+ 		/* You can't wait for HW to signal if it's gone */
+ 		if (!drm_dev_is_unplugged(&adev->ddev))
+ 			r = amdgpu_fence_wait_empty(ring);
+@@ -553,7 +552,7 @@ void amdgpu_fence_driver_fini_hw(struct amdgpu_device *adev)
+ 	}
+ }
+ 
+-void amdgpu_fence_driver_fini_sw(struct amdgpu_device *adev)
++void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev)
+ {
+ 	unsigned int i, j;
+ 
+@@ -563,6 +562,9 @@ void amdgpu_fence_driver_fini_sw(struct amdgpu_device *adev)
+ 		if (!ring || !ring->fence_drv.initialized)
+ 			continue;
+ 
++		if (!ring->no_scheduler)
++			drm_sched_fini(&ring->sched);
++
+ 		for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
+ 			dma_fence_put(ring->fence_drv.fences[j]);
+ 		kfree(ring->fence_drv.fences);
+@@ -572,49 +574,18 @@ void amdgpu_fence_driver_fini_sw(struct amdgpu_device *adev)
+ }
+ 
+ /**
+- * amdgpu_fence_driver_suspend - suspend the fence driver
+- * for all possible rings.
+- *
+- * @adev: amdgpu device pointer
+- *
+- * Suspend the fence driver for all possible rings (all asics).
+- */
+-void amdgpu_fence_driver_suspend(struct amdgpu_device *adev)
+-{
+-	int i, r;
+-
+-	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+-		struct amdgpu_ring *ring = adev->rings[i];
+-		if (!ring || !ring->fence_drv.initialized)
+-			continue;
+-
+-		/* wait for gpu to finish processing current batch */
+-		r = amdgpu_fence_wait_empty(ring);
+-		if (r) {
+-			/* delay GPU reset to resume */
+-			amdgpu_fence_driver_force_completion(ring);
+-		}
+-
+-		/* disable the interrupt */
+-		if (ring->fence_drv.irq_src)
+-			amdgpu_irq_put(adev, ring->fence_drv.irq_src,
+-				       ring->fence_drv.irq_type);
+-	}
+-}
+-
+-/**
+- * amdgpu_fence_driver_resume - resume the fence driver
++ * amdgpu_fence_driver_hw_init - enable the fence driver
+  * for all possible rings.
+  *
+  * @adev: amdgpu device pointer
+  *
+- * Resume the fence driver for all possible rings (all asics).
++ * Enable the fence driver for all possible rings (all asics).
+  * Not all asics have all rings, so each asic will only
+  * start the fence driver on the rings it has using
+  * amdgpu_fence_driver_start_ring().
+  * Returns 0 for success.
+  */
+-void amdgpu_fence_driver_resume(struct amdgpu_device *adev)
++void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev)
+ {
+ 	int i;
+ 
+@@ -623,6 +594,11 @@ void amdgpu_fence_driver_resume(struct amdgpu_device *adev)
+ 		if (!ring || !ring->fence_drv.initialized)
+ 			continue;
+ 
++		if (!ring->no_scheduler) {
++			drm_sched_resubmit_jobs(&ring->sched);
++			drm_sched_start(&ring->sched, true);
++		}
++
+ 		/* enable the interrupt */
+ 		if (ring->fence_drv.irq_src)
+ 			amdgpu_irq_get(adev, ring->fence_drv.irq_src,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+index e7d3d0dbdd967..9c11ced4312c8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+@@ -106,9 +106,6 @@ struct amdgpu_fence_driver {
+ 	struct dma_fence		**fences;
+ };
+ 
+-int amdgpu_fence_driver_init(struct amdgpu_device *adev);
+-void amdgpu_fence_driver_fini_hw(struct amdgpu_device *adev);
+-void amdgpu_fence_driver_fini_sw(struct amdgpu_device *adev);
+ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
+ 
+ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+@@ -117,8 +114,10 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
+ 				   struct amdgpu_irq_src *irq_src,
+ 				   unsigned irq_type);
+-void amdgpu_fence_driver_suspend(struct amdgpu_device *adev);
+-void amdgpu_fence_driver_resume(struct amdgpu_device *adev);
++void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
++void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
++int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
++void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
+ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence,
+ 		      unsigned flags);
+ int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 6a23c6826e122..88ed0ef88f7e2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3598,7 +3598,7 @@ static int gfx_v9_0_mqd_init(struct amdgpu_ring *ring)
+ 
+ 	/* set static priority for a queue/ring */
+ 	gfx_v9_0_mqd_set_priority(ring, mqd);
+-	mqd->cp_hqd_quantum = RREG32(mmCP_HQD_QUANTUM);
++	mqd->cp_hqd_quantum = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);
+ 
+ 	/* map_queues packet doesn't need activate the queue,
+ 	 * so only kiq need set this field.
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 7486e53067867..27e0ca615edc1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -883,6 +883,12 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
+ 			msleep(1000);
+ 	}
+ 
++	/* TODO: check whether a doorbell request can be submitted to raise
++	 * a doorbell fence to exit gfxoff.
++	 */
++	if (adev->in_s0ix)
++		amdgpu_gfx_off_ctrl(adev, false);
++
+ 	sdma_v5_2_soft_reset(adev);
+ 	/* unhalt the MEs */
+ 	sdma_v5_2_enable(adev, true);
+@@ -891,6 +897,8 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
+ 
+ 	/* start the gfx rings and rlc compute queues */
+ 	r = sdma_v5_2_gfx_resume(adev);
++	if (adev->in_s0ix)
++		amdgpu_gfx_off_ctrl(adev, true);
+ 	if (r)
+ 		return r;
+ 	r = sdma_v5_2_rlc_resume(adev);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3bb567ea2cef9..a03d7682cd8f2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1117,6 +1117,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	init_data.asic_id.pci_revision_id = adev->pdev->revision;
+ 	init_data.asic_id.hw_internal_rev = adev->external_rev_id;
++	init_data.asic_id.chip_id = adev->pdev->device;
+ 
+ 	init_data.asic_id.vram_width = adev->gmc.vram_width;
+ 	/* TODO: initialize init_data.asic_id.vram_type here!!!! */
+@@ -1724,6 +1725,7 @@ static int dm_late_init(void *handle)
+ 		linear_lut[i] = 0xFFFF * i / 15;
+ 
+ 	params.set = 0;
++	params.backlight_ramping_override = false;
+ 	params.backlight_ramping_start = 0xCCCC;
+ 	params.backlight_ramping_reduction = 0xCCCCCCCC;
+ 	params.backlight_lut_array_size = 16;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 83ef72a3ebf41..3c8da3665a274 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1813,14 +1813,13 @@ bool perform_link_training_with_retries(
+ 		if (panel_mode == DP_PANEL_MODE_EDP) {
+ 			struct cp_psp *cp_psp = &stream->ctx->cp_psp;
+ 
+-			if (cp_psp && cp_psp->funcs.enable_assr) {
+-				if (!cp_psp->funcs.enable_assr(cp_psp->handle, link)) {
+-					/* since eDP implies ASSR on, change panel
+-					 * mode to disable ASSR
+-					 */
+-					panel_mode = DP_PANEL_MODE_DEFAULT;
+-				}
+-			}
++			if (cp_psp && cp_psp->funcs.enable_assr)
++				/* ASSR is bound to fail with unsigned PSP
++				 * verstage used during devlopment phase.
++				 * verstage used during development phase.
++				 * perform eDP link training with right settings
++				 */
++				cp_psp->funcs.enable_assr(cp_psp->handle, link);
+ 		}
+ #endif
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
+index 06e9a8ed4e03c..db9c212a240e5 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rps.c
++++ b/drivers/gpu/drm/i915/gt/intel_rps.c
+@@ -861,8 +861,6 @@ void intel_rps_park(struct intel_rps *rps)
+ {
+ 	int adj;
+ 
+-	GEM_BUG_ON(atomic_read(&rps->num_waiters));
+-
+ 	if (!intel_rps_clear_active(rps))
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 734c37c5e3474..527b59b863125 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -576,7 +576,7 @@ retry:
+ 
+ 			/* No one is going to touch shadow bb from now on. */
+ 			i915_gem_object_flush_map(bb->obj);
+-			i915_gem_object_unlock(bb->obj);
++			i915_gem_ww_ctx_fini(&ww);
+ 		}
+ 	}
+ 	return 0;
+@@ -630,7 +630,7 @@ retry:
+ 		return ret;
+ 	}
+ 
+-	i915_gem_object_unlock(wa_ctx->indirect_ctx.obj);
++	i915_gem_ww_ctx_fini(&ww);
+ 
+ 	/* FIXME: we are not tracking our pinned VMA leaving it
+ 	 * up to the core to fix up the stray pin_count upon
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index 37aef13085739..7db972fa70243 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -914,8 +914,6 @@ static void __i915_request_ctor(void *arg)
+ 	i915_sw_fence_init(&rq->submit, submit_notify);
+ 	i915_sw_fence_init(&rq->semaphore, semaphore_notify);
+ 
+-	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock, 0, 0);
+-
+ 	rq->capture_list = NULL;
+ 
+ 	init_llist_head(&rq->execute_cb);
+@@ -978,17 +976,12 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
+ 	rq->ring = ce->ring;
+ 	rq->execution_mask = ce->engine->mask;
+ 
+-	kref_init(&rq->fence.refcount);
+-	rq->fence.flags = 0;
+-	rq->fence.error = 0;
+-	INIT_LIST_HEAD(&rq->fence.cb_list);
+-
+ 	ret = intel_timeline_get_seqno(tl, rq, &seqno);
+ 	if (ret)
+ 		goto err_free;
+ 
+-	rq->fence.context = tl->fence_context;
+-	rq->fence.seqno = seqno;
++	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock,
++		       tl->fence_context, seqno);
+ 
+ 	RCU_INIT_POINTER(rq->timeline, tl);
+ 	rq->hwsp_seqno = tl->hwsp_seqno;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 8d68796aa905f..1b4a192b19e5e 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -239,13 +239,13 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (!privdata->cl_data)
+ 		return -ENOMEM;
+ 
+-	rc = devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
++	mp2_select_ops(privdata);
++
++	rc = amd_sfh_hid_client_init(privdata);
+ 	if (rc)
+ 		return rc;
+ 
+-	mp2_select_ops(privdata);
+-
+-	return amd_sfh_hid_client_init(privdata);
++	return devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
+ }
+ 
+ static const struct pci_device_id amd_mp2_pci_tbl[] = {
+diff --git a/drivers/hid/hid-betopff.c b/drivers/hid/hid-betopff.c
+index 0790fbd3fc9a2..467d789f9bc2d 100644
+--- a/drivers/hid/hid-betopff.c
++++ b/drivers/hid/hid-betopff.c
+@@ -56,15 +56,22 @@ static int betopff_init(struct hid_device *hid)
+ {
+ 	struct betopff_device *betopff;
+ 	struct hid_report *report;
+-	struct hid_input *hidinput =
+-			list_first_entry(&hid->inputs, struct hid_input, list);
++	struct hid_input *hidinput;
+ 	struct list_head *report_list =
+ 			&hid->report_enum[HID_OUTPUT_REPORT].report_list;
+-	struct input_dev *dev = hidinput->input;
++	struct input_dev *dev;
+ 	int field_count = 0;
+ 	int error;
+ 	int i, j;
+ 
++	if (list_empty(&hid->inputs)) {
++		hid_err(hid, "no inputs found\n");
++		return -ENODEV;
++	}
++
++	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
++	dev = hidinput->input;
++
+ 	if (list_empty(report_list)) {
+ 		hid_err(hid, "no output reports found\n");
+ 		return -ENODEV;
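
The betopff fix guards list_first_entry() with list_empty(): on an empty list, list_first_entry() hands back the head itself reinterpreted as an entry, which is garbage to dereference. A self-contained sketch with a minimal circular list modeled on include/linux/list.h:

#include <stddef.h>
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

struct hid_input_demo {
	int value;
	struct list_head list;
};

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

int main(void)
{
	struct list_head inputs = { &inputs, &inputs };	/* empty list */

	if (list_empty(&inputs)) {
		fprintf(stderr, "no inputs found\n");
		return 1;	/* the driver returns -ENODEV here */
	}

	/* only safe once the list is known to be non-empty */
	struct hid_input_demo *first =
		list_entry(inputs.next, struct hid_input_demo, list);
	printf("%d\n", first->value);
	return 0;
}
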
+diff --git a/drivers/hid/hid-u2fzero.c b/drivers/hid/hid-u2fzero.c
+index 95e0807878c7e..d70cd3d7f583b 100644
+--- a/drivers/hid/hid-u2fzero.c
++++ b/drivers/hid/hid-u2fzero.c
+@@ -198,7 +198,9 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	}
+ 
+ 	ret = u2fzero_recv(dev, &req, &resp);
+-	if (ret < 0)
++
++	/* ignore errors or packets without data */
++	if (ret < offsetof(struct u2f_hid_msg, init.data))
+ 		return 0;
+ 
+ 	/* only take the minimum amount of data it is safe to take */
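
The u2fzero change compares the receive length against offsetof(struct u2f_hid_msg, init.data): anything shorter than the fixed header cannot carry entropy bytes, so short replies are filtered the same way as errors. A sketch of the check, with an illustrative (not the real) struct layout:

#include <stddef.h>
#include <stdio.h>

struct u2f_hid_msg_demo {
	unsigned int cid;
	struct {
		unsigned char cmd;
		unsigned char bcnth, bcntl;
		unsigned char data[57];
	} init;
};

int main(void)
{
	size_t hdr = offsetof(struct u2f_hid_msg_demo, init.data);
	int ret = 5;	/* pretend the transfer returned 5 bytes */

	/* errors (ret < 0) and short packets are both rejected, since
	 * only bytes at init.data and beyond are usable */
	if (ret < (int)hdr)
		printf("ignored: need at least %zu header bytes\n", hdr);
	return 0;
}
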
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index b234958f883a4..c56cb03c1551f 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -505,7 +505,7 @@ static void hid_ctrl(struct urb *urb)
+ 
+ 	if (unplug) {
+ 		usbhid->ctrltail = usbhid->ctrlhead;
+-	} else {
++	} else if (usbhid->ctrlhead != usbhid->ctrltail) {
+ 		usbhid->ctrltail = (usbhid->ctrltail + 1) & (HID_CONTROL_FIFO_SIZE - 1);
+ 
+ 		if (usbhid->ctrlhead != usbhid->ctrltail &&
+@@ -1223,9 +1223,20 @@ static void usbhid_stop(struct hid_device *hid)
+ 	mutex_lock(&usbhid->mutex);
+ 
+ 	clear_bit(HID_STARTED, &usbhid->iofl);
++
+ 	spin_lock_irq(&usbhid->lock);	/* Sync with error and led handlers */
+ 	set_bit(HID_DISCONNECTED, &usbhid->iofl);
++	while (usbhid->ctrltail != usbhid->ctrlhead) {
++		if (usbhid->ctrl[usbhid->ctrltail].dir == USB_DIR_OUT) {
++			kfree(usbhid->ctrl[usbhid->ctrltail].raw_report);
++			usbhid->ctrl[usbhid->ctrltail].raw_report = NULL;
++		}
++
++		usbhid->ctrltail = (usbhid->ctrltail + 1) &
++			(HID_CONTROL_FIFO_SIZE - 1);
++	}
+ 	spin_unlock_irq(&usbhid->lock);
++
+ 	usb_kill_urb(usbhid->urbin);
+ 	usb_kill_urb(usbhid->urbout);
+ 	usb_kill_urb(usbhid->urbctrl);
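
The drain loop added to usbhid_stop() walks a power-of-two ring buffer: "(tail + 1) & (SIZE - 1)" wraps the index, and tail == head means empty. A standalone sketch of the walk (queue size and contents are illustrative):

#include <stdio.h>

#define FIFO_SIZE 8	/* must be a power of two for the mask trick */

int main(void)
{
	int head = 2, tail = 6;	/* four queued entries, wrapping past 7 */

	while (tail != head) {
		printf("freeing pending control request %d\n", tail);
		tail = (tail + 1) & (FIFO_SIZE - 1);
	}
	return 0;
}
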
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index 116681fde33d2..89fe7b9fe26be 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -315,8 +315,8 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ {
+ 	struct mlxreg_fan *fan = cdev->devdata;
+ 	unsigned long cur_state;
++	int i, config = 0;
+ 	u32 regval;
+-	int i;
+ 	int err;
+ 
+ 	/*
+@@ -329,6 +329,12 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 	 * overwritten.
+ 	 */
+ 	if (state >= MLXREG_FAN_SPEED_MIN && state <= MLXREG_FAN_SPEED_MAX) {
++		/*
++		 * This is a configuration change, which is only supported through
++		 * sysfs. For a configuration change a non-zero value is returned
++		 * to avoid a thermal statistics update.
++		 */
++		config = 1;
+ 		state -= MLXREG_FAN_MAX_STATE;
+ 		for (i = 0; i < state; i++)
+ 			fan->cooling_levels[i] = state;
+@@ -343,7 +349,7 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 
+ 		cur_state = MLXREG_FAN_PWM_DUTY2STATE(regval);
+ 		if (state < cur_state)
+-			return 0;
++			return config;
+ 
+ 		state = cur_state;
+ 	}
+@@ -359,7 +365,7 @@ static int mlxreg_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ 		dev_err(fan->dev, "Failed to write PWM duty\n");
+ 		return err;
+ 	}
+-	return 0;
++	return config;
+ }
+ 
+ static const struct thermal_cooling_device_ops mlxreg_fan_cooling_ops = {
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 0d68a78be980d..ae664613289c4 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -340,18 +340,11 @@ static ssize_t occ_show_temp_10(struct device *dev,
+ 		if (val == OCC_TEMP_SENSOR_FAULT)
+ 			return -EREMOTEIO;
+ 
+-		/*
+-		 * VRM doesn't return temperature, only alarm bit. This
+-		 * attribute maps to tempX_alarm instead of tempX_input for
+-		 * VRM
+-		 */
+-		if (temp->fru_type != OCC_FRU_TYPE_VRM) {
+-			/* sensor not ready */
+-			if (val == 0)
+-				return -EAGAIN;
++		/* sensor not ready */
++		if (val == 0)
++			return -EAGAIN;
+ 
+-			val *= 1000;
+-		}
++		val *= 1000;
+ 		break;
+ 	case 2:
+ 		val = temp->fru_type;
+@@ -886,7 +879,7 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 					     0, i);
+ 		attr++;
+ 
+-		if (sensors->temp.version > 1 &&
++		if (sensors->temp.version == 2 &&
+ 		    temp->fru_type == OCC_FRU_TYPE_VRM) {
+ 			snprintf(attr->name, sizeof(attr->name),
+ 				 "temp%d_alarm", s);
+diff --git a/drivers/hwmon/pmbus/mp2975.c b/drivers/hwmon/pmbus/mp2975.c
+index eb94bd5f4e2a8..51986adfbf47c 100644
+--- a/drivers/hwmon/pmbus/mp2975.c
++++ b/drivers/hwmon/pmbus/mp2975.c
+@@ -54,7 +54,7 @@
+ 
+ #define MP2975_RAIL2_FUNC	(PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT | \
+ 				 PMBUS_HAVE_IOUT | PMBUS_HAVE_STATUS_IOUT | \
+-				 PMBUS_PHASE_VIRTUAL)
++				 PMBUS_HAVE_POUT | PMBUS_PHASE_VIRTUAL)
+ 
+ struct mp2975_data {
+ 	struct pmbus_driver_info info;
+diff --git a/drivers/hwmon/tmp421.c b/drivers/hwmon/tmp421.c
+index ede66ea6a730d..b963a369c5ab3 100644
+--- a/drivers/hwmon/tmp421.c
++++ b/drivers/hwmon/tmp421.c
+@@ -100,71 +100,81 @@ struct tmp421_data {
+ 	s16 temp[4];
+ };
+ 
+-static int temp_from_s16(s16 reg)
++static int temp_from_raw(u16 reg, bool extended)
+ {
+ 	/* Mask out status bits */
+ 	int temp = reg & ~0xf;
+ 
+-	return (temp * 1000 + 128) / 256;
+-}
+-
+-static int temp_from_u16(u16 reg)
+-{
+-	/* Mask out status bits */
+-	int temp = reg & ~0xf;
+-
+-	/* Add offset for extended temperature range. */
+-	temp -= 64 * 256;
++	if (extended)
++		temp = temp - 64 * 256;
++	else
++		temp = (s16)temp;
+ 
+-	return (temp * 1000 + 128) / 256;
++	return DIV_ROUND_CLOSEST(temp * 1000, 256);
+ }
+ 
+-static struct tmp421_data *tmp421_update_device(struct device *dev)
++static int tmp421_update_device(struct tmp421_data *data)
+ {
+-	struct tmp421_data *data = dev_get_drvdata(dev);
+ 	struct i2c_client *client = data->client;
++	int ret = 0;
+ 	int i;
+ 
+ 	mutex_lock(&data->update_lock);
+ 
+ 	if (time_after(jiffies, data->last_updated + (HZ / 2)) ||
+ 	    !data->valid) {
+-		data->config = i2c_smbus_read_byte_data(client,
+-			TMP421_CONFIG_REG_1);
++		ret = i2c_smbus_read_byte_data(client, TMP421_CONFIG_REG_1);
++		if (ret < 0)
++			goto exit;
++		data->config = ret;
+ 
+ 		for (i = 0; i < data->channels; i++) {
+-			data->temp[i] = i2c_smbus_read_byte_data(client,
+-				TMP421_TEMP_MSB[i]) << 8;
+-			data->temp[i] |= i2c_smbus_read_byte_data(client,
+-				TMP421_TEMP_LSB[i]);
++			ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_MSB[i]);
++			if (ret < 0)
++				goto exit;
++			data->temp[i] = ret << 8;
++
++			ret = i2c_smbus_read_byte_data(client, TMP421_TEMP_LSB[i]);
++			if (ret < 0)
++				goto exit;
++			data->temp[i] |= ret;
+ 		}
+ 		data->last_updated = jiffies;
+ 		data->valid = 1;
+ 	}
+ 
++exit:
+ 	mutex_unlock(&data->update_lock);
+ 
+-	return data;
++	if (ret < 0) {
++		data->valid = 0;
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static int tmp421_read(struct device *dev, enum hwmon_sensor_types type,
+ 		       u32 attr, int channel, long *val)
+ {
+-	struct tmp421_data *tmp421 = tmp421_update_device(dev);
++	struct tmp421_data *tmp421 = dev_get_drvdata(dev);
++	int ret = 0;
++
++	ret = tmp421_update_device(tmp421);
++	if (ret)
++		return ret;
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_input:
+-		if (tmp421->config & TMP421_CONFIG_RANGE)
+-			*val = temp_from_u16(tmp421->temp[channel]);
+-		else
+-			*val = temp_from_s16(tmp421->temp[channel]);
++		*val = temp_from_raw(tmp421->temp[channel],
++				     tmp421->config & TMP421_CONFIG_RANGE);
+ 		return 0;
+ 	case hwmon_temp_fault:
+ 		/*
+-		 * The OPEN bit signals a fault. This is bit 0 of the temperature
+-		 * register (low byte).
++		 * Any of the OPEN or /PVLD bits indicates a hardware malfunction
++		 * and the conversion result may be incorrect
+ 		 */
+-		*val = tmp421->temp[channel] & 0x01;
++		*val = !!(tmp421->temp[channel] & 0x03);
+ 		return 0;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -177,9 +187,6 @@ static umode_t tmp421_is_visible(const void *data, enum hwmon_sensor_types type,
+ {
+ 	switch (attr) {
+ 	case hwmon_temp_fault:
+-		if (channel == 0)
+-			return 0;
+-		return 0444;
+ 	case hwmon_temp_input:
+ 		return 0444;
+ 	default:
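
temp_from_raw() above folds the two old helpers into one: mask off the status nibble, then either subtract the 64-degree extended-range offset or sign-extend the standard two's-complement format, and round to millidegrees. A userspace sketch with an open-coded DIV_ROUND_CLOSEST (register values below are examples; two's-complement narrowing is assumed):

#include <assert.h>
#include <stdint.h>

static int div_round_closest(int n, int d)
{
	return n >= 0 ? (n + d / 2) / d : (n - d / 2) / d;
}

static int temp_from_raw(uint16_t reg, int extended)
{
	int temp = reg & ~0xf;		/* mask out status bits */

	if (extended)
		temp -= 64 * 256;	/* remove extended-range offset */
	else
		temp = (int16_t)temp;	/* sign-extend standard format */

	return div_round_closest(temp * 1000, 256);	/* 1/256 degC -> mdegC */
}

int main(void)
{
	assert(temp_from_raw(0x1900, 0) == 25000);	/* +25 degC */
	assert(temp_from_raw(0xe700, 0) == -25000);	/* -25 degC, standard */
	assert(temp_from_raw(0x2700, 1) == -25000);	/* -25 degC, extended */
	return 0;
}
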
+diff --git a/drivers/hwmon/w83791d.c b/drivers/hwmon/w83791d.c
+index 37b25a1474c46..3c1be2c11fdf0 100644
+--- a/drivers/hwmon/w83791d.c
++++ b/drivers/hwmon/w83791d.c
+@@ -273,9 +273,6 @@ struct w83791d_data {
+ 	char valid;			/* !=0 if following fields are valid */
+ 	unsigned long last_updated;	/* In jiffies */
+ 
+-	/* array of 2 pointers to subclients */
+-	struct i2c_client *lm75[2];
+-
+ 	/* volts */
+ 	u8 in[NUMBER_OF_VIN];		/* Register value */
+ 	u8 in_max[NUMBER_OF_VIN];	/* Register value */
+@@ -1257,7 +1254,6 @@ static const struct attribute_group w83791d_group_fanpwm45 = {
+ static int w83791d_detect_subclients(struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+-	struct w83791d_data *data = i2c_get_clientdata(client);
+ 	int address = client->addr;
+ 	int i, id;
+ 	u8 val;
+@@ -1280,22 +1276,19 @@ static int w83791d_detect_subclients(struct i2c_client *client)
+ 	}
+ 
+ 	val = w83791d_read(client, W83791D_REG_I2C_SUBADDR);
+-	if (!(val & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + (val & 0x7));
+-	if (!(val & 0x80)) {
+-		if (!IS_ERR(data->lm75[0]) &&
+-				((val & 0x7) == ((val >> 4) & 0x7))) {
+-			dev_err(&client->dev,
+-				"duplicate addresses 0x%x, "
+-				"use force_subclient\n",
+-				data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + ((val >> 4) & 0x7));
++
++	if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
++		dev_err(&client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(val & 0x08))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (val & 0x7));
++
++	if (!(val & 0x80))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/w83792d.c b/drivers/hwmon/w83792d.c
+index abd5c3a722b91..1f175f3813506 100644
+--- a/drivers/hwmon/w83792d.c
++++ b/drivers/hwmon/w83792d.c
+@@ -264,9 +264,6 @@ struct w83792d_data {
+ 	char valid;		/* !=0 if following fields are valid */
+ 	unsigned long last_updated;	/* In jiffies */
+ 
+-	/* array of 2 pointers to subclients */
+-	struct i2c_client *lm75[2];
+-
+ 	u8 in[9];		/* Register value */
+ 	u8 in_max[9];		/* Register value */
+ 	u8 in_min[9];		/* Register value */
+@@ -927,7 +924,6 @@ w83792d_detect_subclients(struct i2c_client *new_client)
+ 	int address = new_client->addr;
+ 	u8 val;
+ 	struct i2c_adapter *adapter = new_client->adapter;
+-	struct w83792d_data *data = i2c_get_clientdata(new_client);
+ 
+ 	id = i2c_adapter_id(adapter);
+ 	if (force_subclients[0] == id && force_subclients[1] == address) {
+@@ -946,21 +942,19 @@ w83792d_detect_subclients(struct i2c_client *new_client)
+ 	}
+ 
+ 	val = w83792d_read_value(new_client, W83792D_REG_I2C_SUBADDR);
+-	if (!(val & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
+-							  0x48 + (val & 0x7));
+-	if (!(val & 0x80)) {
+-		if (!IS_ERR(data->lm75[0]) &&
+-			((val & 0x7) == ((val >> 4) & 0x7))) {
+-			dev_err(&new_client->dev,
+-				"duplicate addresses 0x%x, use force_subclient\n",
+-				data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&new_client->dev, adapter,
+-							  0x48 + ((val >> 4) & 0x7));
++
++	if (!(val & 0x88) && (val & 0x7) == ((val >> 4) & 0x7)) {
++		dev_err(&new_client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (val & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(val & 0x08))
++		devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + (val & 0x7));
++
++	if (!(val & 0x80))
++		devm_i2c_new_dummy_device(&new_client->dev, adapter, 0x48 + ((val >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/w83793.c b/drivers/hwmon/w83793.c
+index e7d0484eabe4c..1d2854de1cfc9 100644
+--- a/drivers/hwmon/w83793.c
++++ b/drivers/hwmon/w83793.c
+@@ -202,7 +202,6 @@ static inline s8 TEMP_TO_REG(long val, s8 min, s8 max)
+ }
+ 
+ struct w83793_data {
+-	struct i2c_client *lm75[2];
+ 	struct device *hwmon_dev;
+ 	struct mutex update_lock;
+ 	char valid;			/* !=0 if following fields are valid */
+@@ -1566,7 +1565,6 @@ w83793_detect_subclients(struct i2c_client *client)
+ 	int address = client->addr;
+ 	u8 tmp;
+ 	struct i2c_adapter *adapter = client->adapter;
+-	struct w83793_data *data = i2c_get_clientdata(client);
+ 
+ 	id = i2c_adapter_id(adapter);
+ 	if (force_subclients[0] == id && force_subclients[1] == address) {
+@@ -1586,21 +1584,19 @@ w83793_detect_subclients(struct i2c_client *client)
+ 	}
+ 
+ 	tmp = w83793_read_value(client, W83793_REG_I2C_SUBADDR);
+-	if (!(tmp & 0x08))
+-		data->lm75[0] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + (tmp & 0x7));
+-	if (!(tmp & 0x80)) {
+-		if (!IS_ERR(data->lm75[0])
+-		    && ((tmp & 0x7) == ((tmp >> 4) & 0x7))) {
+-			dev_err(&client->dev,
+-				"duplicate addresses 0x%x, "
+-				"use force_subclients\n", data->lm75[0]->addr);
+-			return -ENODEV;
+-		}
+-		data->lm75[1] = devm_i2c_new_dummy_device(&client->dev, adapter,
+-							  0x48 + ((tmp >> 4) & 0x7));
++
++	if (!(tmp & 0x88) && (tmp & 0x7) == ((tmp >> 4) & 0x7)) {
++		dev_err(&client->dev,
++			"duplicate addresses 0x%x, use force_subclient\n", 0x48 + (tmp & 0x7));
++		return -ENODEV;
+ 	}
+ 
++	if (!(tmp & 0x08))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + (tmp & 0x7));
++
++	if (!(tmp & 0x80))
++		devm_i2c_new_dummy_device(&client->dev, adapter, 0x48 + ((tmp >> 4) & 0x7));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 5d3b8b8d163d6..dbbacc8e9273f 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1746,15 +1746,16 @@ static void cma_cancel_route(struct rdma_id_private *id_priv)
+ 	}
+ }
+ 
+-static void cma_cancel_listens(struct rdma_id_private *id_priv)
++static void _cma_cancel_listens(struct rdma_id_private *id_priv)
+ {
+ 	struct rdma_id_private *dev_id_priv;
+ 
++	lockdep_assert_held(&lock);
++
+ 	/*
+ 	 * Remove from listen_any_list to prevent added devices from spawning
+ 	 * additional listen requests.
+ 	 */
+-	mutex_lock(&lock);
+ 	list_del(&id_priv->list);
+ 
+ 	while (!list_empty(&id_priv->listen_list)) {
+@@ -1768,6 +1769,12 @@ static void cma_cancel_listens(struct rdma_id_private *id_priv)
+ 		rdma_destroy_id(&dev_id_priv->id);
+ 		mutex_lock(&lock);
+ 	}
++}
++
++static void cma_cancel_listens(struct rdma_id_private *id_priv)
++{
++	mutex_lock(&lock);
++	_cma_cancel_listens(id_priv);
+ 	mutex_unlock(&lock);
+ }
+ 
+@@ -1776,6 +1783,14 @@ static void cma_cancel_operation(struct rdma_id_private *id_priv,
+ {
+ 	switch (state) {
+ 	case RDMA_CM_ADDR_QUERY:
++		/*
++		 * We can avoid doing the rdma_addr_cancel() based on state,
++		 * only RDMA_CM_ADDR_QUERY has a work that could still execute.
++		 * Notice that the addr_handler work could still be exiting
++		 * outside this state, however due to the interaction with the
++		 * handler_mutex the work is guaranteed not to touch id_priv
++		 * during exit.
++		 */
+ 		rdma_addr_cancel(&id_priv->id.route.addr.dev_addr);
+ 		break;
+ 	case RDMA_CM_ROUTE_QUERY:
+@@ -1810,6 +1825,8 @@ static void cma_release_port(struct rdma_id_private *id_priv)
+ static void destroy_mc(struct rdma_id_private *id_priv,
+ 		       struct cma_multicast *mc)
+ {
++	bool send_only = mc->join_state == BIT(SENDONLY_FULLMEMBER_JOIN);
++
+ 	if (rdma_cap_ib_mcast(id_priv->id.device, id_priv->id.port_num))
+ 		ib_sa_free_multicast(mc->sa_mc);
+ 
+@@ -1826,7 +1843,10 @@ static void destroy_mc(struct rdma_id_private *id_priv,
+ 
+ 			cma_set_mgid(id_priv, (struct sockaddr *)&mc->addr,
+ 				     &mgid);
+-			cma_igmp_send(ndev, &mgid, false);
++
++			if (!send_only)
++				cma_igmp_send(ndev, &mgid, false);
++
+ 			dev_put(ndev);
+ 		}
+ 
+@@ -2574,7 +2594,7 @@ static int cma_listen_on_all(struct rdma_id_private *id_priv)
+ 	return 0;
+ 
+ err_listen:
+-	list_del(&id_priv->list);
++	_cma_cancel_listens(id_priv);
+ 	mutex_unlock(&lock);
+ 	if (to_destroy)
+ 		rdma_destroy_id(&to_destroy->id);
+@@ -3410,6 +3430,21 @@ int rdma_resolve_addr(struct rdma_cm_id *id, struct sockaddr *src_addr,
+ 		if (dst_addr->sa_family == AF_IB) {
+ 			ret = cma_resolve_ib_addr(id_priv);
+ 		} else {
++			/*
++			 * The FSM can return back to RDMA_CM_ADDR_BOUND after
++			 * rdma_resolve_ip() is called, eg through the error
++			 * path in addr_handler(). If this happens the existing
++			 * request must be canceled before issuing a new one.
++			 * Since canceling a request is a bit slow and this
++			 * oddball path is rare, keep track of whether a request
++			 * has ever been issued. The flag ends up being a permanent
++			 * state, since this is the only cancel point and it sits
++			 * immediately before rdma_resolve_ip().
++			 */
++			if (id_priv->used_resolve_ip)
++				rdma_addr_cancel(&id->route.addr.dev_addr);
++			else
++				id_priv->used_resolve_ip = 1;
+ 			ret = rdma_resolve_ip(cma_src_addr(id_priv), dst_addr,
+ 					      &id->route.addr.dev_addr,
+ 					      timeout_ms, addr_handler,
+@@ -3768,9 +3803,13 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
+ 	int ret;
+ 
+ 	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND, RDMA_CM_LISTEN)) {
++		struct sockaddr_in any_in = {
++			.sin_family = AF_INET,
++			.sin_addr.s_addr = htonl(INADDR_ANY),
++		};
++
+ 		/* For a well behaved ULP state will be RDMA_CM_IDLE */
+-		id->route.addr.src_addr.ss_family = AF_INET;
+-		ret = rdma_bind_addr(id, cma_src_addr(id_priv));
++		ret = rdma_bind_addr(id, (struct sockaddr *)&any_in);
+ 		if (ret)
+ 			return ret;
+ 		if (WARN_ON(!cma_comp_exch(id_priv, RDMA_CM_ADDR_BOUND,
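
The cma.c split above follows the usual kernel pattern of pairing a plain-named function that takes the lock with an underscore-prefixed helper that asserts it is already held (lockdep_assert_held()), so the err_listen path can unwind without re-acquiring. A pthreads sketch of the shape:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* caller must hold "lock"; the kernel version opens with
 * lockdep_assert_held(&lock) to enforce that at runtime */
static void _cancel_listens(void)
{
	printf("cancelling listens with lock held\n");
}

/* convenience wrapper for callers that do not hold the lock */
static void cancel_listens(void)
{
	pthread_mutex_lock(&lock);
	_cancel_listens();
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	cancel_listens();

	pthread_mutex_lock(&lock);	/* an error path already inside */
	_cancel_listens();		/* ... calls the locked variant */
	pthread_mutex_unlock(&lock);
	return 0;
}
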
+diff --git a/drivers/infiniband/core/cma_priv.h b/drivers/infiniband/core/cma_priv.h
+index 5c463da998453..f92f101ea9818 100644
+--- a/drivers/infiniband/core/cma_priv.h
++++ b/drivers/infiniband/core/cma_priv.h
+@@ -91,6 +91,7 @@ struct rdma_id_private {
+ 	u8			afonly;
+ 	u8			timeout;
+ 	u8			min_rnr_timer;
++	u8 used_resolve_ip;
+ 	enum ib_gid_type	gid_type;
+ 
+ 	/*
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 993f9838b6c80..e1fdeadda437d 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -873,14 +873,14 @@ void hfi1_ipoib_tx_timeout(struct net_device *dev, unsigned int q)
+ 	struct hfi1_ipoib_txq *txq = &priv->txqs[q];
+ 	u64 completed = atomic64_read(&txq->complete_txreqs);
+ 
+-	dd_dev_info(priv->dd, "timeout txq %llx q %u stopped %u stops %d no_desc %d ring_full %d\n",
+-		    (unsigned long long)txq, q,
++	dd_dev_info(priv->dd, "timeout txq %p q %u stopped %u stops %d no_desc %d ring_full %d\n",
++		    txq, q,
+ 		    __netif_subqueue_stopped(dev, txq->q_idx),
+ 		    atomic_read(&txq->stops),
+ 		    atomic_read(&txq->no_desc),
+ 		    atomic_read(&txq->ring_full));
+-	dd_dev_info(priv->dd, "sde %llx engine %u\n",
+-		    (unsigned long long)txq->sde,
++	dd_dev_info(priv->dd, "sde %p engine %u\n",
++		    txq->sde,
+ 		    txq->sde ? txq->sde->this_idx : 0);
+ 	dd_dev_info(priv->dd, "flow %x\n", txq->flow.as_int);
+ 	dd_dev_info(priv->dd, "sent %llu completed %llu used %llu\n",
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 1e9c3c5bee684..d763f097599ff 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -326,19 +326,30 @@ static void set_cq_param(struct hns_roce_cq *hr_cq, u32 cq_entries, int vector,
+ 	INIT_LIST_HEAD(&hr_cq->rq_list);
+ }
+ 
+-static void set_cqe_size(struct hns_roce_cq *hr_cq, struct ib_udata *udata,
+-			 struct hns_roce_ib_create_cq *ucmd)
++static int set_cqe_size(struct hns_roce_cq *hr_cq, struct ib_udata *udata,
++			struct hns_roce_ib_create_cq *ucmd)
+ {
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(hr_cq->ib_cq.device);
+ 
+-	if (udata) {
+-		if (udata->inlen >= offsetofend(typeof(*ucmd), cqe_size))
+-			hr_cq->cqe_size = ucmd->cqe_size;
+-		else
+-			hr_cq->cqe_size = HNS_ROCE_V2_CQE_SIZE;
+-	} else {
++	if (!udata) {
+ 		hr_cq->cqe_size = hr_dev->caps.cqe_sz;
++		return 0;
++	}
++
++	if (udata->inlen >= offsetofend(typeof(*ucmd), cqe_size)) {
++		if (ucmd->cqe_size != HNS_ROCE_V2_CQE_SIZE &&
++		    ucmd->cqe_size != HNS_ROCE_V3_CQE_SIZE) {
++			ibdev_err(&hr_dev->ib_dev,
++				  "invalid cqe size %u.\n", ucmd->cqe_size);
++			return -EINVAL;
++		}
++
++		hr_cq->cqe_size = ucmd->cqe_size;
++	} else {
++		hr_cq->cqe_size = HNS_ROCE_V2_CQE_SIZE;
+ 	}
++
++	return 0;
+ }
+ 
+ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+@@ -366,7 +377,9 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
+ 
+ 	set_cq_param(hr_cq, attr->cqe, attr->comp_vector, &ucmd);
+ 
+-	set_cqe_size(hr_cq, udata, &ucmd);
++	ret = set_cqe_size(hr_cq, udata, &ucmd);
++	if (ret)
++		return ret;
+ 
+ 	ret = alloc_cq_buf(hr_dev, hr_cq, udata, ucmd.buf_addr);
+ 	if (ret) {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index c320891c8763c..0ccb0c453f6a2 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -3306,7 +3306,7 @@ static void __hns_roce_v2_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn,
+ 			dest = get_cqe_v2(hr_cq, (prod_index + nfreed) &
+ 					  hr_cq->ib_cq.cqe);
+ 			owner_bit = hr_reg_read(dest, CQE_OWNER);
+-			memcpy(dest, cqe, sizeof(*cqe));
++			memcpy(dest, cqe, hr_cq->cqe_size);
+ 			hr_reg_write(dest, CQE_OWNER, owner_bit);
+ 		}
+ 	}
+@@ -4411,7 +4411,12 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	hr_qp->path_mtu = ib_mtu;
+ 
+ 	mtu = ib_mtu_enum_to_int(ib_mtu);
+-	if (WARN_ON(mtu < 0))
++	if (WARN_ON(mtu <= 0))
++		return -EINVAL;
++#define MAX_LP_MSG_LEN 65536
++	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
++	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
++	if (WARN_ON(lp_pktn_ini >= 0xF))
+ 		return -EINVAL;
+ 
+ 	if (attr_mask & IB_QP_PATH_MTU) {
+@@ -4419,10 +4424,6 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 		hr_reg_clear(qpc_mask, QPC_MTU);
+ 	}
+ 
+-#define MAX_LP_MSG_LEN 65536
+-	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
+-	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
+-
+ 	hr_reg_write(context, QPC_LP_PKTN_INI, lp_pktn_ini);
+ 	hr_reg_clear(qpc_mask, QPC_LP_PKTN_INI);
+ 
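
The reordered hunk above makes the MTU check precede the division that consumes it, and bounds-checks the result against the 4-bit LP_PKTN_INI field. A sketch of the computation with an open-coded ilog2():

#include <assert.h>

#define MAX_LP_MSG_LEN 65536

static int ilog2_u32(unsigned int v)
{
	int l = -1;

	while (v) {
		v >>= 1;
		l++;
	}
	return l;
}

/* returns the LP_PKTN_INI exponent, or -1 on invalid input */
static int lp_pktn_ini(int mtu)
{
	if (mtu <= 0)
		return -1;	/* the WARN_ON(mtu <= 0) case */

	int val = ilog2_u32(MAX_LP_MSG_LEN / mtu);

	return val >= 0xf ? -1 : val;	/* must fit the 4-bit field */
}

int main(void)
{
	assert(lp_pktn_ini(4096) == 4);	/* 65536 / 4096 = 16, log2 = 4 */
	assert(lp_pktn_ini(1024) == 6);
	assert(lp_pktn_ini(0) == -1);
	return 0;
}
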
+diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
+index 6b62299abfbbb..6dea0a49d1718 100644
+--- a/drivers/infiniband/hw/irdma/cm.c
++++ b/drivers/infiniband/hw/irdma/cm.c
+@@ -3496,7 +3496,7 @@ static void irdma_cm_disconn_true(struct irdma_qp *iwqp)
+ 	     original_hw_tcp_state == IRDMA_TCP_STATE_TIME_WAIT ||
+ 	     last_ae == IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE ||
+ 	     last_ae == IRDMA_AE_BAD_CLOSE ||
+-	     last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->reset)) {
++	     last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->rf->reset)) {
+ 		issue_close = 1;
+ 		iwqp->cm_id = NULL;
+ 		qp->term_flags = 0;
+@@ -4250,7 +4250,7 @@ void irdma_cm_teardown_connections(struct irdma_device *iwdev, u32 *ipaddr,
+ 				       teardown_entry);
+ 		attr.qp_state = IB_QPS_ERR;
+ 		irdma_modify_qp(&cm_node->iwqp->ibqp, &attr, IB_QP_STATE, NULL);
+-		if (iwdev->reset)
++		if (iwdev->rf->reset)
+ 			irdma_cm_disconn(cm_node->iwqp);
+ 		irdma_rem_ref_cm_node(cm_node);
+ 	}
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index 00de5ee9a2609..7de525a5ccf8c 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -176,6 +176,14 @@ static void irdma_set_flush_fields(struct irdma_sc_qp *qp,
+ 	case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR:
+ 		qp->flush_code = FLUSH_GENERAL_ERR;
+ 		break;
++	case IRDMA_AE_LLP_TOO_MANY_RETRIES:
++		qp->flush_code = FLUSH_RETRY_EXC_ERR;
++		break;
++	case IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS:
++	case IRDMA_AE_AMP_MWBIND_BIND_DISABLED:
++	case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS:
++		qp->flush_code = FLUSH_MW_BIND_ERR;
++		break;
+ 	default:
+ 		qp->flush_code = FLUSH_FATAL_ERR;
+ 		break;
+@@ -1489,7 +1497,7 @@ void irdma_reinitialize_ieq(struct irdma_sc_vsi *vsi)
+ 
+ 	irdma_puda_dele_rsrc(vsi, IRDMA_PUDA_RSRC_TYPE_IEQ, false);
+ 	if (irdma_initialize_ieq(iwdev)) {
+-		iwdev->reset = true;
++		iwdev->rf->reset = true;
+ 		rf->gen_ops.request_reset(rf);
+ 	}
+ }
+@@ -1632,13 +1640,13 @@ void irdma_rt_deinit_hw(struct irdma_device *iwdev)
+ 	case IEQ_CREATED:
+ 		if (!iwdev->roce_mode)
+ 			irdma_puda_dele_rsrc(&iwdev->vsi, IRDMA_PUDA_RSRC_TYPE_IEQ,
+-					     iwdev->reset);
++					     iwdev->rf->reset);
+ 		fallthrough;
+ 	case ILQ_CREATED:
+ 		if (!iwdev->roce_mode)
+ 			irdma_puda_dele_rsrc(&iwdev->vsi,
+ 					     IRDMA_PUDA_RSRC_TYPE_ILQ,
+-					     iwdev->reset);
++					     iwdev->rf->reset);
+ 		break;
+ 	default:
+ 		ibdev_warn(&iwdev->ibdev, "bad init_state = %d\n", iwdev->init_state);
+diff --git a/drivers/infiniband/hw/irdma/i40iw_if.c b/drivers/infiniband/hw/irdma/i40iw_if.c
+index bddf88194d095..d219f64b2c3d5 100644
+--- a/drivers/infiniband/hw/irdma/i40iw_if.c
++++ b/drivers/infiniband/hw/irdma/i40iw_if.c
+@@ -55,7 +55,7 @@ static void i40iw_close(struct i40e_info *cdev_info, struct i40e_client *client,
+ 
+ 	iwdev = to_iwdev(ibdev);
+ 	if (reset)
+-		iwdev->reset = true;
++		iwdev->rf->reset = true;
+ 
+ 	iwdev->iw_status = 0;
+ 	irdma_port_ibevent(iwdev);
+diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
+index 743d9e143a999..b678fe712447e 100644
+--- a/drivers/infiniband/hw/irdma/main.h
++++ b/drivers/infiniband/hw/irdma/main.h
+@@ -346,7 +346,6 @@ struct irdma_device {
+ 	bool roce_mode:1;
+ 	bool roce_dcqcn_en:1;
+ 	bool dcb:1;
+-	bool reset:1;
+ 	bool iw_ooo:1;
+ 	enum init_completion_state init_state;
+ 
+diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h
+index ff705f3232333..3dcbb1fbf2c66 100644
+--- a/drivers/infiniband/hw/irdma/user.h
++++ b/drivers/infiniband/hw/irdma/user.h
+@@ -102,6 +102,8 @@ enum irdma_flush_opcode {
+ 	FLUSH_REM_OP_ERR,
+ 	FLUSH_LOC_LEN_ERR,
+ 	FLUSH_FATAL_ERR,
++	FLUSH_RETRY_EXC_ERR,
++	FLUSH_MW_BIND_ERR,
+ };
+ 
+ enum irdma_cmpl_status {
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index 5bbe44e54f9a1..832e9604766b4 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -2510,7 +2510,7 @@ void irdma_modify_qp_to_err(struct irdma_sc_qp *sc_qp)
+ 	struct irdma_qp *qp = sc_qp->qp_uk.back_qp;
+ 	struct ib_qp_attr attr;
+ 
+-	if (qp->iwdev->reset)
++	if (qp->iwdev->rf->reset)
+ 		return;
+ 	attr.qp_state = IB_QPS_ERR;
+ 
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 717147ed0519d..fa393c5ea3973 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -535,8 +535,7 @@ static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ 	irdma_qp_rem_ref(&iwqp->ibqp);
+ 	wait_for_completion(&iwqp->free_qp);
+ 	irdma_free_lsmm_rsrc(iwqp);
+-	if (!iwdev->reset)
+-		irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
++	irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp);
+ 
+ 	if (!iwqp->user_mode) {
+ 		if (iwqp->iwscq) {
+@@ -2041,7 +2040,7 @@ static int irdma_create_cq(struct ib_cq *ibcq,
+ 		/* Kmode allocations */
+ 		int rsize;
+ 
+-		if (entries > rf->max_cqe) {
++		if (entries < 1 || entries > rf->max_cqe) {
+ 			err_code = -EINVAL;
+ 			goto cq_free_rsrc;
+ 		}
+@@ -3359,6 +3358,10 @@ static enum ib_wc_status irdma_flush_err_to_ib_wc_status(enum irdma_flush_opcode
+ 		return IB_WC_LOC_LEN_ERR;
+ 	case FLUSH_GENERAL_ERR:
+ 		return IB_WC_WR_FLUSH_ERR;
++	case FLUSH_RETRY_EXC_ERR:
++		return IB_WC_RETRY_EXC_ERR;
++	case FLUSH_MW_BIND_ERR:
++		return IB_WC_MW_BIND_ERR;
+ 	case FLUSH_FATAL_ERR:
+ 	default:
+ 		return IB_WC_FATAL_ERR;
+diff --git a/drivers/interconnect/qcom/sdm660.c b/drivers/interconnect/qcom/sdm660.c
+index 632dbdd219150..99eef7e2d326a 100644
+--- a/drivers/interconnect/qcom/sdm660.c
++++ b/drivers/interconnect/qcom/sdm660.c
+@@ -44,9 +44,9 @@
+ #define NOC_PERM_MODE_BYPASS		(1 << NOC_QOS_MODE_BYPASS)
+ 
+ #define NOC_QOS_PRIORITYn_ADDR(n)	(0x8 + (n * 0x1000))
+-#define NOC_QOS_PRIORITY_MASK		0xf
++#define NOC_QOS_PRIORITY_P1_MASK	0xc
++#define NOC_QOS_PRIORITY_P0_MASK	0x3
+ #define NOC_QOS_PRIORITY_P1_SHIFT	0x2
+-#define NOC_QOS_PRIORITY_P0_SHIFT	0x3
+ 
+ #define NOC_QOS_MODEn_ADDR(n)		(0xc + (n * 0x1000))
+ #define NOC_QOS_MODEn_MASK		0x3
+@@ -307,7 +307,7 @@ DEFINE_QNODE(slv_bimc_cfg, SDM660_SLAVE_BIMC_CFG, 4, -1, 56, true, -1, 0, -1, 0)
+ DEFINE_QNODE(slv_prng, SDM660_SLAVE_PRNG, 4, -1, 44, true, -1, 0, -1, 0);
+ DEFINE_QNODE(slv_spdm, SDM660_SLAVE_SPDM, 4, -1, 60, true, -1, 0, -1, 0);
+ DEFINE_QNODE(slv_qdss_cfg, SDM660_SLAVE_QDSS_CFG, 4, -1, 63, true, -1, 0, -1, 0);
+-DEFINE_QNODE(slv_cnoc_mnoc_cfg, SDM660_SLAVE_BLSP_1, 4, -1, 66, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_CFG);
++DEFINE_QNODE(slv_cnoc_mnoc_cfg, SDM660_SLAVE_CNOC_MNOC_CFG, 4, -1, 66, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_CFG);
+ DEFINE_QNODE(slv_snoc_cfg, SDM660_SLAVE_SNOC_CFG, 4, -1, 70, true, -1, 0, -1, 0);
+ DEFINE_QNODE(slv_qm_cfg, SDM660_SLAVE_QM_CFG, 4, -1, 212, true, -1, 0, -1, 0);
+ DEFINE_QNODE(slv_clk_ctl, SDM660_SLAVE_CLK_CTL, 4, -1, 47, true, -1, 0, -1, 0);
+@@ -624,13 +624,12 @@ static int qcom_icc_noc_set_qos_priority(struct regmap *rmap,
+ 	/* Must be updated one at a time, P1 first, P0 last */
+ 	val = qos->areq_prio << NOC_QOS_PRIORITY_P1_SHIFT;
+ 	rc = regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
+-				NOC_QOS_PRIORITY_MASK, val);
++				NOC_QOS_PRIORITY_P1_MASK, val);
+ 	if (rc)
+ 		return rc;
+ 
+-	val = qos->prio_level << NOC_QOS_PRIORITY_P0_SHIFT;
+ 	return regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
+-				  NOC_QOS_PRIORITY_MASK, val);
++				  NOC_QOS_PRIORITY_P0_MASK, qos->prio_level);
+ }
+ 
+ static int qcom_icc_set_noc_qos(struct icc_node *src, u64 max_bw)
+diff --git a/drivers/ipack/devices/ipoctal.c b/drivers/ipack/devices/ipoctal.c
+index 20fa02c81070f..9117874cbfdbd 100644
+--- a/drivers/ipack/devices/ipoctal.c
++++ b/drivers/ipack/devices/ipoctal.c
+@@ -33,6 +33,7 @@ struct ipoctal_channel {
+ 	unsigned int			pointer_read;
+ 	unsigned int			pointer_write;
+ 	struct tty_port			tty_port;
++	bool				tty_registered;
+ 	union scc2698_channel __iomem	*regs;
+ 	union scc2698_block __iomem	*block_regs;
+ 	unsigned int			board_id;
+@@ -81,22 +82,34 @@ static int ipoctal_port_activate(struct tty_port *port, struct tty_struct *tty)
+ 	return 0;
+ }
+ 
+-static int ipoctal_open(struct tty_struct *tty, struct file *file)
++static int ipoctal_install(struct tty_driver *driver, struct tty_struct *tty)
+ {
+ 	struct ipoctal_channel *channel = dev_get_drvdata(tty->dev);
+ 	struct ipoctal *ipoctal = chan_to_ipoctal(channel, tty->index);
+-	int err;
+-
+-	tty->driver_data = channel;
++	int res;
+ 
+ 	if (!ipack_get_carrier(ipoctal->dev))
+ 		return -EBUSY;
+ 
+-	err = tty_port_open(&channel->tty_port, tty, file);
+-	if (err)
+-		ipack_put_carrier(ipoctal->dev);
++	res = tty_standard_install(driver, tty);
++	if (res)
++		goto err_put_carrier;
++
++	tty->driver_data = channel;
++
++	return 0;
++
++err_put_carrier:
++	ipack_put_carrier(ipoctal->dev);
++
++	return res;
++}
++
++static int ipoctal_open(struct tty_struct *tty, struct file *file)
++{
++	struct ipoctal_channel *channel = tty->driver_data;
+ 
+-	return err;
++	return tty_port_open(&channel->tty_port, tty, file);
+ }
+ 
+ static void ipoctal_reset_stats(struct ipoctal_stats *stats)
+@@ -264,7 +277,6 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	int res;
+ 	int i;
+ 	struct tty_driver *tty;
+-	char name[20];
+ 	struct ipoctal_channel *channel;
+ 	struct ipack_region *region;
+ 	void __iomem *addr;
+@@ -355,8 +367,11 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	/* Fill struct tty_driver with ipoctal data */
+ 	tty->owner = THIS_MODULE;
+ 	tty->driver_name = KBUILD_MODNAME;
+-	sprintf(name, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
+-	tty->name = name;
++	tty->name = kasprintf(GFP_KERNEL, KBUILD_MODNAME ".%d.%d.", bus_nr, slot);
++	if (!tty->name) {
++		res = -ENOMEM;
++		goto err_put_driver;
++	}
+ 	tty->major = 0;
+ 
+ 	tty->minor_start = 0;
+@@ -372,8 +387,7 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 	res = tty_register_driver(tty);
+ 	if (res) {
+ 		dev_err(&ipoctal->dev->dev, "Can't register tty driver.\n");
+-		put_tty_driver(tty);
+-		return res;
++		goto err_free_name;
+ 	}
+ 
+ 	/* Save struct tty_driver for use it when uninstalling the device */
+@@ -384,7 +398,9 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 
+ 		channel = &ipoctal->channel[i];
+ 		tty_port_init(&channel->tty_port);
+-		tty_port_alloc_xmit_buf(&channel->tty_port);
++		res = tty_port_alloc_xmit_buf(&channel->tty_port);
++		if (res)
++			continue;
+ 		channel->tty_port.ops = &ipoctal_tty_port_ops;
+ 
+ 		ipoctal_reset_stats(&channel->stats);
+@@ -392,13 +408,15 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 		spin_lock_init(&channel->lock);
+ 		channel->pointer_read = 0;
+ 		channel->pointer_write = 0;
+-		tty_dev = tty_port_register_device(&channel->tty_port, tty, i, NULL);
++		tty_dev = tty_port_register_device_attr(&channel->tty_port, tty,
++							i, NULL, channel, NULL);
+ 		if (IS_ERR(tty_dev)) {
+ 			dev_err(&ipoctal->dev->dev, "Failed to register tty device.\n");
++			tty_port_free_xmit_buf(&channel->tty_port);
+ 			tty_port_destroy(&channel->tty_port);
+ 			continue;
+ 		}
+-		dev_set_drvdata(tty_dev, channel);
++		channel->tty_registered = true;
+ 	}
+ 
+ 	/*
+@@ -410,6 +428,13 @@ static int ipoctal_inst_slot(struct ipoctal *ipoctal, unsigned int bus_nr,
+ 				       ipoctal_irq_handler, ipoctal);
+ 
+ 	return 0;
++
++err_free_name:
++	kfree(tty->name);
++err_put_driver:
++	put_tty_driver(tty);
++
++	return res;
+ }
+ 
+ static inline int ipoctal_copy_write_buffer(struct ipoctal_channel *channel,
+@@ -649,6 +674,7 @@ static void ipoctal_cleanup(struct tty_struct *tty)
+ 
+ static const struct tty_operations ipoctal_fops = {
+ 	.ioctl =		NULL,
++	.install =		ipoctal_install,
+ 	.open =			ipoctal_open,
+ 	.close =		ipoctal_close,
+ 	.write =		ipoctal_write_tty,
+@@ -691,12 +717,17 @@ static void __ipoctal_remove(struct ipoctal *ipoctal)
+ 
+ 	for (i = 0; i < NR_CHANNELS; i++) {
+ 		struct ipoctal_channel *channel = &ipoctal->channel[i];
++
++		if (!channel->tty_registered)
++			continue;
++
+ 		tty_unregister_device(ipoctal->tty_drv, i);
+ 		tty_port_free_xmit_buf(&channel->tty_port);
+ 		tty_port_destroy(&channel->tty_port);
+ 	}
+ 
+ 	tty_unregister_driver(ipoctal->tty_drv);
++	kfree(ipoctal->tty_drv->name);
+ 	put_tty_driver(ipoctal->tty_drv);
+ 	kfree(ipoctal);
+ }
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index d402e456f27df..7d0ab19c38bb9 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -1140,8 +1140,8 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ 			continue;
+ 		length = 0;
+ 		switch (c) {
+-		/* SOF0: baseline JPEG */
+-		case SOF0:
++		/* JPEG_MARKER_SOF0: baseline JPEG */
++		case JPEG_MARKER_SOF0:
+ 			if (get_word_be(&jpeg_buffer, &word))
+ 				break;
+ 			length = (long)word - 2;
+@@ -1172,7 +1172,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ 			notfound = 0;
+ 			break;
+ 
+-		case DQT:
++		case JPEG_MARKER_DQT:
+ 			if (get_word_be(&jpeg_buffer, &word))
+ 				break;
+ 			length = (long)word - 2;
+@@ -1185,7 +1185,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ 			skip(&jpeg_buffer, length);
+ 			break;
+ 
+-		case DHT:
++		case JPEG_MARKER_DHT:
+ 			if (get_word_be(&jpeg_buffer, &word))
+ 				break;
+ 			length = (long)word - 2;
+@@ -1198,15 +1198,15 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
+ 			skip(&jpeg_buffer, length);
+ 			break;
+ 
+-		case SOS:
++		case JPEG_MARKER_SOS:
+ 			sos = jpeg_buffer.curr - 2; /* 0xffda */
+ 			break;
+ 
+ 		/* skip payload-less markers */
+-		case RST ... RST + 7:
+-		case SOI:
+-		case EOI:
+-		case TEM:
++		case JPEG_MARKER_RST ... JPEG_MARKER_RST + 7:
++		case JPEG_MARKER_SOI:
++		case JPEG_MARKER_EOI:
++		case JPEG_MARKER_TEM:
+ 			break;
+ 
+ 		/* skip uninteresting payload markers */
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.h b/drivers/media/platform/s5p-jpeg/jpeg-core.h
+index a77d93c098ce7..8473a019bb5f2 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.h
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.h
+@@ -37,15 +37,15 @@
+ #define EXYNOS3250_IRQ_TIMEOUT		0x10000000
+ 
+ /* a selection of JPEG markers */
+-#define TEM				0x01
+-#define SOF0				0xc0
+-#define DHT				0xc4
+-#define RST				0xd0
+-#define SOI				0xd8
+-#define EOI				0xd9
+-#define	SOS				0xda
+-#define DQT				0xdb
+-#define DHP				0xde
++#define JPEG_MARKER_TEM				0x01
++#define JPEG_MARKER_SOF0				0xc0
++#define JPEG_MARKER_DHT				0xc4
++#define JPEG_MARKER_RST				0xd0
++#define JPEG_MARKER_SOI				0xd8
++#define JPEG_MARKER_EOI				0xd9
++#define	JPEG_MARKER_SOS				0xda
++#define JPEG_MARKER_DQT				0xdb
++#define JPEG_MARKER_DHP				0xde
+ 
+ /* Flags that indicate a format can be used for capture/output */
+ #define SJPEG_FMT_FLAG_ENC_CAPTURE	(1 << 0)
+@@ -187,11 +187,11 @@ struct s5p_jpeg_marker {
+  * @fmt:	driver-specific format of this queue
+  * @w:		image width
+  * @h:		image height
+- * @sos:	SOS marker's position relative to the buffer beginning
+- * @dht:	DHT markers' positions relative to the buffer beginning
+- * @dqt:	DQT markers' positions relative to the buffer beginning
+- * @sof:	SOF0 marker's position relative to the buffer beginning
+- * @sof_len:	SOF0 marker's payload length (without length field itself)
++ * @sos:	JPEG_MARKER_SOS's position relative to the buffer beginning
++ * @dht:	JPEG_MARKER_DHT markers' positions relative to the buffer beginning
++ * @dqt:	JPEG_MARKER_DQT markers' positions relative to the buffer beginning
++ * @sof:	JPEG_MARKER_SOF0's position relative to the buffer beginning
++ * @sof_len:	JPEG_MARKER_SOF0's payload length (without length field itself)
+  * @size:	image buffer size in bytes
+  */
+ struct s5p_jpeg_q_data {
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 3e729a17b35ff..48d52baec1a1c 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -24,6 +24,7 @@ static const u8 COMMAND_VERSION[] = { 'v' };
+ // End transmit and repeat reset command so we exit sump mode
+ static const u8 COMMAND_RESET[] = { 0xff, 0xff, 0, 0, 0, 0, 0 };
+ static const u8 COMMAND_SMODE_ENTER[] = { 's' };
++static const u8 COMMAND_SMODE_EXIT[] = { 0 };
+ static const u8 COMMAND_TXSTART[] = { 0x26, 0x24, 0x25, 0x03 };
+ 
+ #define REPLY_XMITCOUNT 't'
+@@ -309,12 +310,30 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 		buf[i] = cpu_to_be16(v);
+ 	}
+ 
+-	buf[count] = cpu_to_be16(0xffff);
++	buf[count] = 0xffff;
+ 
+ 	irtoy->tx_buf = buf;
+ 	irtoy->tx_len = size;
+ 	irtoy->emitted = 0;
+ 
++	// There is an issue where if the unit is receiving IR while the
++	// first TXSTART command is sent, the device might end up hanging
++	// with its led on. It does not respond to any command when this
++	// happens. To work around this, re-enter sample mode.
++	err = irtoy_command(irtoy, COMMAND_SMODE_EXIT,
++			    sizeof(COMMAND_SMODE_EXIT), STATE_RESET);
++	if (err) {
++		dev_err(irtoy->dev, "exit sample mode: %d\n", err);
++		return err;
++	}
++
++	err = irtoy_command(irtoy, COMMAND_SMODE_ENTER,
++			    sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND);
++	if (err) {
++		dev_err(irtoy->dev, "enter sample mode: %d\n", err);
++		return err;
++	}
++
+ 	err = irtoy_command(irtoy, COMMAND_TXSTART, sizeof(COMMAND_TXSTART),
+ 			    STATE_TX);
+ 	kfree(buf);
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index e49ca0f7fe9a8..1543a5dd94252 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -582,6 +582,8 @@ static void renesas_sdhi_reset(struct tmio_mmc_host *host)
+ 		/* Unknown why but without polling reset status, it will hang */
+ 		read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
+ 				  false, priv->rstc);
++		/* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
++		sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
+ 		priv->needs_adjust_hs400 = false;
+ 		renesas_sdhi_set_clock(host, host->clk_cache);
+ 	} else if (priv->scc_ctl) {
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 1c122a1f2f97d..66b4f4a9832a4 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2775,8 +2775,8 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+ 	if (err)
+ 		return err;
+ 
+-	/* Port Control 2: don't force a good FCS, set the maximum frame size to
+-	 * 10240 bytes, disable 802.1q tags checking, don't discard tagged or
++	/* Port Control 2: don't force a good FCS, set the MTU size to
++	 * 10222 bytes, disable 802.1q tags checking, don't discard tagged or
+ 	 * untagged frames on this port, do a destination address lookup on all
+ 	 * received packets as usual, disable ARP mirroring and don't send a
+ 	 * copy of all transmitted/received frames on this port to the CPU.
+@@ -2795,7 +2795,7 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+ 		return err;
+ 
+ 	if (chip->info->ops->port_set_jumbo_size) {
+-		err = chip->info->ops->port_set_jumbo_size(chip, port, 10240);
++		err = chip->info->ops->port_set_jumbo_size(chip, port, 10218);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -2885,10 +2885,10 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 
+ 	if (chip->info->ops->port_set_jumbo_size)
+-		return 10240;
++		return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ 	else if (chip->info->ops->set_max_frame_size)
+-		return 1632;
+-	return 1522;
++		return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
++	return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ }
+ 
+ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+@@ -2896,6 +2896,9 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	struct mv88e6xxx_chip *chip = ds->priv;
+ 	int ret = 0;
+ 
++	if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
++		new_mtu += EDSA_HLEN;
++
+ 	mv88e6xxx_reg_lock(chip);
+ 	if (chip->info->ops->port_set_jumbo_size)
+ 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
+@@ -3657,7 +3660,6 @@ static const struct mv88e6xxx_ops mv88e6161_ops = {
+ 	.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ 	.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+ 	.port_set_ether_type = mv88e6351_port_set_ether_type,
+-	.port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
+ 	.port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting,
+ 	.port_pause_limit = mv88e6097_port_pause_limit,
+ 	.port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
+@@ -3682,6 +3684,7 @@ static const struct mv88e6xxx_ops mv88e6161_ops = {
+ 	.avb_ops = &mv88e6165_avb_ops,
+ 	.ptp_ops = &mv88e6165_ptp_ops,
+ 	.phylink_validate = mv88e6185_phylink_validate,
++	.set_max_frame_size = mv88e6185_g1_set_max_frame_size,
+ };
+ 
+ static const struct mv88e6xxx_ops mv88e6165_ops = {
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 675b1f3e43b7b..59f316cc8583e 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -18,6 +18,7 @@
+ #include <linux/timecounter.h>
+ #include <net/dsa.h>
+ 
++#define EDSA_HLEN		8
+ #define MV88E6XXX_N_FID		4096
+ 
+ /* PVT limits for 4-bit port and 5-bit switch */
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.c b/drivers/net/dsa/mv88e6xxx/global1.c
+index 815b0f681d698..5848112036b08 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.c
++++ b/drivers/net/dsa/mv88e6xxx/global1.c
+@@ -232,6 +232,8 @@ int mv88e6185_g1_set_max_frame_size(struct mv88e6xxx_chip *chip, int mtu)
+ 	u16 val;
+ 	int err;
+ 
++	mtu += ETH_HLEN + ETH_FCS_LEN;
++
+ 	err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_CTL1, &val);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index f77e2ee64a607..451028c57af8a 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -1277,6 +1277,8 @@ int mv88e6165_port_set_jumbo_size(struct mv88e6xxx_chip *chip, int port,
+ 	u16 reg;
+ 	int err;
+ 
++	size += VLAN_ETH_HLEN + ETH_FCS_LEN;
++
+ 	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &reg);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index c84f6c226743d..cf00709caea4b 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -541,8 +541,7 @@ static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
+ 
+ 	if (phy_interface_mode_is_rgmii(phy_mode)) {
+ 		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+-		val &= ~ENETC_PM0_IFM_EN_AUTO;
+-		val &= ENETC_PM0_IFM_IFMODE_MASK;
++		val &= ~(ENETC_PM0_IFM_EN_AUTO | ENETC_PM0_IFM_IFMODE_MASK);
+ 		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
+ 		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+ 	}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index e0b7c3c44e7b4..32987bd134a1d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -750,7 +750,6 @@ struct hnae3_tc_info {
+ 	u8 prio_tc[HNAE3_MAX_USER_PRIO]; /* TC indexed by prio */
+ 	u16 tqp_count[HNAE3_MAX_TC];
+ 	u16 tqp_offset[HNAE3_MAX_TC];
+-	unsigned long tc_en; /* bitmap of TC enabled */
+ 	u8 num_tc; /* Total number of enabled TCs */
+ 	bool mqprio_active;
+ };
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9faa3712ea5b8..114692c4f7978 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -620,13 +620,9 @@ static int hns3_nic_set_real_num_queue(struct net_device *netdev)
+ 			return ret;
+ 		}
+ 
+-		for (i = 0; i < HNAE3_MAX_TC; i++) {
+-			if (!test_bit(i, &tc_info->tc_en))
+-				continue;
+-
++		for (i = 0; i < tc_info->num_tc; i++)
+ 			netdev_set_tc_queue(netdev, i, tc_info->tqp_count[i],
+ 					    tc_info->tqp_offset[i]);
+-		}
+ 	}
+ 
+ 	ret = netif_set_real_num_tx_queues(netdev, queue_size);
+@@ -776,6 +772,11 @@ static int hns3_nic_net_open(struct net_device *netdev)
+ 	if (hns3_nic_resetting(netdev))
+ 		return -EBUSY;
+ 
++	if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
++		netdev_warn(netdev, "net open repeatedly!\n");
++		return 0;
++	}
++
+ 	netif_carrier_off(netdev);
+ 
+ 	ret = hns3_nic_set_real_num_queue(netdev);
+@@ -4825,12 +4826,9 @@ static void hns3_init_tx_ring_tc(struct hns3_nic_priv *priv)
+ 	struct hnae3_tc_info *tc_info = &kinfo->tc_info;
+ 	int i;
+ 
+-	for (i = 0; i < HNAE3_MAX_TC; i++) {
++	for (i = 0; i < tc_info->num_tc; i++) {
+ 		int j;
+ 
+-		if (!test_bit(i, &tc_info->tc_en))
+-			continue;
+-
+ 		for (j = 0; j < tc_info->tqp_count[i]; j++) {
+ 			struct hnae3_queue *q;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 82061ab6930fb..83ee0f41322c7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -312,33 +312,8 @@ out:
+ 	return ret_val;
+ }
+ 
+-/**
+- * hns3_self_test - self test
+- * @ndev: net device
+- * @eth_test: test cmd
+- * @data: test result
+- */
+-static void hns3_self_test(struct net_device *ndev,
+-			   struct ethtool_test *eth_test, u64 *data)
++static void hns3_set_selftest_param(struct hnae3_handle *h, int (*st_param)[2])
+ {
+-	struct hns3_nic_priv *priv = netdev_priv(ndev);
+-	struct hnae3_handle *h = priv->ae_handle;
+-	int st_param[HNS3_SELF_TEST_TYPE_NUM][2];
+-	bool if_running = netif_running(ndev);
+-	int test_index = 0;
+-	u32 i;
+-
+-	if (hns3_nic_resetting(ndev)) {
+-		netdev_err(ndev, "dev resetting!");
+-		return;
+-	}
+-
+-	/* Only do offline selftest, or pass by default */
+-	if (eth_test->flags != ETH_TEST_FL_OFFLINE)
+-		return;
+-
+-	netif_dbg(h, drv, ndev, "self test start");
+-
+ 	st_param[HNAE3_LOOP_APP][0] = HNAE3_LOOP_APP;
+ 	st_param[HNAE3_LOOP_APP][1] =
+ 			h->flags & HNAE3_SUPPORT_APP_LOOPBACK;
+@@ -355,13 +330,26 @@ static void hns3_self_test(struct net_device *ndev,
+ 	st_param[HNAE3_LOOP_PHY][0] = HNAE3_LOOP_PHY;
+ 	st_param[HNAE3_LOOP_PHY][1] =
+ 			h->flags & HNAE3_SUPPORT_PHY_LOOPBACK;
++}
++
++static void hns3_selftest_prepare(struct net_device *ndev,
++				  bool if_running, int (*st_param)[2])
++{
++	struct hns3_nic_priv *priv = netdev_priv(ndev);
++	struct hnae3_handle *h = priv->ae_handle;
++
++	if (netif_msg_ifdown(h))
++		netdev_info(ndev, "self test start\n");
++
++	hns3_set_selftest_param(h, st_param);
+ 
+ 	if (if_running)
+ 		ndev->netdev_ops->ndo_stop(ndev);
+ 
+ #if IS_ENABLED(CONFIG_VLAN_8021Q)
+ 	/* Disable the vlan filter for selftest does not support it */
+-	if (h->ae_algo->ops->enable_vlan_filter)
++	if (h->ae_algo->ops->enable_vlan_filter &&
++	    ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ 		h->ae_algo->ops->enable_vlan_filter(h, false);
+ #endif
+ 
+@@ -373,6 +361,36 @@ static void hns3_self_test(struct net_device *ndev,
+ 		h->ae_algo->ops->halt_autoneg(h, true);
+ 
+ 	set_bit(HNS3_NIC_STATE_TESTING, &priv->state);
++}
++
++static void hns3_selftest_restore(struct net_device *ndev, bool if_running)
++{
++	struct hns3_nic_priv *priv = netdev_priv(ndev);
++	struct hnae3_handle *h = priv->ae_handle;
++
++	clear_bit(HNS3_NIC_STATE_TESTING, &priv->state);
++
++	if (h->ae_algo->ops->halt_autoneg)
++		h->ae_algo->ops->halt_autoneg(h, false);
++
++#if IS_ENABLED(CONFIG_VLAN_8021Q)
++	if (h->ae_algo->ops->enable_vlan_filter &&
++	    ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
++		h->ae_algo->ops->enable_vlan_filter(h, true);
++#endif
++
++	if (if_running)
++		ndev->netdev_ops->ndo_open(ndev);
++
++	if (netif_msg_ifdown(h))
++		netdev_info(ndev, "self test end\n");
++}
++
++static void hns3_do_selftest(struct net_device *ndev, int (*st_param)[2],
++			     struct ethtool_test *eth_test, u64 *data)
++{
++	int test_index = 0;
++	u32 i;
+ 
+ 	for (i = 0; i < HNS3_SELF_TEST_TYPE_NUM; i++) {
+ 		enum hnae3_loop loop_type = (enum hnae3_loop)st_param[i][0];
+@@ -391,21 +409,32 @@ static void hns3_self_test(struct net_device *ndev,
+ 
+ 		test_index++;
+ 	}
++}
+ 
+-	clear_bit(HNS3_NIC_STATE_TESTING, &priv->state);
+-
+-	if (h->ae_algo->ops->halt_autoneg)
+-		h->ae_algo->ops->halt_autoneg(h, false);
++/**
++ * hns3_nic_self_test - self test
++ * @ndev: net device
++ * @eth_test: test cmd
++ * @data: test result
++ */
++static void hns3_self_test(struct net_device *ndev,
++			   struct ethtool_test *eth_test, u64 *data)
++{
++	int st_param[HNS3_SELF_TEST_TYPE_NUM][2];
++	bool if_running = netif_running(ndev);
+ 
+-#if IS_ENABLED(CONFIG_VLAN_8021Q)
+-	if (h->ae_algo->ops->enable_vlan_filter)
+-		h->ae_algo->ops->enable_vlan_filter(h, true);
+-#endif
++	if (hns3_nic_resetting(ndev)) {
++		netdev_err(ndev, "dev resetting!");
++		return;
++	}
+ 
+-	if (if_running)
+-		ndev->netdev_ops->ndo_open(ndev);
++	/* Only do offline selftest, or pass by default */
++	if (eth_test->flags != ETH_TEST_FL_OFFLINE)
++		return;
+ 
+-	netif_dbg(h, drv, ndev, "self test end\n");
++	hns3_selftest_prepare(ndev, if_running, st_param);
++	hns3_do_selftest(ndev, st_param, eth_test, data);
++	hns3_selftest_restore(ndev, if_running);
+ }
+ 
+ static void hns3_update_limit_promisc_mode(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+index eb748aa35952c..0f0bf3d503bf5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+@@ -472,7 +472,7 @@ err_csq:
+ 	return ret;
+ }
+ 
+-static int hclge_firmware_compat_config(struct hclge_dev *hdev)
++static int hclge_firmware_compat_config(struct hclge_dev *hdev, bool en)
+ {
+ 	struct hclge_firmware_compat_cmd *req;
+ 	struct hclge_desc desc;
+@@ -480,13 +480,16 @@ static int hclge_firmware_compat_config(struct hclge_dev *hdev)
+ 
+ 	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_IMP_COMPAT_CFG, false);
+ 
+-	req = (struct hclge_firmware_compat_cmd *)desc.data;
++	if (en) {
++		req = (struct hclge_firmware_compat_cmd *)desc.data;
+ 
+-	hnae3_set_bit(compat, HCLGE_LINK_EVENT_REPORT_EN_B, 1);
+-	hnae3_set_bit(compat, HCLGE_NCSI_ERROR_REPORT_EN_B, 1);
+-	if (hnae3_dev_phy_imp_supported(hdev))
+-		hnae3_set_bit(compat, HCLGE_PHY_IMP_EN_B, 1);
+-	req->compat = cpu_to_le32(compat);
++		hnae3_set_bit(compat, HCLGE_LINK_EVENT_REPORT_EN_B, 1);
++		hnae3_set_bit(compat, HCLGE_NCSI_ERROR_REPORT_EN_B, 1);
++		if (hnae3_dev_phy_imp_supported(hdev))
++			hnae3_set_bit(compat, HCLGE_PHY_IMP_EN_B, 1);
++
++		req->compat = cpu_to_le32(compat);
++	}
+ 
+ 	return hclge_cmd_send(&hdev->hw, &desc, 1);
+ }
+@@ -543,7 +546,7 @@ int hclge_cmd_init(struct hclge_dev *hdev)
+ 	/* ask the firmware to enable some features, driver can work without
+ 	 * it.
+ 	 */
+-	ret = hclge_firmware_compat_config(hdev);
++	ret = hclge_firmware_compat_config(hdev, true);
+ 	if (ret)
+ 		dev_warn(&hdev->pdev->dev,
+ 			 "Firmware compatible features not enabled(%d).\n",
+@@ -573,6 +576,8 @@ static void hclge_cmd_uninit_regs(struct hclge_hw *hw)
+ 
+ void hclge_cmd_uninit(struct hclge_dev *hdev)
+ {
++	hclge_firmware_compat_config(hdev, false);
++
+ 	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
+ 	/* wait to ensure that the firmware completes the possible left
+ 	 * over commands.
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 39f56f245d843..c90bfde2aecff 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -224,6 +224,10 @@ static int hclge_ieee_setets(struct hnae3_handle *h, struct ieee_ets *ets)
+ 	}
+ 
+ 	hclge_tm_schd_info_update(hdev, num_tc);
++	if (num_tc > 1)
++		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
++	else
++		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+ 
+ 	ret = hclge_ieee_ets_to_tm_info(hdev, ets);
+ 	if (ret)
+@@ -285,8 +289,7 @@ static int hclge_ieee_setpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ 	u8 i, j, pfc_map, *prio_tc;
+ 	int ret;
+ 
+-	if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE) ||
+-	    hdev->flag & HCLGE_FLAG_MQPRIO_ENABLE)
++	if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))
+ 		return -EINVAL;
+ 
+ 	if (pfc->pfc_en == hdev->tm_info.pfc_en)
+@@ -420,8 +423,6 @@ static int hclge_mqprio_qopt_check(struct hclge_dev *hdev,
+ static void hclge_sync_mqprio_qopt(struct hnae3_tc_info *tc_info,
+ 				   struct tc_mqprio_qopt_offload *mqprio_qopt)
+ {
+-	int i;
+-
+ 	memset(tc_info, 0, sizeof(*tc_info));
+ 	tc_info->num_tc = mqprio_qopt->qopt.num_tc;
+ 	memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
+@@ -430,9 +431,6 @@ static void hclge_sync_mqprio_qopt(struct hnae3_tc_info *tc_info,
+ 	       sizeof_field(struct hnae3_tc_info, tqp_count));
+ 	memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
+ 	       sizeof_field(struct hnae3_tc_info, tqp_offset));
+-
+-	for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+-		set_bit(tc_info->prio_tc[i], &tc_info->tc_en);
+ }
+ 
+ static int hclge_config_tc(struct hclge_dev *hdev,
+@@ -498,12 +496,17 @@ static int hclge_setup_tc(struct hnae3_handle *h,
+ 	return hclge_notify_init_up(hdev);
+ 
+ err_out:
+-	/* roll-back */
+-	memcpy(&kinfo->tc_info, &old_tc_info, sizeof(old_tc_info));
+-	if (hclge_config_tc(hdev, &kinfo->tc_info))
+-		dev_err(&hdev->pdev->dev,
+-			"failed to roll back tc configuration\n");
+-
++	if (!tc) {
++		dev_warn(&hdev->pdev->dev,
++			 "failed to destroy mqprio, will be active after reset, ret = %d\n",
++			 ret);
++	} else {
++		/* roll-back */
++		memcpy(&kinfo->tc_info, &old_tc_info, sizeof(old_tc_info));
++		if (hclge_config_tc(hdev, &kinfo->tc_info))
++			dev_err(&hdev->pdev->dev,
++				"failed to roll back tc configuration\n");
++	}
+ 	hclge_notify_init_up(hdev);
+ 
+ 	return ret;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 90a72c79fec99..9920e76b4f41c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -8701,15 +8701,8 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport,
+ 	}
+ 
+ 	/* check if we just hit the duplicate */
+-	if (!ret) {
+-		dev_warn(&hdev->pdev->dev, "VF %u mac(%pM) exists\n",
+-			 vport->vport_id, addr);
+-		return 0;
+-	}
+-
+-	dev_err(&hdev->pdev->dev,
+-		"PF failed to add unicast entry(%pM) in the MAC table\n",
+-		addr);
++	if (!ret)
++		return -EEXIST;
+ 
+ 	return ret;
+ }
+@@ -8861,7 +8854,13 @@ static void hclge_sync_vport_mac_list(struct hclge_vport *vport,
+ 		} else {
+ 			set_bit(HCLGE_VPORT_STATE_MAC_TBL_CHANGE,
+ 				&vport->state);
+-			break;
++
++			/* If one unicast mac address already exists in hardware,
++			 * we need to check whether other unicast mac addresses
++			 * are new addresses that can be added.
++			 */
++			if (ret != -EEXIST)
++				break;
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 44618cc4cca10..f314dbd3ce11f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -687,12 +687,10 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
+ 
+ 	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		if (hdev->hw_tc_map & BIT(i) && i < kinfo->tc_info.num_tc) {
+-			set_bit(i, &kinfo->tc_info.tc_en);
+ 			kinfo->tc_info.tqp_offset[i] = i * kinfo->rss_size;
+ 			kinfo->tc_info.tqp_count[i] = kinfo->rss_size;
+ 		} else {
+ 			/* Set to default queue if TC is disable */
+-			clear_bit(i, &kinfo->tc_info.tc_en);
+ 			kinfo->tc_info.tqp_offset[i] = 0;
+ 			kinfo->tc_info.tqp_count[i] = 1;
+ 		}
+@@ -729,14 +727,6 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev)
+ 	for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+ 		hdev->tm_info.prio_tc[i] =
+ 			(i >= hdev->tm_info.num_tc) ? 0 : i;
+-
+-	/* DCB is enabled if we have more than 1 TC or pfc_en is
+-	 * non-zero.
+-	 */
+-	if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
+-		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+-	else
+-		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+ }
+ 
+ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+@@ -767,10 +757,10 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ 
+ static void hclge_update_fc_mode_by_dcb_flag(struct hclge_dev *hdev)
+ {
+-	if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE)) {
++	if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en) {
+ 		if (hdev->fc_mode_last_time == HCLGE_FC_PFC)
+ 			dev_warn(&hdev->pdev->dev,
+-				 "DCB is disable, but last mode is FC_PFC\n");
++				 "Only 1 tc used, but last mode is FC_PFC\n");
+ 
+ 		hdev->tm_info.fc_mode = hdev->fc_mode_last_time;
+ 	} else if (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {
+@@ -796,7 +786,7 @@ static void hclge_update_fc_mode(struct hclge_dev *hdev)
+ 	}
+ }
+ 
+-static void hclge_pfc_info_init(struct hclge_dev *hdev)
++void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
+ {
+ 	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3)
+ 		hclge_update_fc_mode(hdev);
+@@ -812,7 +802,7 @@ static void hclge_tm_schd_info_init(struct hclge_dev *hdev)
+ 
+ 	hclge_tm_vport_info_update(hdev);
+ 
+-	hclge_pfc_info_init(hdev);
++	hclge_tm_pfc_info_update(hdev);
+ }
+ 
+ static int hclge_tm_pg_to_pri_map(struct hclge_dev *hdev)
+@@ -1558,19 +1548,6 @@ void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc)
+ 	hclge_tm_schd_info_init(hdev);
+ }
+ 
+-void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
+-{
+-	/* DCB is enabled if we have more than 1 TC or pfc_en is
+-	 * non-zero.
+-	 */
+-	if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
+-		hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+-	else
+-		hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+-
+-	hclge_pfc_info_init(hdev);
+-}
+-
+ int hclge_tm_init_hw(struct hclge_dev *hdev, bool init)
+ {
+ 	int ret;
+@@ -1616,7 +1593,7 @@ int hclge_tm_vport_map_update(struct hclge_dev *hdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!(hdev->flag & HCLGE_FLAG_DCB_ENABLE))
++	if (hdev->tm_info.num_tc == 1 && !hdev->tm_info.pfc_en)
+ 		return 0;
+ 
+ 	return hclge_tm_bp_setup(hdev);
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index 1b0958bd24f6c..1fa68ebe94325 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -2437,11 +2437,15 @@ static void e100_get_drvinfo(struct net_device *netdev,
+ 		sizeof(info->bus_info));
+ }
+ 
+-#define E100_PHY_REGS 0x1C
++#define E100_PHY_REGS 0x1D
+ static int e100_get_regs_len(struct net_device *netdev)
+ {
+ 	struct nic *nic = netdev_priv(netdev);
+-	return 1 + E100_PHY_REGS + sizeof(nic->mem->dump_buf);
++
++	/* We know the number of registers, and the size of the dump buffer.
++	 * Calculate the total size in bytes.
++	 */
++	return (1 + E100_PHY_REGS) * sizeof(u32) + sizeof(nic->mem->dump_buf);
+ }
+ 
+ static void e100_get_regs(struct net_device *netdev,
+@@ -2455,14 +2459,18 @@ static void e100_get_regs(struct net_device *netdev,
+ 	buff[0] = ioread8(&nic->csr->scb.cmd_hi) << 24 |
+ 		ioread8(&nic->csr->scb.cmd_lo) << 16 |
+ 		ioread16(&nic->csr->scb.status);
+-	for (i = E100_PHY_REGS; i >= 0; i--)
+-		buff[1 + E100_PHY_REGS - i] =
+-			mdio_read(netdev, nic->mii.phy_id, i);
++	for (i = 0; i < E100_PHY_REGS; i++)
++		/* Note that we read the registers in reverse order. This
++		 * ordering is the ABI apparently used by ethtool and other
++		 * applications.
++		 */
++		buff[1 + i] = mdio_read(netdev, nic->mii.phy_id,
++					E100_PHY_REGS - 1 - i);
+ 	memset(nic->mem->dump_buf, 0, sizeof(nic->mem->dump_buf));
+ 	e100_exec_cb(nic, NULL, e100_dump);
+ 	msleep(10);
+-	memcpy(&buff[2 + E100_PHY_REGS], nic->mem->dump_buf,
+-		sizeof(nic->mem->dump_buf));
++	memcpy(&buff[1 + E100_PHY_REGS], nic->mem->dump_buf,
++	       sizeof(nic->mem->dump_buf));
+ }
+ 
+ static void e100_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index 4ceaca0f6ce30..21321d1647089 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3204,7 +3204,7 @@ static unsigned int ixgbe_max_channels(struct ixgbe_adapter *adapter)
+ 		max_combined = ixgbe_max_rss_indices(adapter);
+ 	}
+ 
+-	return max_combined;
++	return min_t(int, max_combined, num_online_cpus());
+ }
+ 
+ static void ixgbe_get_channels(struct net_device *dev,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 14aea40da50fb..77350e5fdf977 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -10112,6 +10112,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct bpf_prog *old_prog;
+ 	bool need_reset;
++	int num_queues;
+ 
+ 	if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
+ 		return -EINVAL;
+@@ -10161,11 +10162,14 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+ 	/* Kick start the NAPI context if there is an AF_XDP socket open
+ 	 * on that queue id. This so that receiving will start.
+ 	 */
+-	if (need_reset && prog)
+-		for (i = 0; i < adapter->num_rx_queues; i++)
++	if (need_reset && prog) {
++		num_queues = min_t(int, adapter->num_rx_queues,
++				   adapter->num_xdp_queues);
++		for (i = 0; i < num_queues; i++)
+ 			if (adapter->xdp_ring[i]->xsk_pool)
+ 				(void)ixgbe_xsk_wakeup(adapter->netdev, i,
+ 						       XDP_WAKEUP_RX);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 1e672bc36c4dc..a6878e5f922a7 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -1272,7 +1272,6 @@ static void mlx4_en_do_set_rx_mode(struct work_struct *work)
+ 	if (!netif_carrier_ok(dev)) {
+ 		if (!mlx4_en_QUERY_PORT(mdev, priv->port)) {
+ 			if (priv->port_state.link_state) {
+-				priv->last_link_state = MLX4_DEV_EVENT_PORT_UP;
+ 				netif_carrier_on(dev);
+ 				en_dbg(LINK, priv, "Link Up\n");
+ 			}
+@@ -1560,26 +1559,36 @@ static void mlx4_en_service_task(struct work_struct *work)
+ 	mutex_unlock(&mdev->state_lock);
+ }
+ 
+-static void mlx4_en_linkstate(struct work_struct *work)
++static void mlx4_en_linkstate(struct mlx4_en_priv *priv)
++{
++	struct mlx4_en_port_state *port_state = &priv->port_state;
++	struct mlx4_en_dev *mdev = priv->mdev;
++	struct net_device *dev = priv->dev;
++	bool up;
++
++	if (mlx4_en_QUERY_PORT(mdev, priv->port))
++		port_state->link_state = MLX4_PORT_STATE_DEV_EVENT_PORT_DOWN;
++
++	up = port_state->link_state == MLX4_PORT_STATE_DEV_EVENT_PORT_UP;
++	if (up == netif_carrier_ok(dev))
++		netif_carrier_event(dev);
++	if (!up) {
++		en_info(priv, "Link Down\n");
++		netif_carrier_off(dev);
++	} else {
++		en_info(priv, "Link Up\n");
++		netif_carrier_on(dev);
++	}
++}
++
++static void mlx4_en_linkstate_work(struct work_struct *work)
+ {
+ 	struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv,
+ 						 linkstate_task);
+ 	struct mlx4_en_dev *mdev = priv->mdev;
+-	int linkstate = priv->link_state;
+ 
+ 	mutex_lock(&mdev->state_lock);
+-	/* If observable port state changed set carrier state and
+-	 * report to system log */
+-	if (priv->last_link_state != linkstate) {
+-		if (linkstate == MLX4_DEV_EVENT_PORT_DOWN) {
+-			en_info(priv, "Link Down\n");
+-			netif_carrier_off(priv->dev);
+-		} else {
+-			en_info(priv, "Link Up\n");
+-			netif_carrier_on(priv->dev);
+-		}
+-	}
+-	priv->last_link_state = linkstate;
++	mlx4_en_linkstate(priv);
+ 	mutex_unlock(&mdev->state_lock);
+ }
+ 
+@@ -2082,9 +2091,11 @@ static int mlx4_en_open(struct net_device *dev)
+ 	mlx4_en_clear_stats(dev);
+ 
+ 	err = mlx4_en_start_port(dev);
+-	if (err)
++	if (err) {
+ 		en_err(priv, "Failed starting port:%d\n", priv->port);
+-
++		goto out;
++	}
++	mlx4_en_linkstate(priv);
+ out:
+ 	mutex_unlock(&mdev->state_lock);
+ 	return err;
+@@ -3171,7 +3182,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
+ 	spin_lock_init(&priv->stats_lock);
+ 	INIT_WORK(&priv->rx_mode_task, mlx4_en_do_set_rx_mode);
+ 	INIT_WORK(&priv->restart_task, mlx4_en_restart);
+-	INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate);
++	INIT_WORK(&priv->linkstate_task, mlx4_en_linkstate_work);
+ 	INIT_DELAYED_WORK(&priv->stats_task, mlx4_en_do_get_stats);
+ 	INIT_DELAYED_WORK(&priv->service_task, mlx4_en_service_task);
+ #ifdef CONFIG_RFS_ACCEL
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index f3d1a20201ef3..6bf558c5ec107 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -552,7 +552,6 @@ struct mlx4_en_priv {
+ 
+ 	struct mlx4_hwq_resources res;
+ 	int link_state;
+-	int last_link_state;
+ 	bool port_up;
+ 	int port;
+ 	int registered;
+diff --git a/drivers/net/ethernet/micrel/Makefile b/drivers/net/ethernet/micrel/Makefile
+index 5cc00d22c708c..6ecc4eb30e74b 100644
+--- a/drivers/net/ethernet/micrel/Makefile
++++ b/drivers/net/ethernet/micrel/Makefile
+@@ -4,8 +4,6 @@
+ #
+ 
+ obj-$(CONFIG_KS8842) += ks8842.o
+-obj-$(CONFIG_KS8851) += ks8851.o
+-ks8851-objs = ks8851_common.o ks8851_spi.o
+-obj-$(CONFIG_KS8851_MLL) += ks8851_mll.o
+-ks8851_mll-objs = ks8851_common.o ks8851_par.o
++obj-$(CONFIG_KS8851) += ks8851_common.o ks8851_spi.o
++obj-$(CONFIG_KS8851_MLL) += ks8851_common.o ks8851_par.o
+ obj-$(CONFIG_KSZ884X_PCI) += ksz884x.o
+diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c
+index 831518466de22..0f9c5457b93ef 100644
+--- a/drivers/net/ethernet/micrel/ks8851_common.c
++++ b/drivers/net/ethernet/micrel/ks8851_common.c
+@@ -1057,6 +1057,7 @@ int ks8851_suspend(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_suspend);
+ 
+ int ks8851_resume(struct device *dev)
+ {
+@@ -1070,6 +1071,7 @@ int ks8851_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_resume);
+ #endif
+ 
+ static int ks8851_register_mdiobus(struct ks8851_net *ks, struct device *dev)
+@@ -1243,6 +1245,7 @@ err_reg:
+ err_reg_io:
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(ks8851_probe_common);
+ 
+ int ks8851_remove_common(struct device *dev)
+ {
+@@ -1261,3 +1264,8 @@ int ks8851_remove_common(struct device *dev)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(ks8851_remove_common);
++
++MODULE_DESCRIPTION("KS8851 Network driver");
++MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_stats.c b/drivers/net/ethernet/pensando/ionic/ionic_stats.c
+index 58a854666c62b..c14de5fcedea3 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_stats.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_stats.c
+@@ -380,15 +380,6 @@ static void ionic_sw_stats_get_txq_values(struct ionic_lif *lif, u64 **buf,
+ 					  &ionic_dbg_intr_stats_desc[i]);
+ 		(*buf)++;
+ 	}
+-	for (i = 0; i < IONIC_NUM_DBG_NAPI_STATS; i++) {
+-		**buf = IONIC_READ_STAT64(&txqcq->napi_stats,
+-					  &ionic_dbg_napi_stats_desc[i]);
+-		(*buf)++;
+-	}
+-	for (i = 0; i < IONIC_MAX_NUM_NAPI_CNTR; i++) {
+-		**buf = txqcq->napi_stats.work_done_cntr[i];
+-		(*buf)++;
+-	}
+ 	for (i = 0; i < IONIC_MAX_NUM_SG_CNTR; i++) {
+ 		**buf = txstats->sg_cntr[i];
+ 		(*buf)++;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 2218bc3a624b4..86151a817b79a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -486,6 +486,10 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
+ 		timer_setup(&priv->eee_ctrl_timer, stmmac_eee_ctrl_timer, 0);
+ 		stmmac_set_eee_timer(priv, priv->hw, STMMAC_DEFAULT_LIT_LS,
+ 				     eee_tw_timer);
++		if (priv->hw->xpcs)
++			xpcs_config_eee(priv->hw->xpcs,
++					priv->plat->mult_fact_100ns,
++					true);
+ 	}
+ 
+ 	if (priv->plat->has_gmac4 && priv->tx_lpi_timer <= STMMAC_ET_MAX) {
+diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c
+index e60e38c1f09d3..5e49f7a919b61 100644
+--- a/drivers/net/mhi/net.c
++++ b/drivers/net/mhi/net.c
+@@ -337,7 +337,7 @@ static int mhi_net_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
+ 	/* Start MHI channels */
+ 	err = mhi_prepare_for_transfer(mhi_dev);
+ 	if (err)
+-		goto out_err;
++		return err;
+ 
+ 	/* Number of transfer descriptors determines size of the queue */
+ 	mhi_netdev->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
+@@ -347,7 +347,7 @@ static int mhi_net_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
+ 	else
+ 		err = register_netdev(ndev);
+ 	if (err)
+-		goto out_err;
++		return err;
+ 
+ 	if (mhi_netdev->proto) {
+ 		err = mhi_netdev->proto->init(mhi_netdev);
+@@ -359,8 +359,6 @@ static int mhi_net_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
+ 
+ out_err_proto:
+ 	unregister_netdevice(ndev);
+-out_err:
+-	free_netdev(ndev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
+index e79297a4bae81..27b6a3f507ae6 100644
+--- a/drivers/net/phy/bcm7xxx.c
++++ b/drivers/net/phy/bcm7xxx.c
+@@ -27,7 +27,12 @@
+ #define MII_BCM7XXX_SHD_2_ADDR_CTRL	0xe
+ #define MII_BCM7XXX_SHD_2_CTRL_STAT	0xf
+ #define MII_BCM7XXX_SHD_2_BIAS_TRIM	0x1a
++#define MII_BCM7XXX_SHD_3_PCS_CTRL	0x0
++#define MII_BCM7XXX_SHD_3_PCS_STATUS	0x1
++#define MII_BCM7XXX_SHD_3_EEE_CAP	0x2
+ #define MII_BCM7XXX_SHD_3_AN_EEE_ADV	0x3
++#define MII_BCM7XXX_SHD_3_EEE_LP	0x4
++#define MII_BCM7XXX_SHD_3_EEE_WK_ERR	0x5
+ #define MII_BCM7XXX_SHD_3_PCS_CTRL_2	0x6
+ #define  MII_BCM7XXX_PCS_CTRL_2_DEF	0x4400
+ #define MII_BCM7XXX_SHD_3_AN_STAT	0xb
+@@ -216,25 +221,37 @@ static int bcm7xxx_28nm_resume(struct phy_device *phydev)
+ 	return genphy_config_aneg(phydev);
+ }
+ 
+-static int phy_set_clr_bits(struct phy_device *dev, int location,
+-					int set_mask, int clr_mask)
++static int __phy_set_clr_bits(struct phy_device *dev, int location,
++			      int set_mask, int clr_mask)
+ {
+ 	int v, ret;
+ 
+-	v = phy_read(dev, location);
++	v = __phy_read(dev, location);
+ 	if (v < 0)
+ 		return v;
+ 
+ 	v &= ~clr_mask;
+ 	v |= set_mask;
+ 
+-	ret = phy_write(dev, location, v);
++	ret = __phy_write(dev, location, v);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	return v;
+ }
+ 
++static int phy_set_clr_bits(struct phy_device *dev, int location,
++			    int set_mask, int clr_mask)
++{
++	int ret;
++
++	mutex_lock(&dev->mdio.bus->mdio_lock);
++	ret = __phy_set_clr_bits(dev, location, set_mask, clr_mask);
++	mutex_unlock(&dev->mdio.bus->mdio_lock);
++
++	return ret;
++}
++
+ static int bcm7xxx_28nm_ephy_01_afe_config_init(struct phy_device *phydev)
+ {
+ 	int ret;
+@@ -398,6 +415,93 @@ static int bcm7xxx_28nm_ephy_config_init(struct phy_device *phydev)
+ 	return bcm7xxx_28nm_ephy_apd_enable(phydev);
+ }
+ 
++#define MII_BCM7XXX_REG_INVALID	0xff
++
++static u8 bcm7xxx_28nm_ephy_regnum_to_shd(u16 regnum)
++{
++	switch (regnum) {
++	case MDIO_CTRL1:
++		return MII_BCM7XXX_SHD_3_PCS_CTRL;
++	case MDIO_STAT1:
++		return MII_BCM7XXX_SHD_3_PCS_STATUS;
++	case MDIO_PCS_EEE_ABLE:
++		return MII_BCM7XXX_SHD_3_EEE_CAP;
++	case MDIO_AN_EEE_ADV:
++		return MII_BCM7XXX_SHD_3_AN_EEE_ADV;
++	case MDIO_AN_EEE_LPABLE:
++		return MII_BCM7XXX_SHD_3_EEE_LP;
++	case MDIO_PCS_EEE_WK_ERR:
++		return MII_BCM7XXX_SHD_3_EEE_WK_ERR;
++	default:
++		return MII_BCM7XXX_REG_INVALID;
++	}
++}
++
++static bool bcm7xxx_28nm_ephy_dev_valid(int devnum)
++{
++	return devnum == MDIO_MMD_AN || devnum == MDIO_MMD_PCS;
++}
++
++static int bcm7xxx_28nm_ephy_read_mmd(struct phy_device *phydev,
++				      int devnum, u16 regnum)
++{
++	u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
++	int ret;
++
++	if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
++	    shd == MII_BCM7XXX_REG_INVALID)
++		return -EOPNOTSUPP;
++
++	/* set shadow mode 2 */
++	ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
++				 MII_BCM7XXX_SHD_MODE_2, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Access the desired shadow register address */
++	ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
++	if (ret < 0)
++		goto reset_shadow_mode;
++
++	ret = __phy_read(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT);
++
++reset_shadow_mode:
++	/* reset shadow mode 2 */
++	__phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
++			   MII_BCM7XXX_SHD_MODE_2);
++	return ret;
++}
++
++static int bcm7xxx_28nm_ephy_write_mmd(struct phy_device *phydev,
++				       int devnum, u16 regnum, u16 val)
++{
++	u8 shd = bcm7xxx_28nm_ephy_regnum_to_shd(regnum);
++	int ret;
++
++	if (!bcm7xxx_28nm_ephy_dev_valid(devnum) ||
++	    shd == MII_BCM7XXX_REG_INVALID)
++		return -EOPNOTSUPP;
++
++	/* set shadow mode 2 */
++	ret = __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
++				 MII_BCM7XXX_SHD_MODE_2, 0);
++	if (ret < 0)
++		return ret;
++
++	/* Access the desired shadow register address */
++	ret = __phy_write(phydev, MII_BCM7XXX_SHD_2_ADDR_CTRL, shd);
++	if (ret < 0)
++		goto reset_shadow_mode;
++
++	/* Write the desired value in the shadow register */
++	__phy_write(phydev, MII_BCM7XXX_SHD_2_CTRL_STAT, val);
++
++reset_shadow_mode:
++	/* reset shadow mode 2 */
++	return __phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0,
++				  MII_BCM7XXX_SHD_MODE_2);
++}
++
+ static int bcm7xxx_28nm_ephy_resume(struct phy_device *phydev)
+ {
+ 	int ret;
+@@ -595,6 +699,8 @@ static void bcm7xxx_28nm_remove(struct phy_device *phydev)
+ 	.get_stats	= bcm7xxx_28nm_get_phy_stats,			\
+ 	.probe		= bcm7xxx_28nm_probe,				\
+ 	.remove		= bcm7xxx_28nm_remove,				\
++	.read_mmd	= bcm7xxx_28nm_ephy_read_mmd,			\
++	.write_mmd	= bcm7xxx_28nm_ephy_write_mmd,			\
+ }
+ 
+ #define BCM7XXX_40NM_EPHY(_oui, _name)					\
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 53f034fc2ef79..ee8313a4ac713 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -525,6 +525,10 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	    NULL == bus->read || NULL == bus->write)
+ 		return -EINVAL;
+ 
++	if (bus->parent && bus->parent->of_node)
++		bus->parent->of_node->fwnode.flags |=
++					FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD;
++
+ 	BUG_ON(bus->state != MDIOBUS_ALLOCATED &&
+ 	       bus->state != MDIOBUS_UNREGISTERED);
+ 
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 3c7120ec70798..6a0799f5b05f9 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2353,7 +2353,7 @@ static int remove_net_device(struct hso_device *hso_dev)
+ }
+ 
+ /* Frees our network device */
+-static void hso_free_net_device(struct hso_device *hso_dev, bool bailout)
++static void hso_free_net_device(struct hso_device *hso_dev)
+ {
+ 	int i;
+ 	struct hso_net *hso_net = dev2net(hso_dev);
+@@ -2376,7 +2376,7 @@ static void hso_free_net_device(struct hso_device *hso_dev, bool bailout)
+ 	kfree(hso_net->mux_bulk_tx_buf);
+ 	hso_net->mux_bulk_tx_buf = NULL;
+ 
+-	if (hso_net->net && !bailout)
++	if (hso_net->net)
+ 		free_netdev(hso_net->net);
+ 
+ 	kfree(hso_dev);
+@@ -3136,7 +3136,7 @@ static void hso_free_interface(struct usb_interface *interface)
+ 				rfkill_unregister(rfk);
+ 				rfkill_destroy(rfk);
+ 			}
+-			hso_free_net_device(network_table[i], false);
++			hso_free_net_device(network_table[i]);
+ 		}
+ 	}
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 4c8ee1cff4d47..4cb71dd1998c4 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1178,7 +1178,10 @@ static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ 
+ static void smsc95xx_handle_link_change(struct net_device *net)
+ {
++	struct usbnet *dev = netdev_priv(net);
++
+ 	phy_print_status(net->phydev);
++	usbnet_defer_kevent(dev, EVENT_LINK_CHANGE);
+ }
+ 
+ static int smsc95xx_start_phy(struct usbnet *dev)
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index ffa894f7312a4..0adae76eb8df1 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -1867,8 +1867,8 @@ mac80211_hwsim_beacon(struct hrtimer *timer)
+ 		bcn_int -= data->bcn_delta;
+ 		data->bcn_delta = 0;
+ 	}
+-	hrtimer_forward(&data->beacon_timer, hrtimer_get_expires(timer),
+-			ns_to_ktime(bcn_int * NSEC_PER_USEC));
++	hrtimer_forward_now(&data->beacon_timer,
++			    ns_to_ktime(bcn_int * NSEC_PER_USEC));
+ 	return HRTIMER_RESTART;
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e2374319df61a..9b6f78eac9375 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -980,6 +980,7 @@ EXPORT_SYMBOL_GPL(nvme_cleanup_cmd);
+ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
+ {
+ 	struct nvme_command *cmd = nvme_req(req)->cmd;
++	struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+ 	blk_status_t ret = BLK_STS_OK;
+ 
+ 	if (!(req->rq_flags & RQF_DONTPREP)) {
+@@ -1028,7 +1029,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
+ 		return BLK_STS_IOERR;
+ 	}
+ 
+-	nvme_req(req)->genctr++;
++	if (!(ctrl->quirks & NVME_QUIRK_SKIP_CID_GEN))
++		nvme_req(req)->genctr++;
+ 	cmd->common.command_id = nvme_cid(req);
+ 	trace_nvme_setup_cmd(req, cmd);
+ 	return ret;
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 26511794629bc..12393a72662e5 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -149,6 +149,12 @@ enum nvme_quirks {
+ 	 * 48 bits.
+ 	 */
+ 	NVME_QUIRK_DMA_ADDRESS_BITS_48		= (1 << 16),
++
++	/*
++	 * The controller requires the command_id value be limited, so skip
++	 * encoding the generation sequence number.
++	 */
++	NVME_QUIRK_SKIP_CID_GEN			= (1 << 17),
+ };
+ 
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index c246fdacba2e5..4f22fbafe964f 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3282,7 +3282,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2005),
+ 		.driver_data = NVME_QUIRK_SINGLE_VECTOR |
+ 				NVME_QUIRK_128_BYTES_SQES |
+-				NVME_QUIRK_SHARED_TAGS },
++				NVME_QUIRK_SHARED_TAGS |
++				NVME_QUIRK_SKIP_CID_GEN },
+ 
+ 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ 	{ 0, }
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index a89d24a040af8..9b524969eff74 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2012-2014, 2016-2021 The Linux Foundation. All rights reserved.
+  */
+ 
+ #include <linux/gpio/driver.h>
+@@ -14,6 +14,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+ #include <linux/slab.h>
++#include <linux/spmi.h>
+ #include <linux/types.h>
+ 
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+@@ -171,6 +172,8 @@ struct pmic_gpio_state {
+ 	struct pinctrl_dev *ctrl;
+ 	struct gpio_chip chip;
+ 	struct irq_chip irq;
++	u8 usid;
++	u8 pid_base;
+ };
+ 
+ static const struct pinconf_generic_params pmic_gpio_bindings[] = {
+@@ -949,12 +952,36 @@ static int pmic_gpio_child_to_parent_hwirq(struct gpio_chip *chip,
+ 					   unsigned int *parent_hwirq,
+ 					   unsigned int *parent_type)
+ {
+-	*parent_hwirq = child_hwirq + 0xc0;
++	struct pmic_gpio_state *state = gpiochip_get_data(chip);
++
++	*parent_hwirq = child_hwirq + state->pid_base;
+ 	*parent_type = child_type;
+ 
+ 	return 0;
+ }
+ 
++static void *pmic_gpio_populate_parent_fwspec(struct gpio_chip *chip,
++					     unsigned int parent_hwirq,
++					     unsigned int parent_type)
++{
++	struct pmic_gpio_state *state = gpiochip_get_data(chip);
++	struct irq_fwspec *fwspec;
++
++	fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
++	if (!fwspec)
++		return NULL;
++
++	fwspec->fwnode = chip->irq.parent_domain->fwnode;
++
++	fwspec->param_count = 4;
++	fwspec->param[0] = state->usid;
++	fwspec->param[1] = parent_hwirq;
++	/* param[2] must be left as 0 */
++	fwspec->param[3] = parent_type;
++
++	return fwspec;
++}
++
+ static int pmic_gpio_probe(struct platform_device *pdev)
+ {
+ 	struct irq_domain *parent_domain;
+@@ -965,6 +992,7 @@ static int pmic_gpio_probe(struct platform_device *pdev)
+ 	struct pmic_gpio_pad *pad, *pads;
+ 	struct pmic_gpio_state *state;
+ 	struct gpio_irq_chip *girq;
++	const struct spmi_device *parent_spmi_dev;
+ 	int ret, npins, i;
+ 	u32 reg;
+ 
+@@ -984,6 +1012,9 @@ static int pmic_gpio_probe(struct platform_device *pdev)
+ 
+ 	state->dev = &pdev->dev;
+ 	state->map = dev_get_regmap(dev->parent, NULL);
++	parent_spmi_dev = to_spmi_device(dev->parent);
++	state->usid = parent_spmi_dev->usid;
++	state->pid_base = reg >> 8;
+ 
+ 	pindesc = devm_kcalloc(dev, npins, sizeof(*pindesc), GFP_KERNEL);
+ 	if (!pindesc)
+@@ -1059,7 +1090,7 @@ static int pmic_gpio_probe(struct platform_device *pdev)
+ 	girq->fwnode = of_node_to_fwnode(state->dev->of_node);
+ 	girq->parent_domain = parent_domain;
+ 	girq->child_to_parent_hwirq = pmic_gpio_child_to_parent_hwirq;
+-	girq->populate_parent_alloc_arg = gpiochip_populate_parent_fwspec_fourcell;
++	girq->populate_parent_alloc_arg = pmic_gpio_populate_parent_fwspec;
+ 	girq->child_offset_to_irq = pmic_gpio_child_offset_to_irq;
+ 	girq->child_irq_domain_ops.translate = pmic_gpio_domain_translate;
+ 
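
The pinctrl-spmi-gpio change above stops hard-coding the 0xc0 peripheral base: the GPIO block's peripheral ID is now derived from the upper byte of its "reg" address, and the owning SPMI device's USID goes into the 4-cell parent fwspec, so PMICs whose GPIO block does not sit at 0xc000 (or that are not the first device on the bus) get correct parent interrupts. A minimal sketch of the arithmetic, with illustrative values only:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t reg = 0xc000;             /* example GPIO block base address */
	uint8_t usid = 0;                  /* SPMI unique slave ID of the PMIC */
	uint8_t pid_base = reg >> 8;       /* peripheral ID, here 0xc0 */
	unsigned int child_hwirq = 5;      /* GPIO index within the block */

	/* 4-cell parent fwspec: usid, peripheral hwirq, 0, trigger type */
	printf("fwspec = <%u 0x%x 0 type>\n", usid, pid_base + child_hwirq);
	return 0;
}
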
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 2e4e97a626a51..7b03b497d93b2 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -118,12 +118,30 @@ static const struct dmi_system_id dmi_vgbs_allow_list[] = {
+ 	{ }
+ };
+ 
++/*
++ * Some devices, even non-convertible ones, can send incorrect SW_TABLET_MODE
++ * reports. Accept such reports only from devices in this list.
++ */
++static const struct dmi_system_id dmi_auto_add_switch[] = {
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "31" /* Convertible */),
++		},
++	},
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "32" /* Detachable */),
++		},
++	},
++	{} /* Array terminator */
++};
++
+ struct intel_hid_priv {
+ 	struct input_dev *input_dev;
+ 	struct input_dev *array;
+ 	struct input_dev *switches;
+ 	bool wakeup_mode;
+-	bool dual_accel;
++	bool auto_add_switch;
+ };
+ 
+ #define HID_EVENT_FILTER_UUID	"eeec56b3-4442-408f-a792-4edd4d758054"
+@@ -452,10 +470,8 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 	 * Some convertible have unreliable VGBS return which could cause incorrect
+ 	 * SW_TABLET_MODE report, in these cases we enable support when receiving
+ 	 * the first event instead of during driver setup.
+-	 *
+-	 * See dual_accel_detect.h for more info on the dual_accel check.
+ 	 */
+-	if (!priv->switches && !priv->dual_accel && (event == 0xcc || event == 0xcd)) {
++	if (!priv->switches && priv->auto_add_switch && (event == 0xcc || event == 0xcd)) {
+ 		dev_info(&device->dev, "switch event received, enable switches supports\n");
+ 		err = intel_hid_switches_setup(device);
+ 		if (err)
+@@ -596,7 +612,8 @@ static int intel_hid_probe(struct platform_device *device)
+ 		return -ENOMEM;
+ 	dev_set_drvdata(&device->dev, priv);
+ 
+-	priv->dual_accel = dual_accel_detect();
++	/* See dual_accel_detect.h for more info on the dual_accel check. */
++	priv->auto_add_switch = dmi_check_system(dmi_auto_add_switch) && !dual_accel_detect();
+ 
+ 	err = intel_hid_input_setup(device);
+ 	if (err) {
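
The intel-hid change inverts the tablet-mode heuristic: instead of trusting every device that lacks a dual accelerometer, SW_TABLET_MODE is only auto-added on chassis types that SMBIOS marks as Convertible (31) or Detachable (32). A userspace sketch of the same gate, reading the field the kernel matches through DMI; the path and logic here are illustrative, not the driver's code:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[8] = "";
	FILE *f = fopen("/sys/class/dmi/id/chassis_type", "r");

	if (f) {
		if (!fgets(buf, sizeof(buf), f))
			buf[0] = '\0';
		fclose(f);
	}
	buf[strcspn(buf, "\n")] = '\0';

	/* "31" = Convertible, "32" = Detachable, per the SMBIOS spec */
	int tablet_capable = !strcmp(buf, "31") || !strcmp(buf, "32");
	printf("auto-add SW_TABLET_MODE switch: %s\n",
	       tablet_capable ? "yes" : "no");
	return 0;
}
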
+diff --git a/drivers/ptp/ptp_kvm_x86.c b/drivers/ptp/ptp_kvm_x86.c
+index 3dd519dfc473c..d0096cd7096a8 100644
+--- a/drivers/ptp/ptp_kvm_x86.c
++++ b/drivers/ptp/ptp_kvm_x86.c
+@@ -15,8 +15,6 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <linux/ptp_kvm.h>
+ 
+-struct pvclock_vsyscall_time_info *hv_clock;
+-
+ static phys_addr_t clock_pair_gpa;
+ static struct kvm_clock_pairing clock_pair;
+ 
+@@ -28,8 +26,7 @@ int kvm_arch_ptp_init(void)
+ 		return -ENODEV;
+ 
+ 	clock_pair_gpa = slow_virt_to_phys(&clock_pair);
+-	hv_clock = pvclock_get_pvti_cpu0_va();
+-	if (!hv_clock)
++	if (!pvclock_get_pvti_cpu0_va())
+ 		return -ENODEV;
+ 
+ 	ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
+@@ -64,10 +61,8 @@ int kvm_arch_ptp_get_crosststamp(u64 *cycle, struct timespec64 *tspec,
+ 	struct pvclock_vcpu_time_info *src;
+ 	unsigned int version;
+ 	long ret;
+-	int cpu;
+ 
+-	cpu = smp_processor_id();
+-	src = &hv_clock[cpu].pvti;
++	src = this_cpu_pvti();
+ 
+ 	do {
+ 		/*
+diff --git a/drivers/s390/cio/ccwgroup.c b/drivers/s390/cio/ccwgroup.c
+index 9748165e08e96..f19f02e751155 100644
+--- a/drivers/s390/cio/ccwgroup.c
++++ b/drivers/s390/cio/ccwgroup.c
+@@ -77,12 +77,13 @@ EXPORT_SYMBOL(ccwgroup_set_online);
+ /**
+  * ccwgroup_set_offline() - disable a ccwgroup device
+  * @gdev: target ccwgroup device
++ * @call_gdrv: Call the registered gdrv set_offline function
+  *
+  * This function attempts to put the ccwgroup device into the offline state.
+  * Returns:
+  *  %0 on success and a negative error value on failure.
+  */
+-int ccwgroup_set_offline(struct ccwgroup_device *gdev)
++int ccwgroup_set_offline(struct ccwgroup_device *gdev, bool call_gdrv)
+ {
+ 	struct ccwgroup_driver *gdrv = to_ccwgroupdrv(gdev->dev.driver);
+ 	int ret = -EINVAL;
+@@ -91,11 +92,16 @@ int ccwgroup_set_offline(struct ccwgroup_device *gdev)
+ 		return -EAGAIN;
+ 	if (gdev->state == CCWGROUP_OFFLINE)
+ 		goto out;
++	if (!call_gdrv) {
++		ret = 0;
++		goto offline;
++	}
+ 	if (gdrv->set_offline)
+ 		ret = gdrv->set_offline(gdev);
+ 	if (ret)
+ 		goto out;
+ 
++offline:
+ 	gdev->state = CCWGROUP_OFFLINE;
+ out:
+ 	atomic_set(&gdev->onoff, 0);
+@@ -124,7 +130,7 @@ static ssize_t ccwgroup_online_store(struct device *dev,
+ 	if (value == 1)
+ 		ret = ccwgroup_set_online(gdev);
+ 	else if (value == 0)
+-		ret = ccwgroup_set_offline(gdev);
++		ret = ccwgroup_set_offline(gdev, true);
+ 	else
+ 		ret = -EINVAL;
+ out:
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index f4d554ea0c930..52bdb2c8c0855 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -877,7 +877,6 @@ struct qeth_card {
+ 	struct napi_struct napi;
+ 	struct qeth_rx rx;
+ 	struct delayed_work buffer_reclaim_work;
+-	struct work_struct close_dev_work;
+ };
+ 
+ static inline bool qeth_card_hw_is_reachable(struct qeth_card *card)
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 51f7f4e680c34..f5bad10f3f44f 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -71,15 +71,6 @@ static void qeth_issue_next_read_cb(struct qeth_card *card,
+ static int qeth_qdio_establish(struct qeth_card *);
+ static void qeth_free_qdio_queues(struct qeth_card *card);
+ 
+-static void qeth_close_dev_handler(struct work_struct *work)
+-{
+-	struct qeth_card *card;
+-
+-	card = container_of(work, struct qeth_card, close_dev_work);
+-	QETH_CARD_TEXT(card, 2, "cldevhdl");
+-	ccwgroup_set_offline(card->gdev);
+-}
+-
+ static const char *qeth_get_cardname(struct qeth_card *card)
+ {
+ 	if (IS_VM_NIC(card)) {
+@@ -797,10 +788,12 @@ static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card,
+ 	case IPA_CMD_STOPLAN:
+ 		if (cmd->hdr.return_code == IPA_RC_VEPA_TO_VEB_TRANSITION) {
+ 			dev_err(&card->gdev->dev,
+-				"Interface %s is down because the adjacent port is no longer in reflective relay mode\n",
++				"Adjacent port of interface %s is no longer in reflective relay mode, trigger recovery\n",
+ 				netdev_name(card->dev));
+-			schedule_work(&card->close_dev_work);
++			/* Set offline, then probably fail to set online: */
++			qeth_schedule_recovery(card);
+ 		} else {
++			/* stay online for subsequent STARTLAN */
+ 			dev_warn(&card->gdev->dev,
+ 				 "The link for interface %s on CHPID 0x%X failed\n",
+ 				 netdev_name(card->dev), card->info.chpid);
+@@ -1559,7 +1552,6 @@ static void qeth_setup_card(struct qeth_card *card)
+ 	INIT_LIST_HEAD(&card->ipato.entries);
+ 	qeth_init_qdio_info(card);
+ 	INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
+-	INIT_WORK(&card->close_dev_work, qeth_close_dev_handler);
+ 	hash_init(card->rx_mode_addrs);
+ 	hash_init(card->local_addrs4);
+ 	hash_init(card->local_addrs6);
+@@ -5556,7 +5548,8 @@ static int qeth_do_reset(void *data)
+ 		dev_info(&card->gdev->dev,
+ 			 "Device successfully recovered!\n");
+ 	} else {
+-		ccwgroup_set_offline(card->gdev);
++		qeth_set_offline(card, disc, true);
++		ccwgroup_set_offline(card->gdev, false);
+ 		dev_warn(&card->gdev->dev,
+ 			 "The qeth device driver failed to recover an error on the device\n");
+ 	}
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index d7cdd9cfe485a..3dbe592ca97a1 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -2218,7 +2218,6 @@ static void qeth_l2_remove_device(struct ccwgroup_device *gdev)
+ 	if (gdev->state == CCWGROUP_ONLINE)
+ 		qeth_set_offline(card, card->discipline, false);
+ 
+-	cancel_work_sync(&card->close_dev_work);
+ 	if (card->dev->reg_state == NETREG_REGISTERED)
+ 		unregister_netdev(card->dev);
+ }
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index f0d6f205c53cd..5ba38499e3e29 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1965,7 +1965,6 @@ static void qeth_l3_remove_device(struct ccwgroup_device *cgdev)
+ 	if (cgdev->state == CCWGROUP_ONLINE)
+ 		qeth_set_offline(card, card->discipline, false);
+ 
+-	cancel_work_sync(&card->close_dev_work);
+ 	if (card->dev->reg_state == NETREG_REGISTERED)
+ 		unregister_netdev(card->dev);
+ 
+diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c
+index 390b07bf92b97..ccbded3353bd0 100644
+--- a/drivers/scsi/csiostor/csio_init.c
++++ b/drivers/scsi/csiostor/csio_init.c
+@@ -1254,3 +1254,4 @@ MODULE_DEVICE_TABLE(pci, csio_pci_tbl);
+ MODULE_VERSION(CSIO_DRV_VERSION);
+ MODULE_FIRMWARE(FW_FNAME_T5);
+ MODULE_FIRMWARE(FW_FNAME_T6);
++MODULE_SOFTDEP("pre: cxgb4");
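
MODULE_SOFTDEP records an ordering hint in the module's metadata: modprobe (visible via "modinfo csiostor") will load cxgb4 before csiostor, without creating a hard symbol dependency. A minimal module sketch showing where such a declaration lives, assuming the usual kbuild setup:

#include <linux/module.h>

/* Soft dependency: modprobe loads cxgb4 first, but insmod'ing this
 * module directly would still succeed - it is a hint, not a symbol
 * dependency.
 */
MODULE_SOFTDEP("pre: cxgb4");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("softdep illustration");
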
+diff --git a/drivers/scsi/elx/libefc/efc_device.c b/drivers/scsi/elx/libefc/efc_device.c
+index 725ca2a23fb2a..52be01333c6e3 100644
+--- a/drivers/scsi/elx/libefc/efc_device.c
++++ b/drivers/scsi/elx/libefc/efc_device.c
+@@ -928,22 +928,21 @@ __efc_d_wait_topology_notify(struct efc_sm_ctx *ctx,
+ 		break;
+ 
+ 	case EFC_EVT_NPORT_TOPOLOGY_NOTIFY: {
+-		enum efc_nport_topology topology =
+-					(enum efc_nport_topology)arg;
++		enum efc_nport_topology *topology = arg;
+ 
+ 		WARN_ON(node->nport->domain->attached);
+ 
+ 		WARN_ON(node->send_ls_acc != EFC_NODE_SEND_LS_ACC_PLOGI);
+ 
+ 		node_printf(node, "topology notification, topology=%d\n",
+-			    topology);
++			    *topology);
+ 
+ 		/* At the time the PLOGI was received, the topology was unknown,
+ 		 * so we didn't know which node would perform the domain attach:
+ 		 * 1. The node from which the PLOGI was sent (p2p) or
+ 		 * 2. The node to which the FLOGI was sent (fabric).
+ 		 */
+-		if (topology == EFC_NPORT_TOPO_P2P) {
++		if (*topology == EFC_NPORT_TOPO_P2P) {
+ 			/* if this is p2p, need to attach to the domain using
+ 			 * the d_id from the PLOGI received
+ 			 */
+diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c
+index d397220d9e543..3270ce40196c6 100644
+--- a/drivers/scsi/elx/libefc/efc_fabric.c
++++ b/drivers/scsi/elx/libefc/efc_fabric.c
+@@ -107,7 +107,6 @@ void
+ efc_fabric_notify_topology(struct efc_node *node)
+ {
+ 	struct efc_node *tmp_node;
+-	enum efc_nport_topology topology = node->nport->topology;
+ 	unsigned long index;
+ 
+ 	/*
+@@ -118,7 +117,7 @@ efc_fabric_notify_topology(struct efc_node *node)
+ 		if (tmp_node != node) {
+ 			efc_node_post_event(tmp_node,
+ 					    EFC_EVT_NPORT_TOPOLOGY_NOTIFY,
+-					    (void *)topology);
++					    &node->nport->topology);
+ 		}
+ 	}
+ }
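
The libefc fix above is a classic void-pointer pitfall: the notifier used to smuggle the enum value itself through the void *arg, while the receiver needs a real pointer; passing the address of the field keeps both sides type-correct. A small standalone illustration, with names invented for the example:

#include <stdio.h>

enum topo { TOPO_UNKNOWN, TOPO_P2P, TOPO_FABRIC };

/* The event callback receives an opaque void *arg, as in the driver. */
static void notify(void *arg)
{
	enum topo *topology = arg;   /* fixed style: arg points at the data */
	printf("topology=%d\n", *topology);
}

int main(void)
{
	enum topo t = TOPO_P2P;

	/* The old style smuggled the value itself through the pointer:
	 *     notify((void *)t);
	 * which round-trips an integer through a pointer type and breaks
	 * as soon as the callee dereferences arg. Passing &t is safe.
	 */
	notify(&t);
	return 0;
}
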
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 2f67ec1df3e66..82b6f4c2eb4a8 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3935,7 +3935,6 @@ struct qla_hw_data {
+ 		uint32_t	scm_supported_f:1;
+ 				/* Enabled in Driver */
+ 		uint32_t	scm_enabled:1;
+-		uint32_t	max_req_queue_warned:1;
+ 		uint32_t	plogi_template_valid:1;
+ 		uint32_t	port_isolated:1;
+ 	} flags;
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index d9fb093a60a1f..2aa8f519aae62 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -4201,6 +4201,8 @@ skip_msi:
+ 		ql_dbg(ql_dbg_init, vha, 0x0125,
+ 		    "INTa mode: Enabled.\n");
+ 		ha->flags.mr_intr_valid = 1;
++		/* Set max_qpairs to 0, as MSI-X and MSI are not enabled */
++		ha->max_qpairs = 0;
+ 	}
+ 
+ clear_risc_ints:
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index a7259733e4709..9316d7d91e2ab 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -109,19 +109,24 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (ha->queue_pair_map[qidx]) {
+-		*handle = ha->queue_pair_map[qidx];
+-		ql_log(ql_log_info, vha, 0x2121,
+-		    "Returning existing qpair of %p for idx=%x\n",
+-		    *handle, qidx);
+-		return 0;
+-	}
++	/* Use base qpair if max_qpairs is 0 */
++	if (!ha->max_qpairs) {
++		qpair = ha->base_qpair;
++	} else {
++		if (ha->queue_pair_map[qidx]) {
++			*handle = ha->queue_pair_map[qidx];
++			ql_log(ql_log_info, vha, 0x2121,
++			       "Returning existing qpair of %p for idx=%x\n",
++			       *handle, qidx);
++			return 0;
++		}
+ 
+-	qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true);
+-	if (qpair == NULL) {
+-		ql_log(ql_log_warn, vha, 0x2122,
+-		    "Failed to allocate qpair\n");
+-		return -EINVAL;
++		qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true);
++		if (!qpair) {
++			ql_log(ql_log_warn, vha, 0x2122,
++			       "Failed to allocate qpair\n");
++			return -EINVAL;
++		}
+ 	}
+ 	*handle = qpair;
+ 
+@@ -728,18 +733,9 @@ int qla_nvme_register_hba(struct scsi_qla_host *vha)
+ 
+ 	WARN_ON(vha->nvme_local_port);
+ 
+-	if (ha->max_req_queues < 3) {
+-		if (!ha->flags.max_req_queue_warned)
+-			ql_log(ql_log_info, vha, 0x2120,
+-			       "%s: Disabling FC-NVME due to lack of free queue pairs (%d).\n",
+-			       __func__, ha->max_req_queues);
+-		ha->flags.max_req_queue_warned = 1;
+-		return ret;
+-	}
+-
+ 	qla_nvme_fc_transport.max_hw_queues =
+ 	    min((uint8_t)(qla_nvme_fc_transport.max_hw_queues),
+-		(uint8_t)(ha->max_req_queues - 2));
++		(uint8_t)(ha->max_qpairs ? ha->max_qpairs : 1));
+ 
+ 	pinfo.node_name = wwn_to_u64(vha->node_name);
+ 	pinfo.port_name = wwn_to_u64(vha->port_name);
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index e6c334bfb4c2c..40acca04d03bb 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -128,6 +128,81 @@ static int ufs_intel_link_startup_notify(struct ufs_hba *hba,
+ 	return err;
+ }
+ 
++static int ufs_intel_set_lanes(struct ufs_hba *hba, u32 lanes)
++{
++	struct ufs_pa_layer_attr pwr_info = hba->pwr_info;
++	int ret;
++
++	pwr_info.lane_rx = lanes;
++	pwr_info.lane_tx = lanes;
++	ret = ufshcd_config_pwr_mode(hba, &pwr_info);
++	if (ret)
++		dev_err(hba->dev, "%s: Setting %u lanes, err = %d\n",
++			__func__, lanes, ret);
++	return ret;
++}
++
++static int ufs_intel_lkf_pwr_change_notify(struct ufs_hba *hba,
++				enum ufs_notify_change_status status,
++				struct ufs_pa_layer_attr *dev_max_params,
++				struct ufs_pa_layer_attr *dev_req_params)
++{
++	int err = 0;
++
++	switch (status) {
++	case PRE_CHANGE:
++		if (ufshcd_is_hs_mode(dev_max_params) &&
++		    (hba->pwr_info.lane_rx != 2 || hba->pwr_info.lane_tx != 2))
++			ufs_intel_set_lanes(hba, 2);
++		memcpy(dev_req_params, dev_max_params, sizeof(*dev_req_params));
++		break;
++	case POST_CHANGE:
++		if (ufshcd_is_hs_mode(dev_req_params)) {
++			u32 peer_granularity;
++
++			usleep_range(1000, 1250);
++			err = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_GRANULARITY),
++						  &peer_granularity);
++		}
++		break;
++	default:
++		break;
++	}
++
++	return err;
++}
++
++static int ufs_intel_lkf_apply_dev_quirks(struct ufs_hba *hba)
++{
++	u32 granularity, peer_granularity;
++	u32 pa_tactivate, peer_pa_tactivate;
++	int ret;
++
++	ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_GRANULARITY), &granularity);
++	if (ret)
++		goto out;
++
++	ret = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_GRANULARITY), &peer_granularity);
++	if (ret)
++		goto out;
++
++	ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_TACTIVATE), &pa_tactivate);
++	if (ret)
++		goto out;
++
++	ret = ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_TACTIVATE), &peer_pa_tactivate);
++	if (ret)
++		goto out;
++
++	if (granularity == peer_granularity) {
++		u32 new_peer_pa_tactivate = pa_tactivate + 2;
++
++		ret = ufshcd_dme_peer_set(hba, UIC_ARG_MIB(PA_TACTIVATE), new_peer_pa_tactivate);
++	}
++out:
++	return ret;
++}
++
+ #define INTEL_ACTIVELTR		0x804
+ #define INTEL_IDLELTR		0x808
+ 
+@@ -351,6 +426,7 @@ static int ufs_intel_lkf_init(struct ufs_hba *hba)
+ 	struct ufs_host *ufs_host;
+ 	int err;
+ 
++	hba->nop_out_timeout = 200;
+ 	hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
+ 	hba->caps |= UFSHCD_CAP_CRYPTO;
+ 	err = ufs_intel_common_init(hba);
+@@ -381,6 +457,8 @@ static struct ufs_hba_variant_ops ufs_intel_lkf_hba_vops = {
+ 	.exit			= ufs_intel_common_exit,
+ 	.hce_enable_notify	= ufs_intel_hce_enable_notify,
+ 	.link_startup_notify	= ufs_intel_link_startup_notify,
++	.pwr_change_notify	= ufs_intel_lkf_pwr_change_notify,
++	.apply_dev_quirks	= ufs_intel_lkf_apply_dev_quirks,
+ 	.resume			= ufs_intel_resume,
+ 	.device_reset		= ufs_intel_device_reset,
+ };
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 3a204324151a8..a3f5af088122e 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -330,8 +330,7 @@ static void ufshcd_add_query_upiu_trace(struct ufs_hba *hba,
+ static void ufshcd_add_tm_upiu_trace(struct ufs_hba *hba, unsigned int tag,
+ 				     enum ufs_trace_str_t str_t)
+ {
+-	int off = (int)tag - hba->nutrs;
+-	struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[off];
++	struct utp_task_req_desc *descp = &hba->utmrdl_base_addr[tag];
+ 
+ 	if (!trace_ufshcd_upiu_enabled())
+ 		return;
+@@ -4767,7 +4766,7 @@ static int ufshcd_verify_dev_init(struct ufs_hba *hba)
+ 	mutex_lock(&hba->dev_cmd.lock);
+ 	for (retries = NOP_OUT_RETRIES; retries > 0; retries--) {
+ 		err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_NOP,
+-					       NOP_OUT_TIMEOUT);
++					  hba->nop_out_timeout);
+ 
+ 		if (!err || err == -ETIMEDOUT)
+ 			break;
+@@ -9403,6 +9402,7 @@ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle)
+ 	hba->dev = dev;
+ 	*hba_handle = hba;
+ 	hba->dev_ref_clk_freq = REF_CLK_FREQ_INVAL;
++	hba->nop_out_timeout = NOP_OUT_TIMEOUT;
+ 
+ 	INIT_LIST_HEAD(&hba->clk_list_head);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 86d4765a17b83..aa95deffb873a 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -814,6 +814,7 @@ struct ufs_hba {
+ 	/* Device management request data */
+ 	struct ufs_dev_cmd dev_cmd;
+ 	ktime_t last_dme_cmd_tstamp;
++	int nop_out_timeout;
+ 
+ 	/* Keeps information of the UFS device connected to this host */
+ 	struct ufs_dev_info dev_info;
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index 31d8449ca1d2d..fc769c52c6d30 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -918,7 +918,7 @@ static int hantro_probe(struct platform_device *pdev)
+ 		if (!vpu->variant->irqs[i].handler)
+ 			continue;
+ 
+-		if (vpu->variant->num_clocks > 1) {
++		if (vpu->variant->num_irqs > 1) {
+ 			irq_name = vpu->variant->irqs[i].name;
+ 			irq = platform_get_irq_byname(vpu->pdev, irq_name);
+ 		} else {
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+index 32c13ecb22d83..a8168ac2fbd0c 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+@@ -135,7 +135,7 @@ void cedrus_prepare_format(struct v4l2_pix_format *pix_fmt)
+ 		sizeimage = bytesperline * height;
+ 
+ 		/* Chroma plane size. */
+-		sizeimage += bytesperline * height / 2;
++		sizeimage += bytesperline * ALIGN(height, 64) / 2;
+ 
+ 		break;
+ 
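
The cedrus fix accounts for the decoder writing the luma plane with its height padded to a multiple of 64, so the chroma plane must start after the padded luma rows and sizeimage must cover the padding. Worked numbers for a hypothetical 1920x1080 NV12 buffer; a sketch, not driver code:

#include <stdio.h>

/* Generic round-up; the kernel's ALIGN() is mask-based and needs a
 * power-of-two alignment, which 64 is.
 */
#define ALIGN(x, a) (((x) + (a) - 1) / (a) * (a))

int main(void)
{
	unsigned int bytesperline = 1920, height = 1080;  /* 8-bit NV12 */
	unsigned int luma   = bytesperline * height;
	unsigned int chroma = bytesperline * ALIGN(height, 64) / 2; /* 1088 rows */

	printf("luma=%u chroma=%u sizeimage=%u\n", luma, chroma, luma + chroma);
	return 0;
}
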
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index cb72393f92d3a..153d4a88ec9ac 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1219,8 +1219,25 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 	new_row_size = new_cols << 1;
+ 	new_screen_size = new_row_size * new_rows;
+ 
+-	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+-		return 0;
++	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows) {
++		/*
++		 * This function is being called here to cover the case
++		 * where userspace calls FBIOPUT_VSCREENINFO twice,
++		 * passing the same fb_var_screeninfo whose yres/xres
++		 * fields are not a multiple of vc_font.height and whose
++		 * yres_virtual/xres_virtual fields are smaller than
++		 * vc_font.height and yres/xres.
++		 * In the second call, the struct fb_var_screeninfo is not
++		 * modified by the underlying driver because of the if
++		 * above, which causes fbcon_display->vrows to become
++		 * negative and eventually leads to an out-of-bounds
++		 * access by the imageblit function.
++		 * To give the correct values to the struct, and to avoid
++		 * dealing with possible errors from the code below, we
++		 * call resize_screen here as well.
++		 */
++		return resize_screen(vc, new_cols, new_rows, user);
++	}
+ 
+ 	if (new_screen_size > KMALLOC_MAX_SIZE || !new_screen_size)
+ 		return -EINVAL;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 546dfc1e2349c..71cf3f503f16b 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1677,7 +1677,7 @@ config WDT_MTX1
+ 
+ config SIBYTE_WDOG
+ 	tristate "Sibyte SoC hardware watchdog"
+-	depends on CPU_SB1 || (MIPS && COMPILE_TEST)
++	depends on CPU_SB1
+ 	help
+ 	  Watchdog driver for the built in watchdog hardware in Sibyte
+ 	  SoC processors.  There are apparently two watchdog timers
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 439ed81e755af..964be729ed0a6 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -630,7 +630,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
+ 
+ 			vaddr = eppnt->p_vaddr;
+ 			if (interp_elf_ex->e_type == ET_EXEC || load_addr_set)
+-				elf_type |= MAP_FIXED_NOREPLACE;
++				elf_type |= MAP_FIXED;
+ 			else if (no_base && interp_elf_ex->e_type == ET_DYN)
+ 				load_addr = -vaddr;
+ 
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 8129a430d789d..2f117c57160dc 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -528,7 +528,7 @@ void debugfs_create_file_size(const char *name, umode_t mode,
+ {
+ 	struct dentry *de = debugfs_create_file(name, mode, parent, data, fops);
+ 
+-	if (de)
++	if (!IS_ERR(de))
+ 		d_inode(de)->i_size = file_size;
+ }
+ EXPORT_SYMBOL_GPL(debugfs_create_file_size);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index ffb295aa891c0..74b172a4adda3 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -551,7 +551,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 	struct dir_private_info *info = file->private_data;
+ 	struct inode *inode = file_inode(file);
+ 	struct fname *fname;
+-	int	ret;
++	int ret = 0;
+ 
+ 	if (!info) {
+ 		info = ext4_htree_create_dir_info(file, ctx->pos);
+@@ -599,7 +599,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 						   info->curr_minor_hash,
+ 						   &info->next_hash);
+ 			if (ret < 0)
+-				return ret;
++				goto finished;
+ 			if (ret == 0) {
+ 				ctx->pos = ext4_get_htree_eof(file);
+ 				break;
+@@ -630,7 +630,7 @@ static int ext4_dx_readdir(struct file *file, struct dir_context *ctx)
+ 	}
+ finished:
+ 	info->last_pos = ctx->pos;
+-	return 0;
++	return ret < 0 ? ret : 0;
+ }
+ 
+ static int ext4_release_dir(struct inode *inode, struct file *filp)
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 92ad64b89d9b5..b1933e3513d60 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5908,7 +5908,7 @@ void ext4_ext_replay_shrink_inode(struct inode *inode, ext4_lblk_t end)
+ }
+ 
+ /* Check if *cur is a hole and if it is, skip it */
+-static void skip_hole(struct inode *inode, ext4_lblk_t *cur)
++static int skip_hole(struct inode *inode, ext4_lblk_t *cur)
+ {
+ 	int ret;
+ 	struct ext4_map_blocks map;
+@@ -5917,9 +5917,12 @@ static void skip_hole(struct inode *inode, ext4_lblk_t *cur)
+ 	map.m_len = ((inode->i_size) >> inode->i_sb->s_blocksize_bits) - *cur;
+ 
+ 	ret = ext4_map_blocks(NULL, inode, &map, 0);
++	if (ret < 0)
++		return ret;
+ 	if (ret != 0)
+-		return;
++		return 0;
+ 	*cur = *cur + map.m_len;
++	return 0;
+ }
+ 
+ /* Count number of blocks used by this inode and update i_blocks */
+@@ -5968,7 +5971,9 @@ int ext4_ext_replay_set_iblocks(struct inode *inode)
+ 	 * iblocks by total number of differences found.
+ 	 */
+ 	cur = 0;
+-	skip_hole(inode, &cur);
++	ret = skip_hole(inode, &cur);
++	if (ret < 0)
++		goto out;
+ 	path = ext4_find_extent(inode, cur, NULL, 0);
+ 	if (IS_ERR(path))
+ 		goto out;
+@@ -5987,8 +5992,12 @@ int ext4_ext_replay_set_iblocks(struct inode *inode)
+ 		}
+ 		cur = max(cur + 1, le32_to_cpu(ex->ee_block) +
+ 					ext4_ext_get_actual_len(ex));
+-		skip_hole(inode, &cur);
+-
++		ret = skip_hole(inode, &cur);
++		if (ret < 0) {
++			ext4_ext_drop_refs(path);
++			kfree(path);
++			break;
++		}
+ 		path2 = ext4_find_extent(inode, cur, NULL, 0);
+ 		if (IS_ERR(path2)) {
+ 			ext4_ext_drop_refs(path);
+diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
+index e8195229c2529..782d05a3f97a0 100644
+--- a/fs/ext4/fast_commit.c
++++ b/fs/ext4/fast_commit.c
+@@ -893,6 +893,12 @@ static int ext4_fc_write_inode_data(struct inode *inode, u32 *crc)
+ 					    sizeof(lrange), (u8 *)&lrange, crc))
+ 				return -ENOSPC;
+ 		} else {
++			unsigned int max = (map.m_flags & EXT4_MAP_UNWRITTEN) ?
++				EXT_UNWRITTEN_MAX_LEN : EXT_INIT_MAX_LEN;
++
++			/* Limit the number of blocks in one extent */
++			map.m_len = min(max, map.m_len);
++
+ 			fc_ext.fc_ino = cpu_to_le32(inode->i_ino);
+ 			ex = (struct ext4_extent *)&fc_ext.fc_ex;
+ 			ex->ee_block = cpu_to_le32(map.m_lblk);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index d8de607849df3..73daf9443e5e0 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1640,6 +1640,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	int ret;
+ 	bool allocated = false;
++	bool reserved = false;
+ 
+ 	/*
+ 	 * If the cluster containing lblk is shared with a delayed,
+@@ -1656,6 +1657,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 		ret = ext4_da_reserve_space(inode);
+ 		if (ret != 0)   /* ENOSPC */
+ 			goto errout;
++		reserved = true;
+ 	} else {   /* bigalloc */
+ 		if (!ext4_es_scan_clu(inode, &ext4_es_is_delonly, lblk)) {
+ 			if (!ext4_es_scan_clu(inode,
+@@ -1668,6 +1670,7 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 					ret = ext4_da_reserve_space(inode);
+ 					if (ret != 0)   /* ENOSPC */
+ 						goto errout;
++					reserved = true;
+ 				} else {
+ 					allocated = true;
+ 				}
+@@ -1678,6 +1681,8 @@ static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
+ 	}
+ 
+ 	ret = ext4_es_insert_delayed_block(inode, lblk, allocated);
++	if (ret && reserved)
++		ext4_da_release_space(inode, 1);
+ 
+ errout:
+ 	return ret;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 970013c93d3ea..59c25a95050af 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -661,7 +661,7 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+ 		 * constraints, it may not be safe to do it right here so we
+ 		 * defer superblock flushing to a workqueue.
+ 		 */
+-		if (continue_fs)
++		if (continue_fs && journal)
+ 			schedule_work(&EXT4_SB(sb)->s_error_work);
+ 		else
+ 			ext4_commit_super(sb);
+@@ -1351,6 +1351,12 @@ static void ext4_destroy_inode(struct inode *inode)
+ 				true);
+ 		dump_stack();
+ 	}
++
++	if (EXT4_I(inode)->i_reserved_data_blocks)
++		ext4_msg(inode->i_sb, KERN_ERR,
++			 "Inode %lu (%p): i_reserved_data_blocks (%u) not cleared!",
++			 inode->i_ino, EXT4_I(inode),
++			 EXT4_I(inode)->i_reserved_data_blocks);
+ }
+ 
+ static void init_once(void *foo)
+@@ -3185,17 +3191,17 @@ static loff_t ext4_max_size(int blkbits, int has_huge_files)
+  */
+ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
+ {
+-	loff_t res = EXT4_NDIR_BLOCKS;
++	unsigned long long upper_limit, res = EXT4_NDIR_BLOCKS;
+ 	int meta_blocks;
+-	loff_t upper_limit;
+-	/* This is calculated to be the largest file size for a dense, block
++
++	/*
++	 * This is calculated to be the largest file size for a dense, block
+ 	 * mapped file such that the file's total number of 512-byte sectors,
+ 	 * including data and all indirect blocks, does not exceed (2^48 - 1).
+ 	 *
+ 	 * __u32 i_blocks_lo and _u16 i_blocks_high represent the total
+ 	 * number of 512-byte sectors of the file.
+ 	 */
+-
+ 	if (!has_huge_files) {
+ 		/*
+ 		 * !has_huge_files or implies that the inode i_block field
+@@ -3238,7 +3244,7 @@ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
+ 	if (res > MAX_LFS_FILESIZE)
+ 		res = MAX_LFS_FILESIZE;
+ 
+-	return res;
++	return (loff_t)res;
+ }
+ 
+ static ext4_fsblk_t descriptor_loc(struct super_block *sb,
+@@ -5183,12 +5189,15 @@ failed_mount_wq:
+ 	sbi->s_ea_block_cache = NULL;
+ 
+ 	if (sbi->s_journal) {
++		/* flush s_error_work before journal destroy. */
++		flush_work(&sbi->s_error_work);
+ 		jbd2_journal_destroy(sbi->s_journal);
+ 		sbi->s_journal = NULL;
+ 	}
+ failed_mount3a:
+ 	ext4_es_unregister_shrinker(sbi);
+ failed_mount3:
++	/* flush s_error_work before sbi destroy */
+ 	flush_work(&sbi->s_error_work);
+ 	del_timer_sync(&sbi->s_err_report);
+ 	ext4_stop_mmpd(sbi);
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index 77e159a0346b1..60a4372aa4d75 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -177,7 +177,7 @@ static int build_merkle_tree(struct file *filp,
+ 	 * (level 0) and ascending to the root node (level 'num_levels - 1').
+ 	 * Then at the end (level 'num_levels'), calculate the root hash.
+ 	 */
+-	blocks = (inode->i_size + params->block_size - 1) >>
++	blocks = ((u64)inode->i_size + params->block_size - 1) >>
+ 		 params->log_blocksize;
+ 	for (level = 0; level <= params->num_levels; level++) {
+ 		err = build_merkle_tree_level(filp, level, blocks, params,
+diff --git a/fs/verity/open.c b/fs/verity/open.c
+index 60ff8af7219fe..92df87f5fa388 100644
+--- a/fs/verity/open.c
++++ b/fs/verity/open.c
+@@ -89,7 +89,7 @@ int fsverity_init_merkle_tree_params(struct merkle_tree_params *params,
+ 	 */
+ 
+ 	/* Compute number of levels and the number of blocks in each level */
+-	blocks = (inode->i_size + params->block_size - 1) >> log_blocksize;
++	blocks = ((u64)inode->i_size + params->block_size - 1) >> log_blocksize;
+ 	pr_debug("Data is %lld bytes (%llu blocks)\n", inode->i_size, blocks);
+ 	while (blocks > 1) {
+ 		if (params->num_levels >= FS_VERITY_MAX_LEVELS) {
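
Both fs/verity hunks fix the same latent issue: with i_size close to S64_MAX, "i_size + params->block_size - 1" overflows the signed loff_t, which is undefined behaviour in C, whereas doing the sum in u64 is well defined. A compilable sketch of the difference:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t i_size = INT64_MAX - 100;  /* pathological but representable */
	unsigned int block_size = 4096;    /* log_blocksize = 12 */

	/* (i_size + block_size - 1) in int64_t would be signed overflow
	 * (UB); in uint64_t the addition is defined, and the shift then
	 * yields the intended ceiling division.
	 */
	uint64_t blocks = ((uint64_t)i_size + block_size - 1) >> 12;

	printf("blocks=%llu\n", (unsigned long long)blocks);
	return 0;
}
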
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index e8e2b0393ca93..11da5671d4f09 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -553,6 +553,8 @@ struct btf_func_model {
+  * programs only. Should not be used with normal calls and indirect calls.
+  */
+ #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
++/* Return the return value of fentry prog. Only used by bpf_struct_ops. */
++#define BPF_TRAMP_F_RET_FENTRY_RET	BIT(4)
+ 
+ /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
+  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 59828516ebaf1..9f4ad719bfe3f 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -22,10 +22,15 @@ struct device;
+  * LINKS_ADDED:	The fwnode has already be parsed to add fwnode links.
+  * NOT_DEVICE:	The fwnode will never be populated as a struct device.
+  * INITIALIZED: The hardware corresponding to fwnode has been initialized.
++ * NEEDS_CHILD_BOUND_ON_ADD: For this fwnode/device to probe successfully, its
++ *			     driver needs its child devices to be bound with
++ *			     their respective drivers as soon as they are
++ *			     added.
+  */
+-#define FWNODE_FLAG_LINKS_ADDED		BIT(0)
+-#define FWNODE_FLAG_NOT_DEVICE		BIT(1)
+-#define FWNODE_FLAG_INITIALIZED		BIT(2)
++#define FWNODE_FLAG_LINKS_ADDED			BIT(0)
++#define FWNODE_FLAG_NOT_DEVICE			BIT(1)
++#define FWNODE_FLAG_INITIALIZED			BIT(2)
++#define FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD	BIT(3)
+ 
+ struct fwnode_handle {
+ 	struct fwnode_handle *secondary;
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 3ab2563b1a230..7fd7f60936129 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -597,5 +597,5 @@ int ip_valid_fib_dump_req(struct net *net, const struct nlmsghdr *nlh,
+ int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh,
+ 		     u8 rt_family, unsigned char *flags, bool skip_oif);
+ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh,
+-		    int nh_weight, u8 rt_family);
++		    int nh_weight, u8 rt_family, u32 nh_tclassid);
+ #endif  /* _NET_FIB_H */
+diff --git a/include/net/nexthop.h b/include/net/nexthop.h
+index 10e1777877e6a..28085b995ddcf 100644
+--- a/include/net/nexthop.h
++++ b/include/net/nexthop.h
+@@ -325,7 +325,7 @@ int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh,
+ 		struct fib_nh_common *nhc = &nhi->fib_nhc;
+ 		int weight = nhg->nh_entries[i].weight;
+ 
+-		if (fib_add_nexthop(skb, nhc, weight, rt_family) < 0)
++		if (fib_add_nexthop(skb, nhc, weight, rt_family, 0) < 0)
+ 			return -EMSGSIZE;
+ 	}
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f23cb259b0e24..d28b9bb5ef5a0 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -487,8 +487,10 @@ struct sock {
+ 	u8			sk_prefer_busy_poll;
+ 	u16			sk_busy_poll_budget;
+ #endif
++	spinlock_t		sk_peer_lock;
+ 	struct pid		*sk_peer_pid;
+ 	const struct cred	*sk_peer_cred;
++
+ 	long			sk_rcvtimeo;
+ 	ktime_t			sk_stamp;
+ #if BITS_PER_LONG==32
+diff --git a/include/sound/rawmidi.h b/include/sound/rawmidi.h
+index 989e1517332d6..7a08ed2acd609 100644
+--- a/include/sound/rawmidi.h
++++ b/include/sound/rawmidi.h
+@@ -98,6 +98,7 @@ struct snd_rawmidi_file {
+ 	struct snd_rawmidi *rmidi;
+ 	struct snd_rawmidi_substream *input;
+ 	struct snd_rawmidi_substream *output;
++	unsigned int user_pversion;	/* supported protocol version */
+ };
+ 
+ struct snd_rawmidi_str {
+diff --git a/include/uapi/sound/asound.h b/include/uapi/sound/asound.h
+index d17c061950df6..9c5121e6ead45 100644
+--- a/include/uapi/sound/asound.h
++++ b/include/uapi/sound/asound.h
+@@ -783,6 +783,7 @@ struct snd_rawmidi_status {
+ 
+ #define SNDRV_RAWMIDI_IOCTL_PVERSION	_IOR('W', 0x00, int)
+ #define SNDRV_RAWMIDI_IOCTL_INFO	_IOR('W', 0x01, struct snd_rawmidi_info)
++#define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int)
+ #define SNDRV_RAWMIDI_IOCTL_PARAMS	_IOWR('W', 0x10, struct snd_rawmidi_params)
+ #define SNDRV_RAWMIDI_IOCTL_STATUS	_IOWR('W', 0x20, struct snd_rawmidi_status)
+ #define SNDRV_RAWMIDI_IOCTL_DROP	_IOW('W', 0x30, int)
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index 70f6fd4fa3056..2ce17447fb769 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -367,6 +367,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ 		const struct btf_type *mtype, *ptype;
+ 		struct bpf_prog *prog;
+ 		u32 moff;
++		u32 flags;
+ 
+ 		moff = btf_member_bit_offset(t, member) / 8;
+ 		ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
+@@ -430,10 +431,12 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ 
+ 		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+ 		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
++		flags = st_ops->func_models[i].ret_size > 0 ?
++			BPF_TRAMP_F_RET_FENTRY_RET : 0;
+ 		err = arch_prepare_bpf_trampoline(NULL, image,
+ 						  st_map->image + PAGE_SIZE,
+-						  &st_ops->func_models[i], 0,
+-						  tprogs, NULL);
++						  &st_ops->func_models[i],
++						  flags, tprogs, NULL);
+ 		if (err < 0)
+ 			goto reset_unlock;
+ 
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 0a28a8095d3e9..c019611fbc8f4 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -827,7 +827,7 @@ int bpf_jit_charge_modmem(u32 pages)
+ {
+ 	if (atomic_long_add_return(pages, &bpf_jit_current) >
+ 	    (bpf_jit_limit >> PAGE_SHIFT)) {
+-		if (!capable(CAP_SYS_ADMIN)) {
++		if (!bpf_capable()) {
+ 			atomic_long_sub(pages, &bpf_jit_current);
+ 			return -EPERM;
+ 		}
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 57124614363df..e7af18857371e 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -537,9 +537,17 @@ static struct attribute *sugov_attrs[] = {
+ };
+ ATTRIBUTE_GROUPS(sugov);
+ 
++static void sugov_tunables_free(struct kobject *kobj)
++{
++	struct gov_attr_set *attr_set = container_of(kobj, struct gov_attr_set, kobj);
++
++	kfree(to_sugov_tunables(attr_set));
++}
++
+ static struct kobj_type sugov_tunables_ktype = {
+ 	.default_groups = sugov_groups,
+ 	.sysfs_ops = &governor_sysfs_ops,
++	.release = &sugov_tunables_free,
+ };
+ 
+ /********************** cpufreq governor interface *********************/
+@@ -639,12 +647,10 @@ static struct sugov_tunables *sugov_tunables_alloc(struct sugov_policy *sg_polic
+ 	return tunables;
+ }
+ 
+-static void sugov_tunables_free(struct sugov_tunables *tunables)
++static void sugov_clear_global_tunables(void)
+ {
+ 	if (!have_governor_per_policy())
+ 		global_tunables = NULL;
+-
+-	kfree(tunables);
+ }
+ 
+ static int sugov_init(struct cpufreq_policy *policy)
+@@ -707,7 +713,7 @@ out:
+ fail:
+ 	kobject_put(&tunables->attr_set.kobj);
+ 	policy->governor_data = NULL;
+-	sugov_tunables_free(tunables);
++	sugov_clear_global_tunables();
+ 
+ stop_kthread:
+ 	sugov_kthread_stop(sg_policy);
+@@ -734,7 +740,7 @@ static void sugov_exit(struct cpufreq_policy *policy)
+ 	count = gov_attr_set_put(&tunables->attr_set, &sg_policy->tunables_hook);
+ 	policy->governor_data = NULL;
+ 	if (!count)
+-		sugov_tunables_free(tunables);
++		sugov_clear_global_tunables();
+ 
+ 	mutex_unlock(&global_tunables_lock);
+ 
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 7e08e3d947c20..2c879cd02a5f7 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -173,16 +173,22 @@ static ssize_t sched_scaling_write(struct file *filp, const char __user *ubuf,
+ 				   size_t cnt, loff_t *ppos)
+ {
+ 	char buf[16];
++	unsigned int scaling;
+ 
+ 	if (cnt > 15)
+ 		cnt = 15;
+ 
+ 	if (copy_from_user(&buf, ubuf, cnt))
+ 		return -EFAULT;
++	buf[cnt] = '\0';
+ 
+-	if (kstrtouint(buf, 10, &sysctl_sched_tunable_scaling))
++	if (kstrtouint(buf, 10, &scaling))
+ 		return -EINVAL;
+ 
++	if (scaling >= SCHED_TUNABLESCALING_END)
++		return -EINVAL;
++
++	sysctl_sched_tunable_scaling = scaling;
+ 	if (sched_update_scaling())
+ 		return -EINVAL;
+ 
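
The sched_scaling_write fix addresses two problems at once: the buffer copied from userspace was never NUL-terminated before kstrtouint parsed it, and the sysctl was updated before the value had been range-checked. A userspace analogue of the corrected flow; SCALING_END stands in for SCHED_TUNABLESCALING_END:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SCALING_END 3
static unsigned int sysctl_scaling;

static int scaling_write(const char *ubuf, size_t cnt)
{
	char buf[16];
	char *end;
	unsigned long v;

	if (cnt > 15)
		cnt = 15;
	memcpy(buf, ubuf, cnt);
	buf[cnt] = '\0';                 /* fix 1: terminate before parsing */

	v = strtoul(buf, &end, 10);
	if (end == buf || v >= SCALING_END)
		return -1;               /* fix 2: validate before storing */
	sysctl_scaling = (unsigned int)v;
	return 0;
}

int main(void)
{
	printf("%d\n", scaling_write("2", 1));  /* accepted */
	printf("%d\n", scaling_write("99", 2)); /* rejected */
	return 0;
}
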
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 30a6984a58f71..423ec671a3063 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4898,8 +4898,12 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+ 	/* update hierarchical throttle state */
+ 	walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
+ 
+-	if (!cfs_rq->load.weight)
++	/* Nothing to run but something to decay (on_list)? Complete the branch */
++	if (!cfs_rq->load.weight) {
++		if (cfs_rq->on_list)
++			goto unthrottle_throttle;
+ 		return;
++	}
+ 
+ 	task_delta = cfs_rq->h_nr_running;
+ 	idle_task_delta = cfs_rq->idle_h_nr_running;
+diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
+index 1e2d10f860117..cdc842d090db3 100644
+--- a/lib/Kconfig.kasan
++++ b/lib/Kconfig.kasan
+@@ -66,6 +66,7 @@ choice
+ config KASAN_GENERIC
+ 	bool "Generic mode"
+ 	depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC
++	depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
+ 	select SLUB_DEBUG if SLUB
+ 	select CONSTRUCTORS
+ 	help
+@@ -86,6 +87,7 @@ config KASAN_GENERIC
+ config KASAN_SW_TAGS
+ 	bool "Software tag-based mode"
+ 	depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS
++	depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
+ 	select SLUB_DEBUG if SLUB
+ 	select CONSTRUCTORS
+ 	help
+diff --git a/mm/util.c b/mm/util.c
+index c18202b3e659d..8bd4a20262a91 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -593,6 +593,10 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
+ 	if (ret || size <= PAGE_SIZE)
+ 		return ret;
+ 
++	/* Don't even allow crazy sizes */
++	if (WARN_ON_ONCE(size > INT_MAX))
++		return NULL;
++
+ 	return __vmalloc_node(size, 1, flags, node,
+ 			__builtin_return_address(0));
+ }
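
The kvmalloc_node guard (and the matching INT_MAX cap in ip_set_hash_gen.h further down) targets callers whose size computation went negative and wrapped to a huge size_t: a WARN plus NULL is far cheaper than attempting a multi-gigabyte vmalloc. A userspace sketch of the failure mode being caught; checked_alloc is invented for the example:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

static void *checked_alloc(size_t size)
{
	if (size > INT_MAX) {            /* "don't even allow crazy sizes" */
		fprintf(stderr, "refusing size %zu\n", size);
		return NULL;
	}
	return malloc(size);
}

int main(void)
{
	int n = -1;                        /* buggy length computation */
	void *p = checked_alloc((size_t)n); /* wraps to 2^64 - 1 */

	printf("p=%p\n", p);               /* NULL, instead of an OOM storm */
	free(p);
	return 0;
}
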
+diff --git a/net/core/sock.c b/net/core/sock.c
+index a3eea6e0b30a7..4a08ae6de578c 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1366,6 +1366,16 @@ set_sndbuf:
+ }
+ EXPORT_SYMBOL(sock_setsockopt);
+ 
++static const struct cred *sk_get_peer_cred(struct sock *sk)
++{
++	const struct cred *cred;
++
++	spin_lock(&sk->sk_peer_lock);
++	cred = get_cred(sk->sk_peer_cred);
++	spin_unlock(&sk->sk_peer_lock);
++
++	return cred;
++}
+ 
+ static void cred_to_ucred(struct pid *pid, const struct cred *cred,
+ 			  struct ucred *ucred)
+@@ -1542,7 +1552,11 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		struct ucred peercred;
+ 		if (len > sizeof(peercred))
+ 			len = sizeof(peercred);
++
++		spin_lock(&sk->sk_peer_lock);
+ 		cred_to_ucred(sk->sk_peer_pid, sk->sk_peer_cred, &peercred);
++		spin_unlock(&sk->sk_peer_lock);
++
+ 		if (copy_to_user(optval, &peercred, len))
+ 			return -EFAULT;
+ 		goto lenout;
+@@ -1550,20 +1564,23 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 
+ 	case SO_PEERGROUPS:
+ 	{
++		const struct cred *cred;
+ 		int ret, n;
+ 
+-		if (!sk->sk_peer_cred)
++		cred = sk_get_peer_cred(sk);
++		if (!cred)
+ 			return -ENODATA;
+ 
+-		n = sk->sk_peer_cred->group_info->ngroups;
++		n = cred->group_info->ngroups;
+ 		if (len < n * sizeof(gid_t)) {
+ 			len = n * sizeof(gid_t);
++			put_cred(cred);
+ 			return put_user(len, optlen) ? -EFAULT : -ERANGE;
+ 		}
+ 		len = n * sizeof(gid_t);
+ 
+-		ret = groups_to_user((gid_t __user *)optval,
+-				     sk->sk_peer_cred->group_info);
++		ret = groups_to_user((gid_t __user *)optval, cred->group_info);
++		put_cred(cred);
+ 		if (ret)
+ 			return ret;
+ 		goto lenout;
+@@ -1921,9 +1938,10 @@ static void __sk_destruct(struct rcu_head *head)
+ 		sk->sk_frag.page = NULL;
+ 	}
+ 
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	/* We do not need to acquire sk->sk_peer_lock, we are the last user. */
++	put_cred(sk->sk_peer_cred);
+ 	put_pid(sk->sk_peer_pid);
++
+ 	if (likely(sk->sk_net_refcnt))
+ 		put_net(sock_net(sk));
+ 	sk_prot_free(sk->sk_prot_creator, sk);
+@@ -3124,6 +3142,8 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ 
+ 	sk->sk_peer_pid 	=	NULL;
+ 	sk->sk_peer_cred	=	NULL;
++	spin_lock_init(&sk->sk_peer_lock);
++
+ 	sk->sk_write_pending	=	0;
+ 	sk->sk_rcvlowat		=	1;
+ 	sk->sk_rcvtimeo		=	MAX_SCHEDULE_TIMEOUT;
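
The sk_peer_lock series closes a race where SO_PEERCRED/SO_PEERGROUPS readers could observe sk_peer_cred being replaced (and freed) under them; the fix is the standard take-the-lock, grab-your-own-reference, drop-the-lock pattern that sk_get_peer_cred() implements. A pthread analogue of that pattern, with types and names invented for the illustration:

#include <pthread.h>
#include <stdio.h>

struct cred { int refs; int uid; };

static struct cred *peer_cred;     /* may be swapped by another thread */
static pthread_mutex_t peer_lock = PTHREAD_MUTEX_INITIALIZER;

static struct cred *get_peer_cred(void)
{
	struct cred *c;

	pthread_mutex_lock(&peer_lock);
	c = peer_cred;
	if (c)
		c->refs++;         /* our reference stays valid after unlock */
	pthread_mutex_unlock(&peer_lock);
	return c;
}

int main(void)
{
	struct cred cred = { .refs = 1, .uid = 1000 };
	struct cred *c;

	peer_cred = &cred;
	c = get_peer_cred();
	if (c)
		printf("peer uid=%d (refs=%d)\n", c->uid, c->refs);
	return 0;
}
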
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 4c0c33e4710da..27fdd86b9cee7 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1663,7 +1663,7 @@ EXPORT_SYMBOL_GPL(fib_nexthop_info);
+ 
+ #if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6)
+ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
+-		    int nh_weight, u8 rt_family)
++		    int nh_weight, u8 rt_family, u32 nh_tclassid)
+ {
+ 	const struct net_device *dev = nhc->nhc_dev;
+ 	struct rtnexthop *rtnh;
+@@ -1681,6 +1681,9 @@ int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc,
+ 
+ 	rtnh->rtnh_flags = flags;
+ 
++	if (nh_tclassid && nla_put_u32(skb, RTA_FLOW, nh_tclassid))
++		goto nla_put_failure;
++
+ 	/* length of rtnetlink header + attributes */
+ 	rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *)rtnh;
+ 
+@@ -1708,14 +1711,13 @@ static int fib_add_multipath(struct sk_buff *skb, struct fib_info *fi)
+ 	}
+ 
+ 	for_nexthops(fi) {
+-		if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
+-				    AF_INET) < 0)
+-			goto nla_put_failure;
++		u32 nh_tclassid = 0;
+ #ifdef CONFIG_IP_ROUTE_CLASSID
+-		if (nh->nh_tclassid &&
+-		    nla_put_u32(skb, RTA_FLOW, nh->nh_tclassid))
+-			goto nla_put_failure;
++		nh_tclassid = nh->nh_tclassid;
+ #endif
++		if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight,
++				    AF_INET, nh_tclassid) < 0)
++			goto nla_put_failure;
+ 	} endfor_nexthops(fi);
+ 
+ mp_end:
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 1a742b710e543..915ea635b2d5a 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1053,7 +1053,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	__be16 dport;
+ 	u8  tos;
+ 	int err, is_udplite = IS_UDPLITE(sk);
+-	int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
++	int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+ 	struct sk_buff *skb;
+ 	struct ip_options_data opt_copy;
+@@ -1361,7 +1361,7 @@ int udp_sendpage(struct sock *sk, struct page *page, int offset,
+ 	}
+ 
+ 	up->len += size;
+-	if (!(up->corkflag || (flags&MSG_MORE)))
++	if (!(READ_ONCE(up->corkflag) || (flags&MSG_MORE)))
+ 		ret = udp_push_pending_frames(sk);
+ 	if (!ret)
+ 		ret = size;
+@@ -2662,9 +2662,9 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 	switch (optname) {
+ 	case UDP_CORK:
+ 		if (val != 0) {
+-			up->corkflag = 1;
++			WRITE_ONCE(up->corkflag, 1);
+ 		} else {
+-			up->corkflag = 0;
++			WRITE_ONCE(up->corkflag, 0);
+ 			lock_sock(sk);
+ 			push_pending_frames(sk);
+ 			release_sock(sk);
+@@ -2787,7 +2787,7 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case UDP_CORK:
+-		val = up->corkflag;
++		val = READ_ONCE(up->corkflag);
+ 		break;
+ 
+ 	case UDP_ENCAP:
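
up->corkflag is written by setsockopt without the socket lock and read locklessly on the sendmsg fast path, so the accesses are annotated with READ_ONCE/WRITE_ONCE: this stops the compiler from tearing, refetching, or caching the value, and documents the intentional data race for KCSAN. A minimal model of what the annotation does; the kernel's macros add more checking, this is only the core idea:

#include <stdio.h>

/* Force a single, untorn load/store through a volatile access. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int corkflag;

int main(void)
{
	WRITE_ONCE(corkflag, 1);            /* setsockopt side */
	int corkreq = READ_ONCE(corkflag);  /* lockless sendmsg side */

	printf("corkreq=%d\n", corkreq);
	return 0;
}
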
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 6033403021019..0aeff2ce17b9f 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5700,14 +5700,15 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 
+ 		if (fib_add_nexthop(skb, &rt->fib6_nh->nh_common,
+-				    rt->fib6_nh->fib_nh_weight, AF_INET6) < 0)
++				    rt->fib6_nh->fib_nh_weight, AF_INET6,
++				    0) < 0)
+ 			goto nla_put_failure;
+ 
+ 		list_for_each_entry_safe(sibling, next_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings) {
+ 			if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
+ 					    sibling->fib6_nh->fib_nh_weight,
+-					    AF_INET6) < 0)
++					    AF_INET6, 0) < 0)
+ 				goto nla_put_failure;
+ 		}
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index c5e15e94bb004..80ae024d13c8c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1303,7 +1303,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	int addr_len = msg->msg_namelen;
+ 	bool connected = false;
+ 	int ulen = len;
+-	int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
++	int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ 	int err;
+ 	int is_udplite = IS_UDPLITE(sk);
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+diff --git a/net/mac80211/mesh_ps.c b/net/mac80211/mesh_ps.c
+index 204830a55240b..3fbd0b9ff9135 100644
+--- a/net/mac80211/mesh_ps.c
++++ b/net/mac80211/mesh_ps.c
+@@ -2,6 +2,7 @@
+ /*
+  * Copyright 2012-2013, Marco Porsch <marco.porsch@s2005.tu-chemnitz.de>
+  * Copyright 2012-2013, cozybit Inc.
++ * Copyright (C) 2021 Intel Corporation
+  */
+ 
+ #include "mesh.h"
+@@ -588,7 +589,7 @@ void ieee80211_mps_frame_release(struct sta_info *sta,
+ 
+ 	/* only transmit to PS STA with announced, non-zero awake window */
+ 	if (test_sta_flag(sta, WLAN_STA_PS_STA) &&
+-	    (!elems->awake_window || !le16_to_cpu(*elems->awake_window)))
++	    (!elems->awake_window || !get_unaligned_le16(elems->awake_window)))
+ 		return;
+ 
+ 	if (!test_sta_flag(sta, WLAN_STA_MPSP_OWNER))
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index e5935e3d7a078..8c6416129d5be 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -392,10 +392,6 @@ static bool rate_control_send_low(struct ieee80211_sta *pubsta,
+ 	int mcast_rate;
+ 	bool use_basicrate = false;
+ 
+-	if (ieee80211_is_tx_data(txrc->skb) &&
+-	    info->flags & IEEE80211_TX_CTL_NO_ACK)
+-		return false;
+-
+ 	if (!pubsta || rc_no_data_or_no_ack_use_min(txrc)) {
+ 		__rate_control_send_low(txrc->hw, sband, pubsta, info,
+ 					txrc->rate_idx_mask);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index fa09a369214db..751e601c46235 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -2209,7 +2209,11 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
+ 			}
+ 
+ 			vht_mcs = iterator.this_arg[4] >> 4;
++			if (vht_mcs > 11)
++				vht_mcs = 0;
+ 			vht_nss = iterator.this_arg[4] & 0xF;
++			if (!vht_nss || vht_nss > 8)
++				vht_nss = 1;
+ 			break;
+ 
+ 		/*
+@@ -3380,6 +3384,14 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+ 		goto out;
+ 
++	/* If n == 2, the "while (*frag_tail)" loop above didn't execute
++	 * and frag_tail should be &skb_shinfo(head)->frag_list.
++	 * However, ieee80211_amsdu_prepare_head() can reallocate it.
++	 * Reload frag_tail to have it pointing to the correct place.
++	 */
++	if (n == 2)
++		frag_tail = &skb_shinfo(head)->frag_list;
++
+ 	/*
+ 	 * Pad out the previous subframe to a multiple of 4 by adding the
+ 	 * padding to the next one, that's being added. Note that head->len
+diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
+index bca47fad5a162..4eed23e276104 100644
+--- a/net/mac80211/wpa.c
++++ b/net/mac80211/wpa.c
+@@ -520,6 +520,9 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
+ 			return RX_DROP_UNUSABLE;
+ 	}
+ 
++	/* reload hdr - skb might have been reallocated */
++	hdr = (void *)rx->skb->data;
++
+ 	data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN - mic_len;
+ 	if (!rx->sta || data_len < 0)
+ 		return RX_DROP_UNUSABLE;
+@@ -749,6 +752,9 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
+ 			return RX_DROP_UNUSABLE;
+ 	}
+ 
++	/* reload hdr - skb might have been reallocated */
++	hdr = (void *)rx->skb->data;
++
+ 	data_len = skb->len - hdrlen - IEEE80211_GCMP_HDR_LEN - mic_len;
+ 	if (!rx->sta || data_len < 0)
+ 		return RX_DROP_UNUSABLE;
+diff --git a/net/mptcp/mptcp_diag.c b/net/mptcp/mptcp_diag.c
+index f48eb6315bbb4..292374fb07792 100644
+--- a/net/mptcp/mptcp_diag.c
++++ b/net/mptcp/mptcp_diag.c
+@@ -36,7 +36,7 @@ static int mptcp_diag_dump_one(struct netlink_callback *cb,
+ 	struct sock *sk;
+ 
+ 	net = sock_net(in_skb->sk);
+-	msk = mptcp_token_get_sock(req->id.idiag_cookie[0]);
++	msk = mptcp_token_get_sock(net, req->id.idiag_cookie[0]);
+ 	if (!msk)
+ 		goto out_nosk;
+ 
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 89251cbe9f1a7..81103b29c0af1 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1558,9 +1558,7 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	list_for_each_entry(entry, &pernet->local_addr_list, list) {
+ 		if (addresses_equal(&entry->addr, &addr.addr, true)) {
+-			ret = mptcp_nl_addr_backup(net, &entry->addr, bkup);
+-			if (ret)
+-				return ret;
++			mptcp_nl_addr_backup(net, &entry->addr, bkup);
+ 
+ 			if (bkup)
+ 				entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 6ac564d584c19..c8a49e92e66f3 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -680,7 +680,7 @@ int mptcp_token_new_connect(struct sock *sk);
+ void mptcp_token_accept(struct mptcp_subflow_request_sock *r,
+ 			struct mptcp_sock *msk);
+ bool mptcp_token_exists(u32 token);
+-struct mptcp_sock *mptcp_token_get_sock(u32 token);
++struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token);
+ struct mptcp_sock *mptcp_token_iter_next(const struct net *net, long *s_slot,
+ 					 long *s_num);
+ void mptcp_token_destroy(struct mptcp_sock *msk);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 966f777d35ce9..1f3039b829a7f 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -86,7 +86,7 @@ static struct mptcp_sock *subflow_token_join_request(struct request_sock *req)
+ 	struct mptcp_sock *msk;
+ 	int local_id;
+ 
+-	msk = mptcp_token_get_sock(subflow_req->token);
++	msk = mptcp_token_get_sock(sock_net(req_to_sk(req)), subflow_req->token);
+ 	if (!msk) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINNOTOKEN);
+ 		return NULL;
+diff --git a/net/mptcp/syncookies.c b/net/mptcp/syncookies.c
+index 37127781aee98..7f22526346a7e 100644
+--- a/net/mptcp/syncookies.c
++++ b/net/mptcp/syncookies.c
+@@ -108,18 +108,12 @@ bool mptcp_token_join_cookie_init_state(struct mptcp_subflow_request_sock *subfl
+ 
+ 	e->valid = 0;
+ 
+-	msk = mptcp_token_get_sock(e->token);
++	msk = mptcp_token_get_sock(net, e->token);
+ 	if (!msk) {
+ 		spin_unlock_bh(&join_entry_locks[i]);
+ 		return false;
+ 	}
+ 
+-	/* If this fails, the token got re-used in the mean time by another
+-	 * mptcp socket in a different netns, i.e. entry is outdated.
+-	 */
+-	if (!net_eq(sock_net((struct sock *)msk), net))
+-		goto err_put;
+-
+ 	subflow_req->remote_nonce = e->remote_nonce;
+ 	subflow_req->local_nonce = e->local_nonce;
+ 	subflow_req->backup = e->backup;
+@@ -128,11 +122,6 @@ bool mptcp_token_join_cookie_init_state(struct mptcp_subflow_request_sock *subfl
+ 	subflow_req->msk = msk;
+ 	spin_unlock_bh(&join_entry_locks[i]);
+ 	return true;
+-
+-err_put:
+-	spin_unlock_bh(&join_entry_locks[i]);
+-	sock_put((struct sock *)msk);
+-	return false;
+ }
+ 
+ void __init mptcp_join_cookie_init(void)
+diff --git a/net/mptcp/token.c b/net/mptcp/token.c
+index a98e554b034fe..e581b341c5beb 100644
+--- a/net/mptcp/token.c
++++ b/net/mptcp/token.c
+@@ -231,6 +231,7 @@ found:
+ 
+ /**
+  * mptcp_token_get_sock - retrieve mptcp connection sock using its token
++ * @net: restrict to this namespace
+  * @token: token of the mptcp connection to retrieve
+  *
+  * This function returns the mptcp connection structure with the given token.
+@@ -238,7 +239,7 @@ found:
+  *
+  * returns NULL if no connection with the given token value exists.
+  */
+-struct mptcp_sock *mptcp_token_get_sock(u32 token)
++struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token)
+ {
+ 	struct hlist_nulls_node *pos;
+ 	struct token_bucket *bucket;
+@@ -251,11 +252,15 @@ struct mptcp_sock *mptcp_token_get_sock(u32 token)
+ again:
+ 	sk_nulls_for_each_rcu(sk, pos, &bucket->msk_chain) {
+ 		msk = mptcp_sk(sk);
+-		if (READ_ONCE(msk->token) != token)
++		if (READ_ONCE(msk->token) != token ||
++		    !net_eq(sock_net(sk), net))
+ 			continue;
++
+ 		if (!refcount_inc_not_zero(&sk->sk_refcnt))
+ 			goto not_found;
+-		if (READ_ONCE(msk->token) != token) {
++
++		if (READ_ONCE(msk->token) != token ||
++		    !net_eq(sock_net(sk), net)) {
+ 			sock_put(sk);
+ 			goto again;
+ 		}
+diff --git a/net/mptcp/token_test.c b/net/mptcp/token_test.c
+index e1bd6f0a0676f..5d984bec1cd86 100644
+--- a/net/mptcp/token_test.c
++++ b/net/mptcp/token_test.c
+@@ -11,6 +11,7 @@ static struct mptcp_subflow_request_sock *build_req_sock(struct kunit *test)
+ 			    GFP_USER);
+ 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, req);
+ 	mptcp_token_init_request((struct request_sock *)req);
++	sock_net_set((struct sock *)req, &init_net);
+ 	return req;
+ }
+ 
+@@ -22,7 +23,7 @@ static void mptcp_token_test_req_basic(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, 0,
+ 			mptcp_token_new_request((struct request_sock *)req));
+ 	KUNIT_EXPECT_NE(test, 0, (int)req->token);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(req->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, req->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy_request((struct request_sock *)req);
+@@ -55,6 +56,7 @@ static struct mptcp_sock *build_msk(struct kunit *test)
+ 	msk = kunit_kzalloc(test, sizeof(struct mptcp_sock), GFP_USER);
+ 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, msk);
+ 	refcount_set(&((struct sock *)msk)->sk_refcnt, 1);
++	sock_net_set((struct sock *)msk, &init_net);
+ 	return msk;
+ }
+ 
+@@ -74,11 +76,11 @@ static void mptcp_token_test_msk_basic(struct kunit *test)
+ 			mptcp_token_new_connect((struct sock *)icsk));
+ 	KUNIT_EXPECT_NE(test, 0, (int)ctx->token);
+ 	KUNIT_EXPECT_EQ(test, ctx->token, msk->token);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(ctx->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, ctx->token));
+ 	KUNIT_EXPECT_EQ(test, 2, (int)refcount_read(&sk->sk_refcnt));
+ 
+ 	mptcp_token_destroy(msk);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(ctx->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, ctx->token));
+ }
+ 
+ static void mptcp_token_test_accept(struct kunit *test)
+@@ -90,11 +92,11 @@ static void mptcp_token_test_accept(struct kunit *test)
+ 			mptcp_token_new_request((struct request_sock *)req));
+ 	msk->token = req->token;
+ 	mptcp_token_accept(req, msk);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* this is now a no-op */
+ 	mptcp_token_destroy_request((struct request_sock *)req);
+-	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy(msk);
+@@ -116,7 +118,7 @@ static void mptcp_token_test_destroyed(struct kunit *test)
+ 
+ 	/* simulate race on removal */
+ 	refcount_set(&sk->sk_refcnt, 0);
+-	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(msk->token));
++	KUNIT_EXPECT_PTR_EQ(test, null_msk, mptcp_token_get_sock(&init_net, msk->token));
+ 
+ 	/* cleanup */
+ 	mptcp_token_destroy(msk);
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 6186358eac7c5..6e391308431da 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -130,11 +130,11 @@ htable_size(u8 hbits)
+ {
+ 	size_t hsize;
+ 
+-	/* We must fit both into u32 in jhash and size_t */
++	/* We must fit both into u32 in jhash and INT_MAX in kvmalloc_node() */
+ 	if (hbits > 31)
+ 		return 0;
+ 	hsize = jhash_size(hbits);
+-	if ((((size_t)-1) - sizeof(struct htable)) / sizeof(struct hbucket *)
++	if ((INT_MAX - sizeof(struct htable)) / sizeof(struct hbucket *)
+ 	    < hsize)
+ 		return 0;
+ 
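The comment change is the point of this hunk: kvmalloc_node() rejects allocations above INT_MAX, so the guard must cap against INT_MAX rather than SIZE_MAX. A self-contained sketch of the corrected bound (struct layouts are placeholders, not the ipset definitions):

    #include <limits.h>
    #include <stddef.h>

    struct htable  { unsigned long pad[4]; };	/* placeholder layout */
    struct hbucket { void *p; };

    static size_t htable_size_checked(unsigned int hbits)
    {
    	size_t hsize;

    	if (hbits > 31)		/* jhash folds into a u32 */
    		return 0;
    	hsize = (size_t)1 << hbits;
    	/* header plus hsize bucket pointers must stay within INT_MAX,
    	 * the ceiling enforced by the kernel's kvmalloc_node() */
    	if ((INT_MAX - sizeof(struct htable)) / sizeof(struct hbucket *) < hsize)
    		return 0;
    	return sizeof(struct htable) + hsize * sizeof(struct hbucket *);
    }
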
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index c100c6b112c81..2c467c422dc63 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1468,6 +1468,10 @@ int __init ip_vs_conn_init(void)
+ 	int idx;
+ 
+ 	/* Compute size and mask */
++	if (ip_vs_conn_tab_bits < 8 || ip_vs_conn_tab_bits > 20) {
++		pr_info("conn_tab_bits not in [8, 20]. Using default value\n");
++		ip_vs_conn_tab_bits = CONFIG_IP_VS_TAB_BITS;
++	}
+ 	ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
+ 	ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1;
+ 
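The added check validates the conn_tab_bits module parameter before it is used as a shift amount; an out-of-range value previously produced a nonsense table size (and a shift of 32 or more is undefined behaviour). A small standalone illustration, with a placeholder default value:

    #include <stdio.h>

    #define TAB_BITS_DEFAULT 12	/* stands in for CONFIG_IP_VS_TAB_BITS */

    static int conn_tab_bits = 25;	/* e.g. a bogus module parameter */

    static unsigned int conn_tab_init(void)
    {
    	if (conn_tab_bits < 8 || conn_tab_bits > 20) {
    		fprintf(stderr, "conn_tab_bits not in [8, 20]; using default\n");
    		conn_tab_bits = TAB_BITS_DEFAULT;
    	}
    	return 1u << conn_tab_bits;
    }

    int main(void)
    {
    	printf("table size: %u\n", conn_tab_init());
    	return 0;
    }
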
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index d31dbccbe7bd4..4f074d7653b8a 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -75,6 +75,9 @@ static __read_mostly struct kmem_cache *nf_conntrack_cachep;
+ static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);
+ static __read_mostly bool nf_conntrack_locks_all;
+ 
++/* serialize hash resizes and nf_ct_iterate_cleanup */
++static DEFINE_MUTEX(nf_conntrack_mutex);
++
+ #define GC_SCAN_INTERVAL	(120u * HZ)
+ #define GC_SCAN_MAX_DURATION	msecs_to_jiffies(10)
+ 
+@@ -2192,28 +2195,31 @@ get_next_corpse(int (*iter)(struct nf_conn *i, void *data),
+ 	spinlock_t *lockp;
+ 
+ 	for (; *bucket < nf_conntrack_htable_size; (*bucket)++) {
++		struct hlist_nulls_head *hslot = &nf_conntrack_hash[*bucket];
++
++		if (hlist_nulls_empty(hslot))
++			continue;
++
+ 		lockp = &nf_conntrack_locks[*bucket % CONNTRACK_LOCKS];
+ 		local_bh_disable();
+ 		nf_conntrack_lock(lockp);
+-		if (*bucket < nf_conntrack_htable_size) {
+-			hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[*bucket], hnnode) {
+-				if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
+-					continue;
+-				/* All nf_conn objects are added to hash table twice, one
+-				 * for original direction tuple, once for the reply tuple.
+-				 *
+-				 * Exception: In the IPS_NAT_CLASH case, only the reply
+-				 * tuple is added (the original tuple already existed for
+-				 * a different object).
+-				 *
+-				 * We only need to call the iterator once for each
+-				 * conntrack, so we just use the 'reply' direction
+-				 * tuple while iterating.
+-				 */
+-				ct = nf_ct_tuplehash_to_ctrack(h);
+-				if (iter(ct, data))
+-					goto found;
+-			}
++		hlist_nulls_for_each_entry(h, n, hslot, hnnode) {
++			if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
++				continue;
++			/* All nf_conn objects are added to hash table twice, one
++			 * for original direction tuple, once for the reply tuple.
++			 *
++			 * Exception: In the IPS_NAT_CLASH case, only the reply
++			 * tuple is added (the original tuple already existed for
++			 * a different object).
++			 *
++			 * We only need to call the iterator once for each
++			 * conntrack, so we just use the 'reply' direction
++			 * tuple while iterating.
++			 */
++			ct = nf_ct_tuplehash_to_ctrack(h);
++			if (iter(ct, data))
++				goto found;
+ 		}
+ 		spin_unlock(lockp);
+ 		local_bh_enable();
+@@ -2231,26 +2237,20 @@ found:
+ static void nf_ct_iterate_cleanup(int (*iter)(struct nf_conn *i, void *data),
+ 				  void *data, u32 portid, int report)
+ {
+-	unsigned int bucket = 0, sequence;
++	unsigned int bucket = 0;
+ 	struct nf_conn *ct;
+ 
+ 	might_sleep();
+ 
+-	for (;;) {
+-		sequence = read_seqcount_begin(&nf_conntrack_generation);
+-
+-		while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
+-			/* Time to push up daises... */
++	mutex_lock(&nf_conntrack_mutex);
++	while ((ct = get_next_corpse(iter, data, &bucket)) != NULL) {
++		/* Time to push up daises... */
+ 
+-			nf_ct_delete(ct, portid, report);
+-			nf_ct_put(ct);
+-			cond_resched();
+-		}
+-
+-		if (!read_seqcount_retry(&nf_conntrack_generation, sequence))
+-			break;
+-		bucket = 0;
++		nf_ct_delete(ct, portid, report);
++		nf_ct_put(ct);
++		cond_resched();
+ 	}
++	mutex_unlock(&nf_conntrack_mutex);
+ }
+ 
+ struct iter_data {
+@@ -2486,8 +2486,10 @@ int nf_conntrack_hash_resize(unsigned int hashsize)
+ 	if (!hash)
+ 		return -ENOMEM;
+ 
++	mutex_lock(&nf_conntrack_mutex);
+ 	old_size = nf_conntrack_htable_size;
+ 	if (old_size == hashsize) {
++		mutex_unlock(&nf_conntrack_mutex);
+ 		kvfree(hash);
+ 		return 0;
+ 	}
+@@ -2523,6 +2525,8 @@ int nf_conntrack_hash_resize(unsigned int hashsize)
+ 	nf_conntrack_all_unlock();
+ 	local_bh_enable();
+ 
++	mutex_unlock(&nf_conntrack_mutex);
++
+ 	synchronize_net();
+ 	kvfree(old_hash);
+ 	return 0;
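
The reshuffle above replaces the seqcount retry loop (restart the whole walk from bucket 0 whenever a resize raced with it) with a single mutex that makes cleanup walks and resizes mutually exclusive, so each walk visits every bucket exactly once. The shape, reduced to a userspace sketch with pthreads (bodies elided; only the locking structure is the point):

    #include <pthread.h>

    static pthread_mutex_t table_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* walk every bucket exactly once; a resize cannot run concurrently */
    static void iterate_cleanup(void (*iter)(void *item), void *table)
    {
    	pthread_mutex_lock(&table_mutex);
    	/* ... bucket walk calling iter() on each dead entry ... */
    	(void)iter; (void)table;
    	pthread_mutex_unlock(&table_mutex);
    }

    /* swap in the new hash under the same mutex */
    static int table_resize(unsigned int new_size)
    {
    	pthread_mutex_lock(&table_mutex);
    	/* ... allocate, rehash, publish the new table ... */
    	(void)new_size;
    	pthread_mutex_unlock(&table_mutex);
    	return 0;
    }
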
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 081437dd75b7e..b9546defdc280 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4336,7 +4336,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ 	if (ops->privsize != NULL)
+ 		size = ops->privsize(nla, &desc);
+ 	alloc_size = sizeof(*set) + size + udlen;
+-	if (alloc_size < size)
++	if (alloc_size < size || alloc_size > INT_MAX)
+ 		return -ENOMEM;
+ 	set = kvzalloc(alloc_size, GFP_KERNEL);
+ 	if (!set)
+@@ -9599,7 +9599,6 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 		table->use--;
+ 		nf_tables_chain_destroy(&ctx);
+ 	}
+-	list_del(&table->list);
+ 	nf_tables_table_destroy(&ctx);
+ }
+ 
+@@ -9612,6 +9611,8 @@ static void __nft_release_tables(struct net *net)
+ 		if (nft_table_has_owner(table))
+ 			continue;
+ 
++		list_del(&table->list);
++
+ 		__nft_release_table(net, table);
+ 	}
+ }
+@@ -9619,31 +9620,38 @@ static void __nft_release_tables(struct net *net)
+ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event,
+ 			    void *ptr)
+ {
++	struct nft_table *table, *to_delete[8];
+ 	struct nftables_pernet *nft_net;
+ 	struct netlink_notify *n = ptr;
+-	struct nft_table *table, *nt;
+ 	struct net *net = n->net;
+-	bool release = false;
++	unsigned int deleted;
++	bool restart = false;
+ 
+ 	if (event != NETLINK_URELEASE || n->protocol != NETLINK_NETFILTER)
+ 		return NOTIFY_DONE;
+ 
+ 	nft_net = nft_pernet(net);
++	deleted = 0;
+ 	mutex_lock(&nft_net->commit_mutex);
++again:
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (nft_table_has_owner(table) &&
+ 		    n->portid == table->nlpid) {
+ 			__nft_release_hook(net, table);
+-			release = true;
++			list_del_rcu(&table->list);
++			to_delete[deleted++] = table;
++			if (deleted >= ARRAY_SIZE(to_delete))
++				break;
+ 		}
+ 	}
+-	if (release) {
++	if (deleted) {
++		restart = deleted >= ARRAY_SIZE(to_delete);
+ 		synchronize_rcu();
+-		list_for_each_entry_safe(table, nt, &nft_net->tables, list) {
+-			if (nft_table_has_owner(table) &&
+-			    n->portid == table->nlpid)
+-				__nft_release_table(net, table);
+-		}
++		while (deleted)
++			__nft_release_table(net, to_delete[--deleted]);
++
++		if (restart)
++			goto again;
+ 	}
+ 	mutex_unlock(&nft_net->commit_mutex);
+ 
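The notifier rework gathers doomed tables into a fixed eight-slot array, unlinks them under the commit mutex, waits one RCU grace period, frees the batch, and restarts the scan only if the array filled (meaning more matches may remain). A simplified single-threaded sketch of that collect/free/restart loop (list layout and free function are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    #define BATCH 8

    struct table {
    	struct table *next;
    	bool owned;	/* belongs to the departing netlink socket */
    };

    static void free_table(struct table *t) { (void)t; /* kfree() upstream */ }

    static void release_owned(struct table **head)
    {
    	struct table *to_delete[BATCH];
    	size_t deleted;
    	bool restart;

    	do {
    		deleted = 0;
    		for (struct table **pp = head; *pp && deleted < BATCH; ) {
    			if ((*pp)->owned) {
    				to_delete[deleted++] = *pp;
    				*pp = (*pp)->next;	/* unlink first */
    			} else {
    				pp = &(*pp)->next;
    			}
    		}
    		restart = (deleted == BATCH);
    		/* upstream: synchronize_rcu() here, before freeing */
    		while (deleted)
    			free_table(to_delete[--deleted]);
    	} while (restart);
    }
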
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 272bcdb1392df..f69cc73c58130 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -19,6 +19,7 @@
+ #include <linux/netfilter_bridge/ebtables.h>
+ #include <linux/netfilter_arp/arp_tables.h>
+ #include <net/netfilter/nf_tables.h>
++#include <net/netfilter/nf_log.h>
+ 
+ /* Used for matches where *info is larger than X byte */
+ #define NFT_MATCH_LARGE_THRESH	192
+@@ -257,8 +258,22 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	nft_compat_wait_for_destructors();
+ 
+ 	ret = xt_check_target(&par, size, proto, inv);
+-	if (ret < 0)
++	if (ret < 0) {
++		if (ret == -ENOENT) {
++			const char *modname = NULL;
++
++			if (strcmp(target->name, "LOG") == 0)
++				modname = "nf_log_syslog";
++			else if (strcmp(target->name, "NFLOG") == 0)
++				modname = "nfnetlink_log";
++
++			if (modname &&
++			    nft_request_module(ctx->net, "%s", modname) == -EAGAIN)
++				return -EAGAIN;
++		}
++
+ 		return ret;
++	}
+ 
+ 	/* The standard target cannot be used */
+ 	if (!target->target)
+diff --git a/net/netfilter/xt_LOG.c b/net/netfilter/xt_LOG.c
+index 2ff75f7637b09..f39244f9c0ed9 100644
+--- a/net/netfilter/xt_LOG.c
++++ b/net/netfilter/xt_LOG.c
+@@ -44,6 +44,7 @@ log_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ static int log_tg_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct xt_log_info *loginfo = par->targinfo;
++	int ret;
+ 
+ 	if (par->family != NFPROTO_IPV4 && par->family != NFPROTO_IPV6)
+ 		return -EINVAL;
+@@ -58,7 +59,14 @@ static int log_tg_check(const struct xt_tgchk_param *par)
+ 		return -EINVAL;
+ 	}
+ 
+-	return nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
++	ret = nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
++	if (ret != 0 && !par->nft_compat) {
++		request_module("%s", "nf_log_syslog");
++
++		ret = nf_logger_find_get(par->family, NF_LOG_TYPE_LOG);
++	}
++
++	return ret;
+ }
+ 
+ static void log_tg_destroy(const struct xt_tgdtor_param *par)
+diff --git a/net/netfilter/xt_NFLOG.c b/net/netfilter/xt_NFLOG.c
+index fb57932080598..e660c3710a109 100644
+--- a/net/netfilter/xt_NFLOG.c
++++ b/net/netfilter/xt_NFLOG.c
+@@ -42,13 +42,21 @@ nflog_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ static int nflog_tg_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct xt_nflog_info *info = par->targinfo;
++	int ret;
+ 
+ 	if (info->flags & ~XT_NFLOG_MASK)
+ 		return -EINVAL;
+ 	if (info->prefix[sizeof(info->prefix) - 1] != '\0')
+ 		return -EINVAL;
+ 
+-	return nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
++	ret = nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
++	if (ret != 0 && !par->nft_compat) {
++		request_module("%s", "nfnetlink_log");
++
++		ret = nf_logger_find_get(par->family, NF_LOG_TYPE_ULOG);
++	}
++
++	return ret;
+ }
+ 
+ static void nflog_tg_destroy(const struct xt_tgdtor_param *par)
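
Both the LOG and NFLOG hunks follow the same recipe: when the logger backend lookup fails outside nft_compat (which, per the nft_compat.c hunk, now requests the module itself), explicitly load the backend and retry once. Distilled, with stubbed-out stand-ins for the kernel helpers so the sketch compiles on its own:

    #include <stdbool.h>
    #include <stdio.h>

    /* stand-ins for the kernel helpers, illustration only */
    static int nf_logger_find_get_stub(int family, int type)
    {
    	(void)family; (void)type;
    	return -2;	/* pretend the backend is not loaded (-ENOENT) */
    }

    static void request_module_stub(const char *name)
    {
    	printf("would modprobe %s\n", name);
    }

    static int logger_check(int family, int type, bool nft_compat)
    {
    	int ret = nf_logger_find_get_stub(family, type);

    	if (ret != 0 && !nft_compat) {
    		request_module_stub("nf_log_syslog");
    		ret = nf_logger_find_get_stub(family, type);
    	}
    	return ret;
    }
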
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index d7869a984881e..d2a4e31d963d3 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2188,18 +2188,24 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,
+ 
+ 	arg->count = arg->skip;
+ 
++	rcu_read_lock();
+ 	idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) {
+ 		/* don't return filters that are being deleted */
+ 		if (!refcount_inc_not_zero(&f->refcnt))
+ 			continue;
++		rcu_read_unlock();
++
+ 		if (arg->fn(tp, f, arg) < 0) {
+ 			__fl_put(f);
+ 			arg->stop = 1;
++			rcu_read_lock();
+ 			break;
+ 		}
+ 		__fl_put(f);
+ 		arg->count++;
++		rcu_read_lock();
+ 	}
++	rcu_read_unlock();
+ 	arg->cookie = id;
+ }
+ 
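The fl_walk() change keeps rcu_read_lock() held while stepping the IDR but drops it around arg->fn(), which may sleep; the reference taken on the filter keeps it alive across the unlocked window. The same bracket pattern with an ordinary non-sleepable lock, as a compilable sketch:

    #include <pthread.h>

    static pthread_mutex_t walk_lock = PTHREAD_MUTEX_INITIALIZER;

    static void walk(int (*fn)(int item, void *arg), void *arg,
    		 const int *items, int n)
    {
    	pthread_mutex_lock(&walk_lock);
    	for (int i = 0; i < n; i++) {
    		int item = items[i];	/* real code pins a refcount here */
    		int stop;

    		pthread_mutex_unlock(&walk_lock);	/* fn() may block */
    		stop = fn(item, arg) < 0;
    		pthread_mutex_lock(&walk_lock);
    		if (stop)
    			break;
    	}
    	pthread_mutex_unlock(&walk_lock);
    }
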
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 5ef86fdb11769..1f1786021d9c8 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -702,7 +702,7 @@ static int sctp_rcv_ootb(struct sk_buff *skb)
+ 		ch = skb_header_pointer(skb, offset, sizeof(*ch), &_ch);
+ 
+ 		/* Break out if chunk length is less then minimal. */
+-		if (ntohs(ch->length) < sizeof(_ch))
++		if (!ch || ntohs(ch->length) < sizeof(_ch))
+ 			break;
+ 
+ 		ch_end = offset + SCTP_PAD4(ntohs(ch->length));
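
skb_header_pointer() returns NULL when the requested range is not present in the packet, and the one-line fix above adds the missing check before the length field is read. The contract, as a flat-buffer analogue with an overflow-safe bounds test:

    #include <stddef.h>
    #include <string.h>

    static const void *header_pointer(const unsigned char *pkt, size_t pkt_len,
    				  size_t offset, size_t len, void *copy)
    {
    	if (len > pkt_len || offset > pkt_len - len)
    		return NULL;	/* truncated packet: caller must bail out */
    	memcpy(copy, pkt + offset, len);
    	return copy;
    }
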
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 91ff09d833e8f..f96ee27d9ff22 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -600,20 +600,42 @@ static void unix_release_sock(struct sock *sk, int embrion)
+ 
+ static void init_peercred(struct sock *sk)
+ {
+-	put_pid(sk->sk_peer_pid);
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	const struct cred *old_cred;
++	struct pid *old_pid;
++
++	spin_lock(&sk->sk_peer_lock);
++	old_pid = sk->sk_peer_pid;
++	old_cred = sk->sk_peer_cred;
+ 	sk->sk_peer_pid  = get_pid(task_tgid(current));
+ 	sk->sk_peer_cred = get_current_cred();
++	spin_unlock(&sk->sk_peer_lock);
++
++	put_pid(old_pid);
++	put_cred(old_cred);
+ }
+ 
+ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ {
+-	put_pid(sk->sk_peer_pid);
+-	if (sk->sk_peer_cred)
+-		put_cred(sk->sk_peer_cred);
++	const struct cred *old_cred;
++	struct pid *old_pid;
++
++	if (sk < peersk) {
++		spin_lock(&sk->sk_peer_lock);
++		spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
++	} else {
++		spin_lock(&peersk->sk_peer_lock);
++		spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
++	}
++	old_pid = sk->sk_peer_pid;
++	old_cred = sk->sk_peer_cred;
+ 	sk->sk_peer_pid  = get_pid(peersk->sk_peer_pid);
+ 	sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
++
++	spin_unlock(&sk->sk_peer_lock);
++	spin_unlock(&peersk->sk_peer_lock);
++
++	put_pid(old_pid);
++	put_cred(old_cred);
+ }
+ 
+ static int unix_listen(struct socket *sock, int backlog)
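
copy_peercred() now takes both peers' sk_peer_lock ordered by socket address, so two racing calls on the same pair always acquire in one global order and cannot deadlock (spin_lock_nested() only informs lockdep about the nesting). The ordering idiom on its own, with pthread mutexes:

    #include <pthread.h>

    struct peer {
    	pthread_mutex_t lock;
    	/* ... peer credentials ... */
    };

    /* Acquire both locks in address order. Comparing unrelated pointers
     * is formally unspecified in ISO C, but this is the standard kernel
     * idiom for ABBA deadlock avoidance. */
    static void lock_pair(struct peer *a, struct peer *b)
    {
    	if (a < b) {
    		pthread_mutex_lock(&a->lock);
    		pthread_mutex_lock(&b->lock);
    	} else {
    		pthread_mutex_lock(&b->lock);
    		pthread_mutex_lock(&a->lock);
    	}
    }
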
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 6c0a4a67ad2e3..6f30231bdb884 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -873,12 +873,21 @@ static long snd_rawmidi_ioctl(struct file *file, unsigned int cmd, unsigned long
+ 			return -EINVAL;
+ 		}
+ 	}
++	case SNDRV_RAWMIDI_IOCTL_USER_PVERSION:
++		if (get_user(rfile->user_pversion, (unsigned int __user *)arg))
++			return -EFAULT;
++		return 0;
++
+ 	case SNDRV_RAWMIDI_IOCTL_PARAMS:
+ 	{
+ 		struct snd_rawmidi_params params;
+ 
+ 		if (copy_from_user(&params, argp, sizeof(struct snd_rawmidi_params)))
+ 			return -EFAULT;
++		if (rfile->user_pversion < SNDRV_PROTOCOL_VERSION(2, 0, 2)) {
++			params.mode = 0;
++			memset(params.reserved, 0, sizeof(params.reserved));
++		}
+ 		switch (params.stream) {
+ 		case SNDRV_RAWMIDI_STREAM_OUTPUT:
+ 			if (rfile->output == NULL)
+diff --git a/sound/firewire/motu/amdtp-motu.c b/sound/firewire/motu/amdtp-motu.c
+index 5388b85fb60e5..a18c2c033e836 100644
+--- a/sound/firewire/motu/amdtp-motu.c
++++ b/sound/firewire/motu/amdtp-motu.c
+@@ -276,10 +276,11 @@ static void __maybe_unused copy_message(u64 *frames, __be32 *buffer,
+ 
+ 	/* This is just for v2/v3 protocol. */
+ 	for (i = 0; i < data_blocks; ++i) {
+-		*frames = (be32_to_cpu(buffer[1]) << 16) |
+-			  (be32_to_cpu(buffer[2]) >> 16);
++		*frames = be32_to_cpu(buffer[1]);
++		*frames <<= 16;
++		*frames |= be32_to_cpu(buffer[2]) >> 16;
++		++frames;
+ 		buffer += data_block_quadlets;
+-		frames++;
+ 	}
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 70516527ebce3..0b9230a274b0a 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6442,6 +6442,20 @@ static void alc_fixup_thinkpad_acpi(struct hda_codec *codec,
+ 	hda_fixup_thinkpad_acpi(codec, fix, action);
+ }
+ 
++/* Fixup for Lenovo Legion 15IMHg05 speaker output on headset removal. */
++static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
++						  const struct hda_fixup *fix,
++						  int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->gen.suppress_auto_mute = 1;
++		break;
++	}
++}
++
+ /* for alc295_fixup_hp_top_speakers */
+ #include "hp_x360_helper.c"
+ 
+@@ -6659,6 +6673,10 @@ enum {
+ 	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
+ 	ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
+ 	ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST,
++	ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
++	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
++	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
++	ALC287_FIXUP_13S_GEN2_SPEAKERS
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8249,6 +8267,113 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ 	},
++	[ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		//.v.verbs = legion_15imhg05_coefs,
++		.v.verbs = (const struct hda_verb[]) {
++			 // set left speaker Legion 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 // set right speaker Legion 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			 {}
++		},
++		.chained = true,
++		.chain_id = ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
++	},
++	[ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc287_fixup_legion_15imhg05_speakers,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
++	[ALC287_FIXUP_YOGA7_14ITL_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			 // set left speaker Yoga 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x1a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 // set right speaker Yoga 7i.
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x46 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xc },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2a },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++
++			 { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			 { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			 {}
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
++	[ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x42 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8643,6 +8768,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
++	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3853, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index a961f837cd094..bda66b30e063c 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -1073,6 +1073,16 @@ static int fsl_esai_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto err_pm_get_sync;
+ 
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
++	ret = imx_pcm_dma_init(pdev, IMX_ESAI_DMABUF_SIZE);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to init imx pcm dma: %d\n", ret);
++		goto err_pm_get_sync;
++	}
++
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_esai_component,
+ 					      &fsl_esai_dai, 1);
+ 	if (ret) {
+@@ -1082,12 +1092,6 @@ static int fsl_esai_probe(struct platform_device *pdev)
+ 
+ 	INIT_WORK(&esai_priv->work, fsl_esai_hw_reset);
+ 
+-	ret = imx_pcm_dma_init(pdev, IMX_ESAI_DMABUF_SIZE);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to init imx pcm dma: %d\n", ret);
+-		goto err_pm_get_sync;
+-	}
+-
+ 	return ret;
+ 
+ err_pm_get_sync:
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 8c0c75ce9490f..9f90989ac59a6 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -737,18 +737,23 @@ static int fsl_micfil_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	regcache_cache_only(micfil->regmap, true);
+ 
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
++	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to pcm register\n");
++		return ret;
++	}
++
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_micfil_component,
+ 					      &fsl_micfil_dai, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register component %s\n",
+ 			fsl_micfil_component.name);
+-		return ret;
+ 	}
+ 
+-	ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0);
+-	if (ret)
+-		dev_err(&pdev->dev, "failed to pcm register\n");
+-
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 223fcd15bfccc..38f6362099d58 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -1152,11 +1152,10 @@ static int fsl_sai_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto err_pm_get_sync;
+ 
+-	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_component,
+-					      &sai->cpu_dai_drv, 1);
+-	if (ret)
+-		goto err_pm_get_sync;
+-
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
+ 	if (sai->soc_data->use_imx_pcm) {
+ 		ret = imx_pcm_dma_init(pdev, IMX_SAI_DMABUF_SIZE);
+ 		if (ret)
+@@ -1167,6 +1166,11 @@ static int fsl_sai_probe(struct platform_device *pdev)
+ 			goto err_pm_get_sync;
+ 	}
+ 
++	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_component,
++					      &sai->cpu_dai_drv, 1);
++	if (ret)
++		goto err_pm_get_sync;
++
+ 	return ret;
+ 
+ err_pm_get_sync:
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index 8ffb1a6048d63..1c53719bb61e2 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -1434,16 +1434,20 @@ static int fsl_spdif_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 	regcache_cache_only(spdif_priv->regmap, true);
+ 
+-	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
+-					      &spdif_priv->cpu_dai_drv, 1);
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
++	ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
++		dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
+ 		goto err_pm_disable;
+ 	}
+ 
+-	ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
++	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
++					      &spdif_priv->cpu_dai_drv, 1);
+ 	if (ret) {
+-		dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
++		dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
+ 		goto err_pm_disable;
+ 	}
+ 
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index fb7c29fc39d75..477d16713e72e 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -1217,18 +1217,23 @@ static int fsl_xcvr_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	regcache_cache_only(xcvr->regmap, true);
+ 
++	/*
++	 * Register platform component before registering cpu dai for there
++	 * is not defer probe for platform component in snd_soc_add_pcm_runtime().
++	 */
++	ret = devm_snd_dmaengine_pcm_register(dev, NULL, 0);
++	if (ret) {
++		dev_err(dev, "failed to pcm register\n");
++		return ret;
++	}
++
+ 	ret = devm_snd_soc_register_component(dev, &fsl_xcvr_comp,
+ 					      &fsl_xcvr_dai, 1);
+ 	if (ret) {
+ 		dev_err(dev, "failed to register component %s\n",
+ 			fsl_xcvr_comp.name);
+-		return ret;
+ 	}
+ 
+-	ret = devm_snd_dmaengine_pcm_register(dev, NULL, 0);
+-	if (ret)
+-		dev_err(dev, "failed to pcm register\n");
+-
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/common/mtk-afe-fe-dai.c b/sound/soc/mediatek/common/mtk-afe-fe-dai.c
+index 3cb2adf420bbf..ab7bbd53bb013 100644
+--- a/sound/soc/mediatek/common/mtk-afe-fe-dai.c
++++ b/sound/soc/mediatek/common/mtk-afe-fe-dai.c
+@@ -334,9 +334,11 @@ int mtk_afe_suspend(struct snd_soc_component *component)
+ 			devm_kcalloc(dev, afe->reg_back_up_list_num,
+ 				     sizeof(unsigned int), GFP_KERNEL);
+ 
+-	for (i = 0; i < afe->reg_back_up_list_num; i++)
+-		regmap_read(regmap, afe->reg_back_up_list[i],
+-			    &afe->reg_back_up[i]);
++	if (afe->reg_back_up) {
++		for (i = 0; i < afe->reg_back_up_list_num; i++)
++			regmap_read(regmap, afe->reg_back_up_list[i],
++				    &afe->reg_back_up[i]);
++	}
+ 
+ 	afe->suspended = true;
+ 	afe->runtime_suspend(dev);
+@@ -356,12 +358,13 @@ int mtk_afe_resume(struct snd_soc_component *component)
+ 
+ 	afe->runtime_resume(dev);
+ 
+-	if (!afe->reg_back_up)
++	if (!afe->reg_back_up) {
+ 		dev_dbg(dev, "%s no reg_backup\n", __func__);
+-
+-	for (i = 0; i < afe->reg_back_up_list_num; i++)
+-		mtk_regmap_write(regmap, afe->reg_back_up_list[i],
+-				 afe->reg_back_up[i]);
++	} else {
++		for (i = 0; i < afe->reg_back_up_list_num; i++)
++			mtk_regmap_write(regmap, afe->reg_back_up_list[i],
++					 afe->reg_back_up[i]);
++	}
+ 
+ 	afe->suspended = false;
+ 	return 0;
+diff --git a/sound/soc/sof/imx/imx8.c b/sound/soc/sof/imx/imx8.c
+index 12fedf0984bd9..7e9723a10d02e 100644
+--- a/sound/soc/sof/imx/imx8.c
++++ b/sound/soc/sof/imx/imx8.c
+@@ -365,7 +365,14 @@ static int imx8_remove(struct snd_sof_dev *sdev)
+ /* on i.MX8 there is 1 to 1 match between type and BAR idx */
+ static int imx8_get_bar_index(struct snd_sof_dev *sdev, u32 type)
+ {
+-	return type;
++	/* Only IRAM and SRAM bars are valid */
++	switch (type) {
++	case SOF_FW_BLK_TYPE_IRAM:
++	case SOF_FW_BLK_TYPE_SRAM:
++		return type;
++	default:
++		return -EINVAL;
++	}
+ }
+ 
+ static void imx8_ipc_msg_data(struct snd_sof_dev *sdev,
+diff --git a/sound/soc/sof/imx/imx8m.c b/sound/soc/sof/imx/imx8m.c
+index cb822d9537678..892e1482f97fa 100644
+--- a/sound/soc/sof/imx/imx8m.c
++++ b/sound/soc/sof/imx/imx8m.c
+@@ -228,7 +228,14 @@ static int imx8m_remove(struct snd_sof_dev *sdev)
+ /* on i.MX8 there is 1 to 1 match between type and BAR idx */
+ static int imx8m_get_bar_index(struct snd_sof_dev *sdev, u32 type)
+ {
+-	return type;
++	/* Only IRAM and SRAM bars are valid */
++	switch (type) {
++	case SOF_FW_BLK_TYPE_IRAM:
++	case SOF_FW_BLK_TYPE_SRAM:
++		return type;
++	default:
++		return -EINVAL;
++	}
+ }
+ 
+ static void imx8m_ipc_msg_data(struct snd_sof_dev *sdev,
+diff --git a/sound/soc/sof/xtensa/core.c b/sound/soc/sof/xtensa/core.c
+index bbb9a2282ed9e..f6e3411b33cf1 100644
+--- a/sound/soc/sof/xtensa/core.c
++++ b/sound/soc/sof/xtensa/core.c
+@@ -122,9 +122,9 @@ static void xtensa_stack(struct snd_sof_dev *sdev, void *oops, u32 *stack,
+ 	 * 0x0049fbb0: 8000f2d0 0049fc00 6f6c6c61 00632e63
+ 	 */
+ 	for (i = 0; i < stack_words; i += 4) {
+-		hex_dump_to_buffer(stack + i * 4, 16, 16, 4,
++		hex_dump_to_buffer(stack + i, 16, 16, 4,
+ 				   buf, sizeof(buf), false);
+-		dev_err(sdev->dev, "0x%08x: %s\n", stack_ptr + i, buf);
++		dev_err(sdev->dev, "0x%08x: %s\n", stack_ptr + i * 4, buf);
+ 	}
+ }
+ 
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 10911a8cad0f2..2df880cefdaee 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -1649,11 +1649,17 @@ static bool btf_is_non_static(const struct btf_type *t)
+ static int find_glob_sym_btf(struct src_obj *obj, Elf64_Sym *sym, const char *sym_name,
+ 			     int *out_btf_sec_id, int *out_btf_id)
+ {
+-	int i, j, n = btf__get_nr_types(obj->btf), m, btf_id = 0;
++	int i, j, n, m, btf_id = 0;
+ 	const struct btf_type *t;
+ 	const struct btf_var_secinfo *vi;
+ 	const char *name;
+ 
++	if (!obj->btf) {
++		pr_warn("failed to find BTF info for object '%s'\n", obj->filename);
++		return -EINVAL;
++	}
++
++	n = btf__get_nr_types(obj->btf);
+ 	for (i = 1; i <= n; i++) {
+ 		t = btf__type_by_id(obj->btf, i);
+ 
+diff --git a/tools/objtool/special.c b/tools/objtool/special.c
+index bc925cf19e2de..f1428e32a5052 100644
+--- a/tools/objtool/special.c
++++ b/tools/objtool/special.c
+@@ -58,6 +58,24 @@ void __weak arch_handle_alternative(unsigned short feature, struct special_alt *
+ {
+ }
+ 
++static bool reloc2sec_off(struct reloc *reloc, struct section **sec, unsigned long *off)
++{
++	switch (reloc->sym->type) {
++	case STT_FUNC:
++		*sec = reloc->sym->sec;
++		*off = reloc->sym->offset + reloc->addend;
++		return true;
++
++	case STT_SECTION:
++		*sec = reloc->sym->sec;
++		*off = reloc->addend;
++		return true;
++
++	default:
++		return false;
++	}
++}
++
+ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 			 struct section *sec, int idx,
+ 			 struct special_alt *alt)
+@@ -91,15 +109,14 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 		WARN_FUNC("can't find orig reloc", sec, offset + entry->orig);
+ 		return -1;
+ 	}
+-	if (orig_reloc->sym->type != STT_SECTION) {
+-		WARN_FUNC("don't know how to handle non-section reloc symbol %s",
+-			   sec, offset + entry->orig, orig_reloc->sym->name);
++	if (!reloc2sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off)) {
++		WARN_FUNC("don't know how to handle reloc symbol type %d: %s",
++			   sec, offset + entry->orig,
++			   orig_reloc->sym->type,
++			   orig_reloc->sym->name);
+ 		return -1;
+ 	}
+ 
+-	alt->orig_sec = orig_reloc->sym->sec;
+-	alt->orig_off = orig_reloc->addend;
+-
+ 	if (!entry->group || alt->new_len) {
+ 		new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new);
+ 		if (!new_reloc) {
+@@ -116,8 +133,13 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 		if (arch_is_retpoline(new_reloc->sym))
+ 			return 1;
+ 
+-		alt->new_sec = new_reloc->sym->sec;
+-		alt->new_off = (unsigned int)new_reloc->addend;
++		if (!reloc2sec_off(new_reloc, &alt->new_sec, &alt->new_off)) {
++			WARN_FUNC("don't know how to handle reloc symbol type %d: %s",
++				  sec, offset + entry->new,
++				  new_reloc->sym->type,
++				  new_reloc->sym->name);
++			return -1;
++		}
+ 
+ 		/* _ASM_EXTABLE_EX hack */
+ 		if (alt->new_off >= 0x7ffffff0)
+diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c
+index eeafe97b8105b..792cd75ade33d 100644
+--- a/tools/perf/arch/x86/util/iostat.c
++++ b/tools/perf/arch/x86/util/iostat.c
+@@ -432,7 +432,7 @@ void iostat_print_metric(struct perf_stat_config *config, struct evsel *evsel,
+ 	u8 die = ((struct iio_root_port *)evsel->priv)->die;
+ 	struct perf_counts_values *count = perf_counts(evsel->counts, die, 0);
+ 
+-	if (count->run && count->ena) {
++	if (count && count->run && count->ena) {
+ 		if (evsel->prev_raw_counts && !out->force_header) {
+ 			struct perf_counts_values *prev_count =
+ 				perf_counts(evsel->prev_raw_counts, die, 0);
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 634375937db96..36033a7372f91 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -2406,6 +2406,8 @@ int cmd_stat(int argc, const char **argv)
+ 			goto out;
+ 		} else if (verbose)
+ 			iostat_list(evsel_list, &stat_config);
++		if (iostat_mode == IOSTAT_RUN && !target__has_cpu(&target))
++			target.system_wide = true;
+ 	}
+ 
+ 	if (add_default_attributes())
+diff --git a/tools/perf/tests/dwarf-unwind.c b/tools/perf/tests/dwarf-unwind.c
+index a288035eb3626..c756284b3b135 100644
+--- a/tools/perf/tests/dwarf-unwind.c
++++ b/tools/perf/tests/dwarf-unwind.c
+@@ -20,6 +20,23 @@
+ /* For bsearch. We try to unwind functions in shared object. */
+ #include <stdlib.h>
+ 
++/*
++ * The test will assert frames are on the stack but tail call optimizations lose
++ * the frame of the caller. Clang can disable this optimization on a called
++ * function but GCC currently (11/2020) lacks this attribute. The barrier is
++ * used to inhibit tail calls in these cases.
++ */
++#ifdef __has_attribute
++#if __has_attribute(disable_tail_calls)
++#define NO_TAIL_CALL_ATTRIBUTE __attribute__((disable_tail_calls))
++#define NO_TAIL_CALL_BARRIER
++#endif
++#endif
++#ifndef NO_TAIL_CALL_ATTRIBUTE
++#define NO_TAIL_CALL_ATTRIBUTE
++#define NO_TAIL_CALL_BARRIER __asm__ __volatile__("" : : : "memory");
++#endif
++
+ static int mmap_handler(struct perf_tool *tool __maybe_unused,
+ 			union perf_event *event,
+ 			struct perf_sample *sample,
+@@ -91,7 +108,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	return strcmp((const char *) symbol, funcs[idx]);
+ }
+ 
+-noinline int test_dwarf_unwind__thread(struct thread *thread)
++NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__thread(struct thread *thread)
+ {
+ 	struct perf_sample sample;
+ 	unsigned long cnt = 0;
+@@ -122,7 +139,7 @@ noinline int test_dwarf_unwind__thread(struct thread *thread)
+ 
+ static int global_unwind_retval = -INT_MAX;
+ 
+-noinline int test_dwarf_unwind__compare(void *p1, void *p2)
++NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__compare(void *p1, void *p2)
+ {
+ 	/* Any possible value should be 'thread' */
+ 	struct thread *thread = *(struct thread **)p1;
+@@ -141,7 +158,7 @@ noinline int test_dwarf_unwind__compare(void *p1, void *p2)
+ 	return p1 - p2;
+ }
+ 
+-noinline int test_dwarf_unwind__krava_3(struct thread *thread)
++NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_3(struct thread *thread)
+ {
+ 	struct thread *array[2] = {thread, thread};
+ 	void *fp = &bsearch;
+@@ -160,14 +177,22 @@ noinline int test_dwarf_unwind__krava_3(struct thread *thread)
+ 	return global_unwind_retval;
+ }
+ 
+-noinline int test_dwarf_unwind__krava_2(struct thread *thread)
++NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_2(struct thread *thread)
+ {
+-	return test_dwarf_unwind__krava_3(thread);
++	int ret;
++
++	ret =  test_dwarf_unwind__krava_3(thread);
++	NO_TAIL_CALL_BARRIER;
++	return ret;
+ }
+ 
+-noinline int test_dwarf_unwind__krava_1(struct thread *thread)
++NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__krava_1(struct thread *thread)
+ {
+-	return test_dwarf_unwind__krava_2(thread);
++	int ret;
++
++	ret =  test_dwarf_unwind__krava_2(thread);
++	NO_TAIL_CALL_BARRIER;
++	return ret;
+ }
+ 
+ int test__dwarf_unwind(struct test *test __maybe_unused, int subtest __maybe_unused)
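
To see why the barrier works where the attribute is unavailable: any instruction after the call site stops the compiler from emitting a tail jump, so the caller's frame stays on the stack for the unwinder. A minimal standalone demonstration reusing the same macro definitions from the hunk above:

    #ifdef __has_attribute
    #if __has_attribute(disable_tail_calls)
    #define NO_TAIL_CALL_ATTRIBUTE __attribute__((disable_tail_calls))
    #define NO_TAIL_CALL_BARRIER
    #endif
    #endif
    #ifndef NO_TAIL_CALL_ATTRIBUTE
    #define NO_TAIL_CALL_ATTRIBUTE
    #define NO_TAIL_CALL_BARRIER __asm__ __volatile__("" : : : "memory");
    #endif

    static int leaf(int x) { return x + 1; }

    NO_TAIL_CALL_ATTRIBUTE static int caller(int x)
    {
    	int ret;

    	ret = leaf(x);		/* would otherwise compile to a jump */
    	NO_TAIL_CALL_BARRIER;
    	return ret;
    }

    int main(void) { return caller(41) - 42; }
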
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index f405b20c1e6c5..93f1f124ef89b 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -374,7 +374,8 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 		     $(TRUNNER_BPF_PROGS_DIR)/%.c			\
+ 		     $(TRUNNER_BPF_PROGS_DIR)/*.h			\
+ 		     $$(INCLUDE_DIR)/vmlinux.h				\
+-		     $(wildcard $(BPFDIR)/bpf_*.h) | $(TRUNNER_OUTPUT)
++		     $(wildcard $(BPFDIR)/bpf_*.h)			\
++		     | $(TRUNNER_OUTPUT) $$(BPFOBJ)
+ 	$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@,			\
+ 					  $(TRUNNER_BPF_CFLAGS))
+ 
+diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+index 59ea56945e6cd..b497bb85b667f 100755
+--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+@@ -112,6 +112,14 @@ setup()
+ 	ip netns add "${NS2}"
+ 	ip netns add "${NS3}"
+ 
++	# rp_filter gets confused by what these tests are doing, so disable it
++	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
++	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.default.rp_filter=0
++	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
++	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
++
+ 	ip link add veth1 type veth peer name veth2
+ 	ip link add veth3 type veth peer name veth4
+ 	ip link add veth5 type veth peer name veth6
+@@ -236,11 +244,6 @@ setup()
+ 	ip -netns ${NS1} -6 route add ${IPv6_GRE}/128 dev veth5 via ${IPv6_6} ${VRF}
+ 	ip -netns ${NS2} -6 route add ${IPv6_GRE}/128 dev veth7 via ${IPv6_8} ${VRF}
+ 
+-	# rp_filter gets confused by what these tests are doing, so disable it
+-	ip netns exec ${NS1} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${NS2} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-	ip netns exec ${NS3} sysctl -wq net.ipv4.conf.all.rp_filter=0
+-
+ 	TMPFILE=$(mktemp /tmp/test_lwt_ip_encap.XXXXXX)
+ 
+ 	sleep 1  # reduce flakiness


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-09 21:30 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-09 21:30 UTC (permalink / raw
  To: gentoo-commits

commit:     f745b9507cb522a1a06e93f2dcd1f04d9d0ec998
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct  9 21:30:06 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct  9 21:30:06 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f745b950

Linux patch 5.14.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-5.14.11.patch | 1738 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1742 insertions(+)

diff --git a/0000_README b/0000_README
index 11074a3..6096d94 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1009_linux-5.14.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.10
 
+Patch:  1010_linux-5.14.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.14.11.patch b/1010_linux-5.14.11.patch
new file mode 100644
index 0000000..ce6e1f2
--- /dev/null
+++ b/1010_linux-5.14.11.patch
@@ -0,0 +1,1738 @@
+diff --git a/Makefile b/Makefile
+index 9f99a61d2589b..ca6c4472775cb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/mach-imx/mach-imx6q.c b/arch/arm/mach-imx/mach-imx6q.c
+index 11dcc369ec14a..c9d7c29d95e1e 100644
+--- a/arch/arm/mach-imx/mach-imx6q.c
++++ b/arch/arm/mach-imx/mach-imx6q.c
+@@ -172,6 +172,9 @@ static void __init imx6q_init_machine(void)
+ 				imx_get_soc_revision());
+ 
+ 	imx6q_enet_phy_init();
++
++	of_platform_default_populate(NULL, NULL, NULL);
++
+ 	imx_anatop_init();
+ 	cpu_is_imx6q() ?  imx6q_pm_init() : imx6dl_pm_init();
+ 	imx6q_1588_init();
+diff --git a/arch/sparc/lib/iomap.c b/arch/sparc/lib/iomap.c
+index c9da9f139694d..f3a8cd491ce0d 100644
+--- a/arch/sparc/lib/iomap.c
++++ b/arch/sparc/lib/iomap.c
+@@ -19,8 +19,10 @@ void ioport_unmap(void __iomem *addr)
+ EXPORT_SYMBOL(ioport_map);
+ EXPORT_SYMBOL(ioport_unmap);
+ 
++#ifdef CONFIG_PCI
+ void pci_iounmap(struct pci_dev *dev, void __iomem * addr)
+ {
+ 	/* nothing to do */
+ }
+ EXPORT_SYMBOL(pci_iounmap);
++#endif
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 3092fbf9dbe4c..98729ce899175 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2467,6 +2467,7 @@ static int x86_pmu_event_init(struct perf_event *event)
+ 	if (err) {
+ 		if (event->destroy)
+ 			event->destroy(event);
++		event->destroy = NULL;
+ 	}
+ 
+ 	if (READ_ONCE(x86_pmu.attr_rdpmc) &&
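
The one-liner above guards against a double invocation: the error path runs event->destroy(), but the generic teardown would later call the same callback again on the half-initialized event. Clearing the callback after use is the general shape:

    #include <stddef.h>

    struct event {
    	void (*destroy)(struct event *ev);
    	/* ... */
    };

    static void init_error_path(struct event *ev)
    {
    	if (ev->destroy)
    		ev->destroy(ev);
    	ev->destroy = NULL;	/* later teardown now sees nothing to do */
    }
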
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 69639f9624f56..19d6ffdd3f736 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1601,6 +1601,8 @@ static void svm_clear_vintr(struct vcpu_svm *svm)
+ 
+ 		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
+ 			V_IRQ_INJECTION_BITS_MASK;
++
++		svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
+ 	}
+ 
+ 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 6d5d6e93f5c41..4b0e866e9f086 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1327,6 +1327,13 @@ static const u32 msrs_to_save_all[] = {
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
+ 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
++
++	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
++	MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
++	MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
++	MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
++	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
++	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
+ };
+ 
+ static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
+@@ -7659,6 +7666,13 @@ static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
+ 
+ 		/* Process a latched INIT or SMI, if any.  */
+ 		kvm_make_request(KVM_REQ_EVENT, vcpu);
++
++		/*
++		 * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
++		 * on SMM exit we still need to reload them from
++		 * guest memory
++		 */
++		vcpu->arch.pdptrs_from_userspace = false;
+ 	}
+ 
+ 	kvm_mmu_reset_context(vcpu);
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index 058f19b20465f..c565def611e24 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -37,10 +37,10 @@
+ 	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
+ 
+ #define __get_next(t, insn)	\
+-	({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
++	({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ 
+ #define __peek_nbyte_next(t, insn, n)	\
+-	({ t r = *(t*)((insn)->next_byte + n); leXX_to_cpu(t, r); })
++	({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
+ 
+ #define get_next(t, insn)	\
+ 	({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
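
Dereferencing the instruction stream through a cast pointer, as the old macros did, is undefined for misaligned addresses and under strict aliasing; memcpy() into a local is the portable replacement, and compilers fold it into a single load wherever the target allows unaligned access. The idiom in isolation (endianness conversion left out):

    #include <stdint.h>
    #include <string.h>

    static uint32_t read_u32(const unsigned char *p)
    {
    	uint32_t r;

    	memcpy(&r, p, sizeof(r));	/* safe for any alignment */
    	return r;			/* leXX_to_cpu() step omitted */
    }
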
+diff --git a/block/bio.c b/block/bio.c
+index d95e3456ba0c5..52548c4878836 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1396,7 +1396,7 @@ again:
+ 	if (!bio_integrity_endio(bio))
+ 		return;
+ 
+-	if (bio->bi_bdev)
++	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACKED))
+ 		rq_qos_done_bio(bio->bi_bdev->bd_disk->queue, bio);
+ 
+ 	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 0e6e73b8023fc..8916163d508e0 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2199,6 +2199,25 @@ static void ata_dev_config_ncq_prio(struct ata_device *dev)
+ 
+ }
+ 
++static bool ata_dev_check_adapter(struct ata_device *dev,
++				  unsigned short vendor_id)
++{
++	struct pci_dev *pcidev = NULL;
++	struct device *parent_dev = NULL;
++
++	for (parent_dev = dev->tdev.parent; parent_dev != NULL;
++	     parent_dev = parent_dev->parent) {
++		if (dev_is_pci(parent_dev)) {
++			pcidev = to_pci_dev(parent_dev);
++			if (pcidev->vendor == vendor_id)
++				return true;
++			break;
++		}
++	}
++
++	return false;
++}
++
+ static int ata_dev_config_ncq(struct ata_device *dev,
+ 			       char *desc, size_t desc_sz)
+ {
+@@ -2217,6 +2236,13 @@ static int ata_dev_config_ncq(struct ata_device *dev,
+ 		snprintf(desc, desc_sz, "NCQ (not used)");
+ 		return 0;
+ 	}
++
++	if (dev->horkage & ATA_HORKAGE_NO_NCQ_ON_ATI &&
++	    ata_dev_check_adapter(dev, PCI_VENDOR_ID_ATI)) {
++		snprintf(desc, desc_sz, "NCQ (not used)");
++		return 0;
++	}
++
+ 	if (ap->flags & ATA_FLAG_NCQ) {
+ 		hdepth = min(ap->scsi_host->can_queue, ATA_MAX_QUEUE);
+ 		dev->flags |= ATA_DFLAG_NCQ;
+@@ -3951,9 +3977,11 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 860*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+-						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++						ATA_HORKAGE_ZERO_AFTER_TRIM |
++						ATA_HORKAGE_NO_NCQ_ON_ATI, },
+ 	{ "Samsung SSD 870*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+-						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++						ATA_HORKAGE_ZERO_AFTER_TRIM |
++						ATA_HORKAGE_NO_NCQ_ON_ATI, },
+ 	{ "FCCT*M500*",			NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+@@ -6108,6 +6136,8 @@ static int __init ata_parse_force_one(char **cur,
+ 		{ "ncq",	.horkage_off	= ATA_HORKAGE_NONCQ },
+ 		{ "noncqtrim",	.horkage_on	= ATA_HORKAGE_NO_NCQ_TRIM },
+ 		{ "ncqtrim",	.horkage_off	= ATA_HORKAGE_NO_NCQ_TRIM },
++		{ "noncqati",	.horkage_on	= ATA_HORKAGE_NO_NCQ_ON_ATI },
++		{ "ncqati",	.horkage_off	= ATA_HORKAGE_NO_NCQ_ON_ATI },
+ 		{ "dump_id",	.horkage_on	= ATA_HORKAGE_DUMP_ID },
+ 		{ "pio0",	.xfer_mask	= 1 << (ATA_SHIFT_PIO + 0) },
+ 		{ "pio1",	.xfer_mask	= 1 << (ATA_SHIFT_PIO + 1) },
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 5a872adcfdb98..5ba8a4f353440 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -922,7 +922,6 @@ out:
+ void kgd2kfd_device_exit(struct kfd_dev *kfd)
+ {
+ 	if (kfd->init_complete) {
+-		svm_migrate_fini((struct amdgpu_device *)kfd->kgd);
+ 		device_queue_manager_uninit(kfd->dqm);
+ 		kfd_interrupt_exit(kfd);
+ 		kfd_topology_remove_device(kfd);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+index dab290a4d19d1..4a16e3c257b92 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+@@ -891,9 +891,16 @@ int svm_migrate_init(struct amdgpu_device *adev)
+ 	pgmap->ops = &svm_migrate_pgmap_ops;
+ 	pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev);
+ 	pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
++
++	/* Device manager releases device-specific resources, memory region and
++	 * pgmap when driver disconnects from device.
++	 */
+ 	r = devm_memremap_pages(adev->dev, pgmap);
+ 	if (IS_ERR(r)) {
+ 		pr_err("failed to register HMM device memory\n");
++
++		/* Disable SVM support capability */
++		pgmap->type = 0;
+ 		devm_release_mem_region(adev->dev, res->start,
+ 					res->end - res->start + 1);
+ 		return PTR_ERR(r);
+@@ -908,12 +915,3 @@ int svm_migrate_init(struct amdgpu_device *adev)
+ 
+ 	return 0;
+ }
+-
+-void svm_migrate_fini(struct amdgpu_device *adev)
+-{
+-	struct dev_pagemap *pgmap = &adev->kfd.dev->pgmap;
+-
+-	devm_memunmap_pages(adev->dev, pgmap);
+-	devm_release_mem_region(adev->dev, pgmap->range.start,
+-				pgmap->range.end - pgmap->range.start + 1);
+-}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
+index 0de76b5d49739..2f5b3394c9ed9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
+@@ -47,7 +47,6 @@ unsigned long
+ svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr);
+ 
+ int svm_migrate_init(struct amdgpu_device *adev);
+-void svm_migrate_fini(struct amdgpu_device *adev);
+ 
+ #else
+ 
+@@ -55,10 +54,6 @@ static inline int svm_migrate_init(struct amdgpu_device *adev)
+ {
+ 	return 0;
+ }
+-static inline void svm_migrate_fini(struct amdgpu_device *adev)
+-{
+-	/* empty */
+-}
+ 
+ #endif /* IS_ENABLED(CONFIG_HSA_AMD_SVM) */
+ 
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index d329ec3d64d81..5f22c9d65e578 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -107,6 +107,8 @@ static DEFINE_RAW_SPINLOCK(cpu_map_lock);
+ 
+ #endif
+ 
++static DEFINE_STATIC_KEY_FALSE(needs_rmw_access);
++
+ /*
+  * The GIC mapping of CPU interfaces does not necessarily match
+  * the logical CPU numbering.  Let's use a mapping as returned
+@@ -774,6 +776,25 @@ static int gic_pm_init(struct gic_chip_data *gic)
+ #endif
+ 
+ #ifdef CONFIG_SMP
++static void rmw_writeb(u8 bval, void __iomem *addr)
++{
++	static DEFINE_RAW_SPINLOCK(rmw_lock);
++	unsigned long offset = (unsigned long)addr & 3UL;
++	unsigned long shift = offset * 8;
++	unsigned long flags;
++	u32 val;
++
++	raw_spin_lock_irqsave(&rmw_lock, flags);
++
++	addr -= offset;
++	val = readl_relaxed(addr);
++	val &= ~GENMASK(shift + 7, shift);
++	val |= bval << shift;
++	writel_relaxed(val, addr);
++
++	raw_spin_unlock_irqrestore(&rmw_lock, flags);
++}
++
+ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ 			    bool force)
+ {
+@@ -788,7 +809,10 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+ 	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+ 		return -EINVAL;
+ 
+-	writeb_relaxed(gic_cpu_map[cpu], reg);
++	if (static_branch_unlikely(&needs_rmw_access))
++		rmw_writeb(gic_cpu_map[cpu], reg);
++	else
++		writeb_relaxed(gic_cpu_map[cpu], reg);
+ 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+ 
+ 	return IRQ_SET_MASK_OK_DONE;
+@@ -1375,6 +1399,30 @@ static bool gic_check_eoimode(struct device_node *node, void __iomem **base)
+ 	return true;
+ }
+ 
++static bool gic_enable_rmw_access(void *data)
++{
++	/*
++	 * The EMEV2 class of machines has a broken interconnect, and
++	 * locks up on accesses that are less than 32bit. So far, only
++	 * the affinity setting requires it.
++	 */
++	if (of_machine_is_compatible("renesas,emev2")) {
++		static_branch_enable(&needs_rmw_access);
++		return true;
++	}
++
++	return false;
++}
++
++static const struct gic_quirk gic_quirks[] = {
++	{
++		.desc		= "broken byte access",
++		.compatible	= "arm,pl390",
++		.init		= gic_enable_rmw_access,
++	},
++	{ },
++};
++
+ static int gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
+ {
+ 	if (!gic || !node)
+@@ -1391,6 +1439,8 @@ static int gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
+ 	if (of_property_read_u32(node, "cpu-offset", &gic->percpu_offset))
+ 		gic->percpu_offset = 0;
+ 
++	gic_enable_of_quirks(node, gic_quirks, gic);
++
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/misc/habanalabs/common/command_submission.c b/drivers/misc/habanalabs/common/command_submission.c
+index 80c60fb41bbca..d249101106dea 100644
+--- a/drivers/misc/habanalabs/common/command_submission.c
++++ b/drivers/misc/habanalabs/common/command_submission.c
+@@ -1727,6 +1727,15 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type,
+ 			goto free_cs_chunk_array;
+ 		}
+ 
++		if (!hdev->nic_ports_mask) {
++			atomic64_inc(&ctx->cs_counters.validation_drop_cnt);
++			atomic64_inc(&cntr->validation_drop_cnt);
++			dev_err(hdev->dev,
++				"Collective operations not supported when NIC ports are disabled");
++			rc = -EINVAL;
++			goto free_cs_chunk_array;
++		}
++
+ 		collective_engine_id = chunk->collective_engine_id;
+ 	}
+ 
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 409f05c962f24..8a9c4f0f37f96 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -5620,6 +5620,7 @@ static void gaudi_add_end_of_cb_packets(struct hl_device *hdev,
+ {
+ 	struct gaudi_device *gaudi = hdev->asic_specific;
+ 	struct packet_msg_prot *cq_pkt;
++	u64 msi_addr;
+ 	u32 tmp;
+ 
+ 	cq_pkt = kernel_address + len - (sizeof(struct packet_msg_prot) * 2);
+@@ -5641,10 +5642,12 @@ static void gaudi_add_end_of_cb_packets(struct hl_device *hdev,
+ 	cq_pkt->ctl = cpu_to_le32(tmp);
+ 	cq_pkt->value = cpu_to_le32(1);
+ 
+-	if (!gaudi->multi_msi_mode)
+-		msi_vec = 0;
++	if (gaudi->multi_msi_mode)
++		msi_addr = mmPCIE_MSI_INTR_0 + msi_vec * 4;
++	else
++		msi_addr = mmPCIE_CORE_MSI_REQ;
+ 
+-	cq_pkt->addr = cpu_to_le64(CFG_BASE + mmPCIE_MSI_INTR_0 + msi_vec * 4);
++	cq_pkt->addr = cpu_to_le64(CFG_BASE + msi_addr);
+ }
+ 
+ static void gaudi_update_eq_ci(struct hl_device *hdev, u32 val)
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi_security.c b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+index 0d3240f1f7d76..2b8bafda41bcc 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi_security.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+@@ -8,16 +8,21 @@
+ #include "gaudiP.h"
+ #include "../include/gaudi/asic_reg/gaudi_regs.h"
+ 
+-#define GAUDI_NUMBER_OF_RR_REGS		24
+-#define GAUDI_NUMBER_OF_LBW_RANGES	12
++#define GAUDI_NUMBER_OF_LBW_RR_REGS	28
++#define GAUDI_NUMBER_OF_HBW_RR_REGS	24
++#define GAUDI_NUMBER_OF_LBW_RANGES	10
+ 
+-static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_HIT_WPROT,
+ 	mmDMA_IF_W_S_DMA0_HIT_WPROT,
+ 	mmDMA_IF_W_S_DMA1_HIT_WPROT,
++	mmDMA_IF_E_S_SOB_HIT_WPROT,
+ 	mmDMA_IF_E_S_DMA0_HIT_WPROT,
+ 	mmDMA_IF_E_S_DMA1_HIT_WPROT,
++	mmDMA_IF_W_N_SOB_HIT_WPROT,
+ 	mmDMA_IF_W_N_DMA0_HIT_WPROT,
+ 	mmDMA_IF_W_N_DMA1_HIT_WPROT,
++	mmDMA_IF_E_N_SOB_HIT_WPROT,
+ 	mmDMA_IF_E_N_DMA0_HIT_WPROT,
+ 	mmDMA_IF_E_N_DMA1_HIT_WPROT,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AW,
+@@ -38,13 +43,17 @@ static u64 gaudi_rr_lbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AW,
+ };
+ 
+-static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_HIT_RPROT,
+ 	mmDMA_IF_W_S_DMA0_HIT_RPROT,
+ 	mmDMA_IF_W_S_DMA1_HIT_RPROT,
++	mmDMA_IF_E_S_SOB_HIT_RPROT,
+ 	mmDMA_IF_E_S_DMA0_HIT_RPROT,
+ 	mmDMA_IF_E_S_DMA1_HIT_RPROT,
++	mmDMA_IF_W_N_SOB_HIT_RPROT,
+ 	mmDMA_IF_W_N_DMA0_HIT_RPROT,
+ 	mmDMA_IF_W_N_DMA1_HIT_RPROT,
++	mmDMA_IF_E_N_SOB_HIT_RPROT,
+ 	mmDMA_IF_E_N_DMA0_HIT_RPROT,
+ 	mmDMA_IF_E_N_DMA1_HIT_RPROT,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_HIT_AR,
+@@ -65,13 +74,17 @@ static u64 gaudi_rr_lbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_HIT_AR,
+ };
+ 
+-static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MIN_WPROT_0,
++	mmDMA_IF_E_S_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MIN_WPROT_0,
++	mmDMA_IF_W_N_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MIN_WPROT_0,
++	mmDMA_IF_E_N_SOB_MIN_WPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MIN_WPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MIN_WPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AW_0,
+@@ -92,13 +105,17 @@ static u64 gaudi_rr_lbw_min_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AW_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MAX_WPROT_0,
++	mmDMA_IF_E_S_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MAX_WPROT_0,
++	mmDMA_IF_W_N_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MAX_WPROT_0,
++	mmDMA_IF_E_N_SOB_MAX_WPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MAX_WPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MAX_WPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AW_0,
+@@ -119,13 +136,17 @@ static u64 gaudi_rr_lbw_max_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AW_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MIN_RPROT_0,
++	mmDMA_IF_E_S_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MIN_RPROT_0,
++	mmDMA_IF_W_N_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MIN_RPROT_0,
++	mmDMA_IF_E_N_SOB_MIN_RPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MIN_RPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MIN_RPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MIN_AR_0,
+@@ -146,13 +167,17 @@ static u64 gaudi_rr_lbw_min_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MIN_AR_0,
+ };
+ 
+-static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_LBW_RR_REGS] = {
++	mmDMA_IF_W_S_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_W_S_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_W_S_DMA1_MAX_RPROT_0,
++	mmDMA_IF_E_S_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_E_S_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_E_S_DMA1_MAX_RPROT_0,
++	mmDMA_IF_W_N_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_W_N_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_W_N_DMA1_MAX_RPROT_0,
++	mmDMA_IF_E_N_SOB_MAX_RPROT_0,
+ 	mmDMA_IF_E_N_DMA0_MAX_RPROT_0,
+ 	mmDMA_IF_E_N_DMA1_MAX_RPROT_0,
+ 	mmSIF_RTR_0_LBW_RANGE_PROT_MAX_AR_0,
+@@ -173,7 +198,7 @@ static u64 gaudi_rr_lbw_max_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_7_LBW_RANGE_PROT_MAX_AR_0,
+ };
+ 
+-static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AW,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AW,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AW,
+@@ -200,7 +225,7 @@ static u64 gaudi_rr_hbw_hit_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AW
+ };
+ 
+-static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_HIT_AR,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_HIT_AR,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_HIT_AR,
+@@ -227,7 +252,7 @@ static u64 gaudi_rr_hbw_hit_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_HIT_AR
+ };
+ 
+-static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AW_0,
+@@ -254,7 +279,7 @@ static u64 gaudi_rr_hbw_base_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AW_0,
+@@ -281,7 +306,7 @@ static u64 gaudi_rr_hbw_base_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AW_0,
+@@ -308,7 +333,7 @@ static u64 gaudi_rr_hbw_mask_low_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AW_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AW_0,
+@@ -335,7 +360,7 @@ static u64 gaudi_rr_hbw_mask_high_aw_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_HIGH_AW_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_LOW_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_LOW_AR_0,
+@@ -362,7 +387,7 @@ static u64 gaudi_rr_hbw_base_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_LOW_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_BASE_HIGH_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_BASE_HIGH_AR_0,
+@@ -389,7 +414,7 @@ static u64 gaudi_rr_hbw_base_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_BASE_HIGH_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_LOW_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_LOW_AR_0,
+@@ -416,7 +441,7 @@ static u64 gaudi_rr_hbw_mask_low_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
+ 	mmNIF_RTR_CTRL_7_RANGE_SEC_MASK_LOW_AR_0
+ };
+ 
+-static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_RR_REGS] = {
++static u64 gaudi_rr_hbw_mask_high_ar_regs[GAUDI_NUMBER_OF_HBW_RR_REGS] = {
+ 	mmDMA_IF_W_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
+ 	mmDMA_IF_W_S_DOWN_CH1_RANGE_SEC_MASK_HIGH_AR_0,
+ 	mmDMA_IF_E_S_DOWN_CH0_RANGE_SEC_MASK_HIGH_AR_0,
+@@ -12841,50 +12866,44 @@ static void gaudi_init_range_registers_lbw(struct hl_device *hdev)
+ 	u32 lbw_rng_end[GAUDI_NUMBER_OF_LBW_RANGES];
+ 	int i, j;
+ 
+-	lbw_rng_start[0]  = (0xFBFE0000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[0]    = (0xFBFFF000 & 0x3FFFFFF) + 1;
++	lbw_rng_start[0]  = (0xFC0E8000 & 0x3FFFFFF) - 1; /* 0x000E7FFF */
++	lbw_rng_end[0]    = (0xFC11FFFF & 0x3FFFFFF) + 1; /* 0x00120000 */
+ 
+-	lbw_rng_start[1]  = (0xFC0E8000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[1]    = (0xFC120000 & 0x3FFFFFF) + 1;
++	lbw_rng_start[1]  = (0xFC1E8000 & 0x3FFFFFF) - 1; /* 0x001E7FFF */
++	lbw_rng_end[1]    = (0xFC48FFFF & 0x3FFFFFF) + 1; /* 0x00490000 */
+ 
+-	lbw_rng_start[2]  = (0xFC1E8000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[2]    = (0xFC48FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[2]  = (0xFC600000 & 0x3FFFFFF) - 1; /* 0x005FFFFF */
++	lbw_rng_end[2]    = (0xFCC48FFF & 0x3FFFFFF) + 1; /* 0x00C49000 */
+ 
+-	lbw_rng_start[3]  = (0xFC600000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[3]    = (0xFCC48FFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[3]  = (0xFCC4A000 & 0x3FFFFFF) - 1; /* 0x00C49FFF */
++	lbw_rng_end[3]    = (0xFCCDFFFF & 0x3FFFFFF) + 1; /* 0x00CE0000 */
+ 
+-	lbw_rng_start[4]  = (0xFCC4A000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[4]    = (0xFCCDFFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[4]  = (0xFCCE4000 & 0x3FFFFFF) - 1; /* 0x00CE3FFF */
++	lbw_rng_end[4]    = (0xFCD1FFFF & 0x3FFFFFF) + 1; /* 0x00D20000 */
+ 
+-	lbw_rng_start[5]  = (0xFCCE4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[5]    = (0xFCD1FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[5]  = (0xFCD24000 & 0x3FFFFFF) - 1; /* 0x00D23FFF */
++	lbw_rng_end[5]    = (0xFCD5FFFF & 0x3FFFFFF) + 1; /* 0x00D60000 */
+ 
+-	lbw_rng_start[6]  = (0xFCD24000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[6]    = (0xFCD5FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[6]  = (0xFCD64000 & 0x3FFFFFF) - 1; /* 0x00D63FFF */
++	lbw_rng_end[6]    = (0xFCD9FFFF & 0x3FFFFFF) + 1; /* 0x00DA0000 */
+ 
+-	lbw_rng_start[7]  = (0xFCD64000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[7]    = (0xFCD9FFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[7]  = (0xFCDA4000 & 0x3FFFFFF) - 1; /* 0x00DA3FFF */
++	lbw_rng_end[7]    = (0xFCDDFFFF & 0x3FFFFFF) + 1; /* 0x00DE0000 */
+ 
+-	lbw_rng_start[8]  = (0xFCDA4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[8]    = (0xFCDDFFFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[8]  = (0xFCDE4000 & 0x3FFFFFF) - 1; /* 0x00DE3FFF */
++	lbw_rng_end[8]    = (0xFCE05FFF & 0x3FFFFFF) + 1; /* 0x00E06000 */
+ 
+-	lbw_rng_start[9]  = (0xFCDE4000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[9]    = (0xFCE05FFF & 0x3FFFFFF) + 1;
++	lbw_rng_start[9]  = (0xFCFC9000 & 0x3FFFFFF) - 1; /* 0x00FC8FFF */
++	lbw_rng_end[9]    = (0xFFFFFFFE & 0x3FFFFFF) + 1; /* 0x03FFFFFF */
+ 
+-	lbw_rng_start[10]  = (0xFEC43000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[10]    = (0xFEC43FFF & 0x3FFFFFF) + 1;
+-
+-	lbw_rng_start[11] = (0xFE484000 & 0x3FFFFFF) - 1;
+-	lbw_rng_end[11]   = (0xFE484FFF & 0x3FFFFFF) + 1;
+-
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_lbw_hit_aw_regs[i],
+ 				(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
+ 		WREG32(gaudi_rr_lbw_hit_ar_regs[i],
+ 				(1 << GAUDI_NUMBER_OF_LBW_RANGES) - 1);
+ 	}
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++)
++	for (i = 0 ; i < GAUDI_NUMBER_OF_LBW_RR_REGS ; i++)
+ 		for (j = 0 ; j < GAUDI_NUMBER_OF_LBW_RANGES ; j++) {
+ 			WREG32(gaudi_rr_lbw_min_aw_regs[i] + (j << 2),
+ 							lbw_rng_start[j]);
+@@ -12931,12 +12950,12 @@ static void gaudi_init_range_registers_hbw(struct hl_device *hdev)
+ 	 * 6th range is the host
+ 	 */
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_hbw_hit_aw_regs[i], 0x1F);
+ 		WREG32(gaudi_rr_hbw_hit_ar_regs[i], 0x1D);
+ 	}
+ 
+-	for (i = 0 ; i < GAUDI_NUMBER_OF_RR_REGS ; i++) {
++	for (i = 0 ; i < GAUDI_NUMBER_OF_HBW_RR_REGS ; i++) {
+ 		WREG32(gaudi_rr_hbw_base_low_aw_regs[i], dram_addr_lo);
+ 		WREG32(gaudi_rr_hbw_base_low_ar_regs[i], dram_addr_lo);
+ 
+diff --git a/drivers/misc/habanalabs/include/gaudi/asic_reg/gaudi_regs.h b/drivers/misc/habanalabs/include/gaudi/asic_reg/gaudi_regs.h
+index 5bb54b34a8aeb..907644202b0c3 100644
+--- a/drivers/misc/habanalabs/include/gaudi/asic_reg/gaudi_regs.h
++++ b/drivers/misc/habanalabs/include/gaudi/asic_reg/gaudi_regs.h
+@@ -305,6 +305,8 @@
+ #define mmPCIE_AUX_FLR_CTRL                                          0xC07394
+ #define mmPCIE_AUX_DBI                                               0xC07490
+ 
++#define mmPCIE_CORE_MSI_REQ                                          0xC04100
++
+ #define mmPSOC_PCI_PLL_NR                                            0xC72100
+ #define mmSRAM_W_PLL_NR                                              0x4C8100
+ #define mmPSOC_HBM_PLL_NR                                            0xC74100
+diff --git a/drivers/net/phy/mdio_device.c b/drivers/net/phy/mdio_device.c
+index c94cb5382dc92..250742ffdfd91 100644
+--- a/drivers/net/phy/mdio_device.c
++++ b/drivers/net/phy/mdio_device.c
+@@ -179,6 +179,16 @@ static int mdio_remove(struct device *dev)
+ 	return 0;
+ }
+ 
++static void mdio_shutdown(struct device *dev)
++{
++	struct mdio_device *mdiodev = to_mdio_device(dev);
++	struct device_driver *drv = mdiodev->dev.driver;
++	struct mdio_driver *mdiodrv = to_mdio_driver(drv);
++
++	if (mdiodrv->shutdown)
++		mdiodrv->shutdown(mdiodev);
++}
++
+ /**
+  * mdio_driver_register - register an mdio_driver with the MDIO layer
+  * @drv: new mdio_driver to register
+@@ -193,6 +203,7 @@ int mdio_driver_register(struct mdio_driver *drv)
+ 	mdiodrv->driver.bus = &mdio_bus_type;
+ 	mdiodrv->driver.probe = mdio_probe;
+ 	mdiodrv->driver.remove = mdio_remove;
++	mdiodrv->driver.shutdown = mdio_shutdown;
+ 
+ 	retval = driver_register(&mdiodrv->driver);
+ 	if (retval) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index cedba56fc448e..ef925895739f0 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -7455,23 +7455,18 @@ static s32 brcmf_translate_country_code(struct brcmf_pub *drvr, char alpha2[2],
+ 	s32 found_index;
+ 	int i;
+ 
++	country_codes = drvr->settings->country_codes;
++	if (!country_codes) {
++		brcmf_dbg(TRACE, "No country codes configured for device\n");
++		return -EINVAL;
++	}
++
+ 	if ((alpha2[0] == ccreq->country_abbrev[0]) &&
+ 	    (alpha2[1] == ccreq->country_abbrev[1])) {
+ 		brcmf_dbg(TRACE, "Country code already set\n");
+ 		return -EAGAIN;
+ 	}
+ 
+-	country_codes = drvr->settings->country_codes;
+-	if (!country_codes) {
+-		brcmf_dbg(TRACE, "No country codes configured for device, using ISO3166 code and 0 rev\n");
+-		memset(ccreq, 0, sizeof(*ccreq));
+-		ccreq->country_abbrev[0] = alpha2[0];
+-		ccreq->country_abbrev[1] = alpha2[1];
+-		ccreq->ccode[0] = alpha2[0];
+-		ccreq->ccode[1] = alpha2[1];
+-		return 0;
+-	}
+-
+ 	found_index = -1;
+ 	for (i = 0; i < country_codes->table_size; i++) {
+ 		cc = &country_codes->table[i];
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 39a01c2a3058d..32d5bc4919d8c 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -499,7 +499,7 @@ check_frags:
+ 				 * the header's copy failed, and they are
+ 				 * sharing a slot, send an error
+ 				 */
+-				if (i == 0 && sharedslot)
++				if (i == 0 && !first_shinfo && sharedslot)
+ 					xenvif_idx_release(queue, pending_idx,
+ 							   XEN_NETIF_RSP_ERROR);
+ 				else
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index b08a61ca283f2..6ebe68396712c 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2487,6 +2487,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ 	 */
+ 	if (ctrl->ctrl.queue_count > 1) {
+ 		nvme_stop_queues(&ctrl->ctrl);
++		nvme_sync_io_queues(&ctrl->ctrl);
+ 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ 				nvme_fc_terminate_exchange, &ctrl->ctrl);
+ 		blk_mq_tagset_wait_completed_request(&ctrl->tag_set);
+@@ -2510,6 +2511,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ 	 * clean up the admin queue. Same thing as above.
+ 	 */
+ 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
++	blk_sync_queue(ctrl->ctrl.admin_q);
+ 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
+ 				nvme_fc_terminate_exchange, &ctrl->ctrl);
+ 	blk_mq_tagset_wait_completed_request(&ctrl->admin_tag_set);
+@@ -2951,14 +2953,6 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
+ 	if (ctrl->ctrl.queue_count == 1)
+ 		return 0;
+ 
+-	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+-	if (ret)
+-		goto out_free_io_queues;
+-
+-	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
+-	if (ret)
+-		goto out_delete_hw_queues;
+-
+ 	if (prior_ioq_cnt != nr_io_queues) {
+ 		dev_info(ctrl->ctrl.device,
+ 			"reconnect: revising io queue count from %d to %d\n",
+@@ -2968,6 +2962,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
+ 		nvme_unfreeze(&ctrl->ctrl);
+ 	}
+ 
++	ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
++	if (ret)
++		goto out_free_io_queues;
++
++	ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
++	if (ret)
++		goto out_delete_hw_queues;
++
+ 	return 0;
+ 
+ out_delete_hw_queues:
+diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c
+index 7f3a03f937f66..d53634c8a6e09 100644
+--- a/drivers/platform/x86/gigabyte-wmi.c
++++ b/drivers/platform/x86/gigabyte-wmi.c
+@@ -144,6 +144,7 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
++	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550I AORUS PRO AX"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M AORUS PRO-P"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 0e1451b1d9c6c..033f797861d8a 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -100,10 +100,10 @@ static const struct ts_dmi_data chuwi_hi10_air_data = {
+ };
+ 
+ static const struct property_entry chuwi_hi10_plus_props[] = {
+-	PROPERTY_ENTRY_U32("touchscreen-min-x", 0),
+-	PROPERTY_ENTRY_U32("touchscreen-min-y", 5),
+-	PROPERTY_ENTRY_U32("touchscreen-size-x", 1914),
+-	PROPERTY_ENTRY_U32("touchscreen-size-y", 1283),
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 12),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 10),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1908),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1270),
+ 	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10plus.fw"),
+ 	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+ 	PROPERTY_ENTRY_BOOL("silead,home-button"),
+@@ -111,6 +111,15 @@ static const struct property_entry chuwi_hi10_plus_props[] = {
+ };
+ 
+ static const struct ts_dmi_data chuwi_hi10_plus_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl1680-chuwi-hi10plus.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 34056,
++		.sha256	= { 0xfd, 0x0a, 0x08, 0x08, 0x3c, 0xa6, 0x34, 0x4e,
++			    0x2c, 0x49, 0x9c, 0xcd, 0x7d, 0x44, 0x9d, 0x38,
++			    0x10, 0x68, 0xb5, 0xbd, 0xb7, 0x2a, 0x63, 0xb5,
++			    0x67, 0x0b, 0x96, 0xbd, 0x89, 0x67, 0x85, 0x09 },
++	},
+ 	.acpi_name      = "MSSL0017:00",
+ 	.properties     = chuwi_hi10_plus_props,
+ };
+@@ -141,6 +150,33 @@ static const struct ts_dmi_data chuwi_hi10_pro_data = {
+ 	.properties     = chuwi_hi10_pro_props,
+ };
+ 
++static const struct property_entry chuwi_hibook_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 30),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 4),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1892),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1276),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hibook.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data chuwi_hibook_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl1680-chuwi-hibook.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 40392,
++		.sha256	= { 0xf7, 0xc0, 0xe8, 0x5a, 0x6c, 0xf2, 0xeb, 0x8d,
++			    0x12, 0xc4, 0x45, 0xbf, 0x55, 0x13, 0x4c, 0x1a,
++			    0x13, 0x04, 0x31, 0x08, 0x65, 0x73, 0xf7, 0xa8,
++			    0x1b, 0x7d, 0x59, 0xc9, 0xe6, 0x97, 0xf7, 0x38 },
++	},
++	.acpi_name      = "MSSL0017:00",
++	.properties     = chuwi_hibook_props,
++};
++
+ static const struct property_entry chuwi_vi8_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
+ 	PROPERTY_ENTRY_U32("touchscreen-min-y", 6),
+@@ -979,6 +1015,16 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		},
+ 	},
++	{
++		/* Chuwi HiBook (CWI514) */
++		.driver_data = (void *)&chuwi_hibook_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
++			DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			/* Above matches are too generic, add bios-date match */
++			DMI_MATCH(DMI_BIOS_DATE, "05/07/2016"),
++		},
++	},
+ 	{
+ 		/* Chuwi Vi8 (CWI506) */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
+index e0d798d6baee3..f1d6fcfe12f0d 100644
+--- a/drivers/scsi/elx/efct/efct_lio.c
++++ b/drivers/scsi/elx/efct/efct_lio.c
+@@ -880,11 +880,11 @@ efct_lio_npiv_drop_nport(struct se_wwn *wwn)
+ 	struct efct *efct = lio_vport->efct;
+ 	unsigned long flags = 0;
+ 
+-	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
+-
+ 	if (lio_vport->fc_vport)
+ 		fc_vport_terminate(lio_vport->fc_vport);
+ 
++	spin_lock_irqsave(&efct->tgt_efct.efct_lio_lock, flags);
++
+ 	list_for_each_entry_safe(vport, next_vport, &efct->tgt_efct.vport_list,
+ 				 list_entry) {
+ 		if (vport->lio_vport == lio_vport) {
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index b8d55af763f92..134c7a8145efa 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3441,15 +3441,16 @@ static int sd_probe(struct device *dev)
+ 	}
+ 
+ 	device_initialize(&sdkp->dev);
+-	sdkp->dev.parent = dev;
++	sdkp->dev.parent = get_device(dev);
+ 	sdkp->dev.class = &sd_disk_class;
+ 	dev_set_name(&sdkp->dev, "%s", dev_name(dev));
+ 
+ 	error = device_add(&sdkp->dev);
+-	if (error)
+-		goto out_free_index;
++	if (error) {
++		put_device(&sdkp->dev);
++		goto out;
++	}
+ 
+-	get_device(dev);
+ 	dev_set_drvdata(dev, sdkp);
+ 
+ 	gd->major = sd_major((index & 0xf0) >> 4);
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index c2afba2a5414d..43e682297fd5f 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -87,9 +87,16 @@ static int ses_recv_diag(struct scsi_device *sdev, int page_code,
+ 		0
+ 	};
+ 	unsigned char recv_page_code;
++	unsigned int retries = SES_RETRIES;
++	struct scsi_sense_hdr sshdr;
++
++	do {
++		ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
++				       &sshdr, SES_TIMEOUT, 1, NULL);
++	} while (ret > 0 && --retries && scsi_sense_valid(&sshdr) &&
++		 (sshdr.sense_key == NOT_READY ||
++		  (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
+ 
+-	ret =  scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
+-				NULL, SES_TIMEOUT, SES_RETRIES, NULL);
+ 	if (unlikely(ret))
+ 		return ret;
+ 
+@@ -121,9 +128,16 @@ static int ses_send_diag(struct scsi_device *sdev, int page_code,
+ 		bufflen & 0xff,
+ 		0
+ 	};
++	struct scsi_sense_hdr sshdr;
++	unsigned int retries = SES_RETRIES;
++
++	do {
++		result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
++					  &sshdr, SES_TIMEOUT, 1, NULL);
++	} while (result > 0 && --retries && scsi_sense_valid(&sshdr) &&
++		 (sshdr.sense_key == NOT_READY ||
++		  (sshdr.sense_key == UNIT_ATTENTION && sshdr.asc == 0x29)));
+ 
+-	result = scsi_execute_req(sdev, cmd, DMA_TO_DEVICE, buf, bufflen,
+-				  NULL, SES_TIMEOUT, SES_RETRIES, NULL);
+ 	if (result)
+ 		sdev_printk(KERN_ERR, sdev, "SEND DIAGNOSTIC result: %8x\n",
+ 			    result);
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 540861ca2ba37..553b6b9d02222 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -600,6 +600,12 @@ static int rockchip_spi_transfer_one(
+ 	int ret;
+ 	bool use_dma;
+ 
++	/* Zero length transfers won't trigger an interrupt on completion */
++	if (!xfer->len) {
++		spi_finalize_current_transfer(ctlr);
++		return 1;
++	}
++
+ 	WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
+ 		(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY));
+ 
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index 4c7ebd1d3f9c9..b1162e566a707 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -417,7 +417,7 @@ static irqreturn_t tsens_critical_irq_thread(int irq, void *data)
+ 		const struct tsens_sensor *s = &priv->sensor[i];
+ 		u32 hw_id = s->hw_id;
+ 
+-		if (IS_ERR(s->tzd))
++		if (!s->tzd)
+ 			continue;
+ 		if (!tsens_threshold_violated(priv, hw_id, &d))
+ 			continue;
+@@ -467,7 +467,7 @@ static irqreturn_t tsens_irq_thread(int irq, void *data)
+ 		const struct tsens_sensor *s = &priv->sensor[i];
+ 		u32 hw_id = s->hw_id;
+ 
+-		if (IS_ERR(s->tzd))
++		if (!s->tzd)
+ 			continue;
+ 		if (!tsens_threshold_violated(priv, hw_id, &d))
+ 			continue;
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 2a7828971d056..a215ec9e172e6 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -5191,6 +5191,10 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
+ 	hcd->has_tt = 1;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res) {
++		retval = -EINVAL;
++		goto error1;
++	}
+ 	hcd->rsrc_start = res->start;
+ 	hcd->rsrc_len = resource_size(res);
+ 
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index a3e7be96527d7..23fd09a9bbaf1 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -396,6 +396,14 @@ static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
+ 			map->unmap_ops[offset+i].handle,
+ 			map->unmap_ops[offset+i].status);
+ 		map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
++		if (use_ptemod) {
++			if (map->kunmap_ops[offset+i].status)
++				err = -EINVAL;
++			pr_debug("kunmap handle=%u st=%d\n",
++				 map->kunmap_ops[offset+i].handle,
++				 map->kunmap_ops[offset+i].status);
++			map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
++		}
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index dbb18dc956f34..de4f55154d498 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -232,10 +232,11 @@ retry:
+ 	/*
+ 	 * Get IO TLB memory from any location.
+ 	 */
+-	start = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
++	start = memblock_alloc(PAGE_ALIGN(bytes),
++			       IO_TLB_SEGSIZE << IO_TLB_SHIFT);
+ 	if (!start)
+-		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+-		      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
++		panic("%s: Failed to allocate %lu bytes\n",
++		      __func__, PAGE_ALIGN(bytes));
+ 
+ 	/*
+ 	 * And replace that memory with pages under 4GB.
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 54ee54ae36bc8..4579bbda46346 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1760,6 +1760,10 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 		goto error;
+ 	}
+ 
++	ret = afs_validate(vnode, op->key);
++	if (ret < 0)
++		goto error_op;
++
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	afs_op_set_vnode(op, 1, vnode);
+ 	op->file[0].dv_delta = 1;
+@@ -1773,6 +1777,8 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 	op->create.reason	= afs_edit_dir_for_link;
+ 	return afs_do_sync_operation(op);
+ 
++error_op:
++	afs_put_operation(op);
+ error:
+ 	d_drop(dentry);
+ 	_leave(" = %d", ret);
+@@ -1957,6 +1963,11 @@ static int afs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
+ 	if (IS_ERR(op))
+ 		return PTR_ERR(op);
+ 
++	ret = afs_validate(vnode, op->key);
++	op->error = ret;
++	if (ret < 0)
++		goto error;
++
+ 	afs_op_set_vnode(op, 0, orig_dvnode);
+ 	afs_op_set_vnode(op, 1, new_dvnode); /* May be same as orig_dvnode */
+ 	op->file[0].dv_delta = 1;
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index db035ae2a1345..5efa1cf2a20a4 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -24,12 +24,13 @@ static void afs_invalidatepage(struct page *page, unsigned int offset,
+ static int afs_releasepage(struct page *page, gfp_t gfp_flags);
+ 
+ static void afs_readahead(struct readahead_control *ractl);
++static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
+ 
+ const struct file_operations afs_file_operations = {
+ 	.open		= afs_open,
+ 	.release	= afs_release,
+ 	.llseek		= generic_file_llseek,
+-	.read_iter	= generic_file_read_iter,
++	.read_iter	= afs_file_read_iter,
+ 	.write_iter	= afs_file_write,
+ 	.mmap		= afs_file_mmap,
+ 	.splice_read	= generic_file_splice_read,
+@@ -502,3 +503,16 @@ static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 		vma->vm_ops = &afs_vm_ops;
+ 	return ret;
+ }
++
++static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++	struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
++	struct afs_file *af = iocb->ki_filp->private_data;
++	int ret;
++
++	ret = afs_validate(vnode, af->key);
++	if (ret < 0)
++		return ret;
++
++	return generic_file_read_iter(iocb, iter);
++}
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index e86f5a245514d..2dfe3b3a53d69 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -807,6 +807,7 @@ int afs_writepages(struct address_space *mapping,
+ ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ {
+ 	struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
++	struct afs_file *af = iocb->ki_filp->private_data;
+ 	ssize_t result;
+ 	size_t count = iov_iter_count(from);
+ 
+@@ -822,6 +823,10 @@ ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (!count)
+ 		return 0;
+ 
++	result = afs_validate(vnode, af->key);
++	if (result < 0)
++		return result;
++
+ 	result = generic_file_write_iter(iocb, from);
+ 
+ 	_leave(" = %zd", result);
+@@ -835,13 +840,18 @@ ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
+  */
+ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+-	struct inode *inode = file_inode(file);
+-	struct afs_vnode *vnode = AFS_FS_I(inode);
++	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
++	struct afs_file *af = file->private_data;
++	int ret;
+ 
+ 	_enter("{%llx:%llu},{n=%pD},%d",
+ 	       vnode->fid.vid, vnode->fid.vnode, file,
+ 	       datasync);
+ 
++	ret = afs_validate(vnode, af->key);
++	if (ret < 0)
++		return ret;
++
+ 	return file_write_and_wait_range(file, start, end);
+ }
+ 
+@@ -855,11 +865,14 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
+ 	struct file *file = vmf->vma->vm_file;
+ 	struct inode *inode = file_inode(file);
+ 	struct afs_vnode *vnode = AFS_FS_I(inode);
++	struct afs_file *af = file->private_data;
+ 	unsigned long priv;
+ 	vm_fault_t ret = VM_FAULT_RETRY;
+ 
+ 	_enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, page->index);
+ 
++	afs_validate(vnode, af->key);
++
+ 	sb_start_pagefault(inode->i_sb);
+ 
+ 	/* Wait for the page to be written to the cache before we allow it to
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index df6631eefc652..5e8a56113b23d 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -666,7 +666,18 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_inode *inode, struct bio *bio,
+ 
+ 		if (!ordered) {
+ 			ordered = btrfs_lookup_ordered_extent(inode, offset);
+-			BUG_ON(!ordered); /* Logic error */
++			/*
++			 * The bio range is not covered by any ordered extent,
++			 * must be a code logic error.
++			 */
++			if (unlikely(!ordered)) {
++				WARN(1, KERN_WARNING
++			"no ordered extent for root %llu ino %llu offset %llu\n",
++				     inode->root->root_key.objectid,
++				     btrfs_ino(inode), offset);
++				kvfree(sums);
++				return BLK_STS_IOERR;
++			}
+ 		}
+ 
+ 		nr_sectors = BTRFS_BYTES_TO_BLKS(fs_info,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 682416d4edefa..19c780242e127 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1149,6 +1149,19 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	atomic_set(&device->dev_stats_ccnt, 0);
+ 	extent_io_tree_release(&device->alloc_state);
+ 
++	/*
++	 * Reset the flush error record. We might have a transient flush error
++	 * in this mount, and if so we aborted the current transaction and set
++	 * the fs to an error state, guaranteeing no super blocks can be further
++	 * committed. However that error might be transient and if we unmount the
++	 * filesystem and mount it again, we should allow the mount to succeed
++	 * (btrfs_check_rw_degradable() should not fail) - if after mounting the
++	 * filesystem again we still get flush errors, then we will again abort
++	 * any transaction and set the error state, guaranteeing no commits of
++	 * unsafe super blocks.
++	 */
++	device->last_flush_error = 0;
++
+ 	/* Verify the device is back in a pristine state  */
+ 	ASSERT(!test_bit(BTRFS_DEV_STATE_FLUSH_SENT, &device->dev_state));
+ 	ASSERT(!test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state));
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index b6d2e35919278..e1739d0135b44 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2398,7 +2398,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	buf->sd.OffsetDacl = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+ 	/* Ship the ACL for now. we will copy it into buf later. */
+ 	aclptr = ptr;
+-	ptr += sizeof(struct cifs_acl);
++	ptr += sizeof(struct smb3_acl);
+ 
+ 	/* create one ACE to hold the mode embedded in reserved special SID */
+ 	acelen = setup_special_mode_ACE((struct cifs_ace *)ptr, (__u64)mode);
+@@ -2423,7 +2423,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	acl.AclRevision = ACL_REVISION; /* See 2.4.4.1 of MS-DTYP */
+ 	acl.AclSize = cpu_to_le16(acl_size);
+ 	acl.AceCount = cpu_to_le16(ace_count);
+-	memcpy(aclptr, &acl, sizeof(struct cifs_acl));
++	memcpy(aclptr, &acl, sizeof(struct smb3_acl));
+ 
+ 	buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+ 	*len = roundup(ptr - (__u8 *)buf, 8);
+diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
+index 1f3f4326bf3ce..c17ccc19b938e 100644
+--- a/fs/ext2/balloc.c
++++ b/fs/ext2/balloc.c
+@@ -48,10 +48,9 @@ struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
+ 	struct ext2_sb_info *sbi = EXT2_SB(sb);
+ 
+ 	if (block_group >= sbi->s_groups_count) {
+-		ext2_error (sb, "ext2_get_group_desc",
+-			    "block_group >= groups_count - "
+-			    "block_group = %d, groups_count = %lu",
+-			    block_group, sbi->s_groups_count);
++		WARN(1, "block_group >= groups_count - "
++		     "block_group = %d, groups_count = %lu",
++		     block_group, sbi->s_groups_count);
+ 
+ 		return NULL;
+ 	}
+@@ -59,10 +58,9 @@ struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
+ 	group_desc = block_group >> EXT2_DESC_PER_BLOCK_BITS(sb);
+ 	offset = block_group & (EXT2_DESC_PER_BLOCK(sb) - 1);
+ 	if (!sbi->s_group_desc[group_desc]) {
+-		ext2_error (sb, "ext2_get_group_desc",
+-			    "Group descriptor not loaded - "
+-			    "block_group = %d, group_desc = %lu, desc = %lu",
+-			     block_group, group_desc, offset);
++		WARN(1, "Group descriptor not loaded - "
++		     "block_group = %d, group_desc = %lu, desc = %lu",
++		      block_group, group_desc, offset);
+ 		return NULL;
+ 	}
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 699a08d724c24..675216f7022da 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8648,8 +8648,10 @@ static void io_destroy_buffers(struct io_ring_ctx *ctx)
+ 	struct io_buffer *buf;
+ 	unsigned long index;
+ 
+-	xa_for_each(&ctx->io_buffers, index, buf)
++	xa_for_each(&ctx->io_buffers, index, buf) {
+ 		__io_remove_buffers(ctx, buf, index, -1U);
++		cond_resched();
++	}
+ }
+ 
+ static void io_req_cache_free(struct list_head *list, struct task_struct *tsk)
+@@ -9145,8 +9147,10 @@ static void io_uring_clean_tctx(struct io_uring_task *tctx)
+ 	struct io_tctx_node *node;
+ 	unsigned long index;
+ 
+-	xa_for_each(&tctx->xa, index, node)
++	xa_for_each(&tctx->xa, index, node) {
+ 		io_uring_del_tctx_node(index);
++		cond_resched();
++	}
+ 	if (wq) {
+ 		/*
+ 		 * Must be after io_uring_del_task_file() (removes nodes under
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 3d805f5b1f5d2..1c33a52558930 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -3570,7 +3570,7 @@ static struct nfsd4_conn *__nfsd4_find_conn(struct svc_xprt *xpt, struct nfsd4_s
+ }
+ 
+ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
+-				struct nfsd4_session *session, u32 req)
++		struct nfsd4_session *session, u32 req, struct nfsd4_conn **conn)
+ {
+ 	struct nfs4_client *clp = session->se_client;
+ 	struct svc_xprt *xpt = rqst->rq_xprt;
+@@ -3593,6 +3593,8 @@ static __be32 nfsd4_match_existing_connection(struct svc_rqst *rqst,
+ 	else
+ 		status = nfserr_inval;
+ 	spin_unlock(&clp->cl_lock);
++	if (status == nfs_ok && conn)
++		*conn = c;
+ 	return status;
+ }
+ 
+@@ -3617,8 +3619,16 @@ __be32 nfsd4_bind_conn_to_session(struct svc_rqst *rqstp,
+ 	status = nfserr_wrong_cred;
+ 	if (!nfsd4_mach_creds_match(session->se_client, rqstp))
+ 		goto out;
+-	status = nfsd4_match_existing_connection(rqstp, session, bcts->dir);
+-	if (status == nfs_ok || status == nfserr_inval)
++	status = nfsd4_match_existing_connection(rqstp, session,
++			bcts->dir, &conn);
++	if (status == nfs_ok) {
++		if (bcts->dir == NFS4_CDFC4_FORE_OR_BOTH ||
++				bcts->dir == NFS4_CDFC4_BACK)
++			conn->cn_flags |= NFS4_CDFC4_BACK;
++		nfsd4_probe_callback(session->se_client);
++		goto out;
++	}
++	if (status == nfserr_inval)
+ 		goto out;
+ 	status = nfsd4_map_bcts_dir(&bcts->dir);
+ 	if (status)
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 3fcd24236793e..cb95d3f3337d5 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -422,6 +422,7 @@ enum {
+ 	ATA_HORKAGE_NOTRIM	= (1 << 24),	/* don't use TRIM */
+ 	ATA_HORKAGE_MAX_SEC_1024 = (1 << 25),	/* Limit max sects to 1024 */
+ 	ATA_HORKAGE_MAX_TRIM_128M = (1 << 26),	/* Limit max trim size to 128M */
++	ATA_HORKAGE_NO_NCQ_ON_ATI = (1 << 27),	/* Disable NCQ on ATI chipset */
+ 
+ 	 /* DMA mask for user DMA control: User visible values; DO NOT
+ 	    renumber */
+diff --git a/include/linux/mdio.h b/include/linux/mdio.h
+index ffb787d5ebde3..5e6dc38f418e4 100644
+--- a/include/linux/mdio.h
++++ b/include/linux/mdio.h
+@@ -80,6 +80,9 @@ struct mdio_driver {
+ 
+ 	/* Clears up any memory if needed */
+ 	void (*remove)(struct mdio_device *mdiodev);
++
++	/* Quiesces the device on system shutdown, turns off interrupts etc */
++	void (*shutdown)(struct mdio_device *mdiodev);
+ };
+ 
+ static inline struct mdio_driver *
+diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
+index 801c415bac59d..b9e94c5e70970 100644
+--- a/scripts/Makefile.kasan
++++ b/scripts/Makefile.kasan
+@@ -33,10 +33,11 @@ else
+ 	CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \
+ 	 $(call cc-param,asan-globals=1) \
+ 	 $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \
+-	 $(call cc-param,asan-stack=$(stack_enable)) \
+ 	 $(call cc-param,asan-instrument-allocas=1)
+ endif
+ 
++CFLAGS_KASAN += $(call cc-param,asan-stack=$(stack_enable))
++
+ endif # CONFIG_KASAN_GENERIC
+ 
+ ifdef CONFIG_KASAN_SW_TAGS
+diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
+index c41f95815480d..797699462cd8e 100644
+--- a/tools/arch/x86/lib/insn.c
++++ b/tools/arch/x86/lib/insn.c
+@@ -37,10 +37,10 @@
+ 	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
+ 
+ #define __get_next(t, insn)	\
+-	({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
++	({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ 
+ #define __peek_nbyte_next(t, insn, n)	\
+-	({ t r = *(t*)((insn)->next_byte + n); leXX_to_cpu(t, r); })
++	({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
+ 
+ #define get_next(t, insn)	\
+ 	({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
+diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
+index d79be15dd3d20..451fed5ce8e72 100644
+--- a/tools/testing/selftests/kvm/include/test_util.h
++++ b/tools/testing/selftests/kvm/include/test_util.h
+@@ -95,6 +95,8 @@ struct vm_mem_backing_src_alias {
+ 	uint32_t flag;
+ };
+ 
++#define MIN_RUN_DELAY_NS	200000UL
++
+ bool thp_configured(void);
+ size_t get_trans_hugepagesz(void);
+ size_t get_def_hugetlb_pagesz(void);
+@@ -102,6 +104,7 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
+ size_t get_backing_src_pagesz(uint32_t i);
+ void backing_src_help(void);
+ enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
++long get_run_delay(void);
+ 
+ /*
+  * Whether or not the given source type is shared memory (as opposed to
+diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
+index af1031fed97f7..a9107bfae4021 100644
+--- a/tools/testing/selftests/kvm/lib/test_util.c
++++ b/tools/testing/selftests/kvm/lib/test_util.c
+@@ -11,6 +11,7 @@
+ #include <stdlib.h>
+ #include <time.h>
+ #include <sys/stat.h>
++#include <sys/syscall.h>
+ #include <linux/mman.h>
+ #include "linux/kernel.h"
+ 
+@@ -129,13 +130,16 @@ size_t get_trans_hugepagesz(void)
+ {
+ 	size_t size;
+ 	FILE *f;
++	int ret;
+ 
+ 	TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
+ 
+ 	f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
+ 	TEST_ASSERT(f != NULL, "Error in opening transparent_hugepage/hpage_pmd_size");
+ 
+-	fscanf(f, "%ld", &size);
++	ret = fscanf(f, "%ld", &size);
++	ret = fscanf(f, "%ld", &size);
++	TEST_ASSERT(ret < 1, "Error reading transparent_hugepage/hpage_pmd_size");
+ 	fclose(f);
+ 
+ 	return size;
+@@ -300,3 +304,19 @@ enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name)
+ 	TEST_FAIL("Unknown backing src type: %s", type_name);
+ 	return -1;
+ }
++
++long get_run_delay(void)
++{
++	char path[64];
++	long val[2];
++	FILE *fp;
++
++	sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
++	fp = fopen(path, "r");
++	/* Return MIN_RUN_DELAY_NS upon failure just to be safe */
++	if (fscanf(fp, "%ld %ld ", &val[0], &val[1]) < 2)
++		val[1] = MIN_RUN_DELAY_NS;
++	fclose(fp);
++
++	return val[1];
++}
+diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
+index ecec30865a74f..62f2eb9ee3d56 100644
+--- a/tools/testing/selftests/kvm/steal_time.c
++++ b/tools/testing/selftests/kvm/steal_time.c
+@@ -10,7 +10,6 @@
+ #include <sched.h>
+ #include <pthread.h>
+ #include <linux/kernel.h>
+-#include <sys/syscall.h>
+ #include <asm/kvm.h>
+ #include <asm/kvm_para.h>
+ 
+@@ -20,7 +19,6 @@
+ 
+ #define NR_VCPUS		4
+ #define ST_GPA_BASE		(1 << 30)
+-#define MIN_RUN_DELAY_NS	200000UL
+ 
+ static void *st_gva[NR_VCPUS];
+ static uint64_t guest_stolen_time[NR_VCPUS];
+@@ -118,12 +116,12 @@ struct st_time {
+ 	uint64_t st_time;
+ };
+ 
+-static int64_t smccc(uint32_t func, uint32_t arg)
++static int64_t smccc(uint32_t func, uint64_t arg)
+ {
+ 	unsigned long ret;
+ 
+ 	asm volatile(
+-		"mov	x0, %1\n"
++		"mov	w0, %w1\n"
+ 		"mov	x1, %2\n"
+ 		"hvc	#0\n"
+ 		"mov	%0, x0\n"
+@@ -217,20 +215,6 @@ static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpuid)
+ 
+ #endif
+ 
+-static long get_run_delay(void)
+-{
+-	char path[64];
+-	long val[2];
+-	FILE *fp;
+-
+-	sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
+-	fp = fopen(path, "r");
+-	fscanf(fp, "%ld %ld ", &val[0], &val[1]);
+-	fclose(fp);
+-
+-	return val[1];
+-}
+-
+ static void *do_steal_time(void *arg)
+ {
+ 	struct timespec ts, stop;
+diff --git a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+index e6480fd5c4bdc..8039e1eff9388 100644
+--- a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
++++ b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+@@ -82,7 +82,8 @@ int get_warnings_count(void)
+ 	FILE *f;
+ 
+ 	f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
+-	fscanf(f, "%d", &warnings);
++	if (fscanf(f, "%d", &warnings) < 1)
++		warnings = 0;
+ 	fclose(f);
+ 
+ 	return warnings;
+diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+index 117bf49a3d795..eda0d2a51224b 100644
+--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
++++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+@@ -14,7 +14,6 @@
+ #include <stdint.h>
+ #include <time.h>
+ #include <sched.h>
+-#include <sys/syscall.h>
+ 
+ #define VCPU_ID		5
+ 
+@@ -98,20 +97,6 @@ static void guest_code(void)
+ 	GUEST_DONE();
+ }
+ 
+-static long get_run_delay(void)
+-{
+-        char path[64];
+-        long val[2];
+-        FILE *fp;
+-
+-        sprintf(path, "/proc/%ld/schedstat", syscall(SYS_gettid));
+-        fp = fopen(path, "r");
+-        fscanf(fp, "%ld %ld ", &val[0], &val[1]);
+-        fclose(fp);
+-
+-        return val[1];
+-}
+-
+ static int cmp_timespec(struct timespec *a, struct timespec *b)
+ {
+ 	if (a->tv_sec > b->tv_sec)
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index fa2ac0e56b43c..fe7ee2b0f29c2 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -48,6 +48,7 @@ ARCH		?= $(SUBARCH)
+ # When local build is done, headers are installed in the default
+ # INSTALL_HDR_PATH usr/include.
+ .PHONY: khdr
++.NOTPARALLEL:
+ khdr:
+ ifndef KSFT_KHDR_INSTALL_DONE
+ ifeq (1,$(DEFAULT_INSTALL_HDR_PATH))
+diff --git a/tools/usb/testusb.c b/tools/usb/testusb.c
+index ee8208b2f9460..69c3ead25313d 100644
+--- a/tools/usb/testusb.c
++++ b/tools/usb/testusb.c
+@@ -265,12 +265,6 @@ nomem:
+ 	}
+ 
+ 	entry->ifnum = ifnum;
+-
+-	/* FIXME update USBDEVFS_CONNECTINFO so it tells about high speed etc */
+-
+-	fprintf(stderr, "%s speed\t%s\t%u\n",
+-		speed(entry->speed), entry->name, entry->ifnum);
+-
+ 	entry->next = testdevs;
+ 	testdevs = entry;
+ 	return 0;
+@@ -299,6 +293,14 @@ static void *handle_testdev (void *arg)
+ 		return 0;
+ 	}
+ 
++	status  =  ioctl(fd, USBDEVFS_GET_SPEED, NULL);
++	if (status < 0)
++		fprintf(stderr, "USBDEVFS_GET_SPEED failed %d\n", status);
++	else
++		dev->speed = status;
++	fprintf(stderr, "%s speed\t%s\t%u\n",
++			speed(dev->speed), dev->name, dev->ifnum);
++
+ restart:
+ 	for (i = 0; i < TEST_CASES; i++) {
+ 		if (dev->test != -1 && dev->test != i)
+diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
+index 0517c744b04e8..f62f10c988db1 100644
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -1331,7 +1331,7 @@ int main(int argc, char *argv[])
+ 	if (opt_list && opt_list_mapcnt)
+ 		kpagecount_fd = checked_open(PROC_KPAGECOUNT, O_RDONLY);
+ 
+-	if (opt_mark_idle && opt_file)
++	if (opt_mark_idle)
+ 		page_idle_fd = checked_open(SYS_KERNEL_MM_PAGE_IDLE, O_RDWR);
+ 
+ 	if (opt_list && opt_pid)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index b50dbe269f4bf..1a11dcb670a39 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -3053,15 +3053,19 @@ out:
+ 
+ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
+ {
+-	unsigned int old, val, shrink;
++	unsigned int old, val, shrink, grow_start;
+ 
+ 	old = val = vcpu->halt_poll_ns;
+ 	shrink = READ_ONCE(halt_poll_ns_shrink);
++	grow_start = READ_ONCE(halt_poll_ns_grow_start);
+ 	if (shrink == 0)
+ 		val = 0;
+ 	else
+ 		val /= shrink;
+ 
++	if (val < grow_start)
++		val = 0;
++
+ 	vcpu->halt_poll_ns = val;
+ 	trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
+ }



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-13 16:15 Alice Ferrazzi
  0 siblings, 0 replies; 40+ messages in thread
From: Alice Ferrazzi @ 2021-10-13 16:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     94580b1463bbf03ca324c74c008b18d535048b6f
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 13 16:14:14 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Oct 13 16:14:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=94580b14

Linux patch 5.14.12

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-5.14.12.patch | 5808 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5812 insertions(+)

diff --git a/0000_README b/0000_README
index 6096d94..4456b48 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1010_linux-5.14.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.11
 
+Patch:  1011_linux-5.14.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.14.12.patch b/1011_linux-5.14.12.patch
new file mode 100644
index 0000000..5f0db51
--- /dev/null
+++ b/1011_linux-5.14.12.patch
@@ -0,0 +1,5808 @@
+diff --git a/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml b/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
+index 26932d2e86aba..8608b9dd8e9db 100644
+--- a/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
++++ b/Documentation/devicetree/bindings/display/bridge/ti,sn65dsi86.yaml
+@@ -18,7 +18,7 @@ properties:
+     const: ti,sn65dsi86
+ 
+   reg:
+-    const: 0x2d
++    enum: [ 0x2c, 0x2d ]
+ 
+   enable-gpios:
+     maxItems: 1
+diff --git a/Makefile b/Makefile
+index ca6c4472775cb..02cde08f4978e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index d3082b9774e40..4f88e96d81ddb 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -56,6 +56,7 @@
+ 	panel {
+ 		compatible = "edt,etm0700g0dh6";
+ 		pinctrl-0 = <&pinctrl_display_gpio>;
++		pinctrl-names = "default";
+ 		enable-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>;
+ 
+ 		port {
+@@ -76,8 +77,7 @@
+ 		regulator-name = "vbus";
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+-		gpio = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
++		gpio = <&gpio1 2 0>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+index cb8b539eb29d1..e5c4dc65fbabf 100644
+--- a/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
++++ b/arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
+@@ -5,6 +5,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/input/input.h>
++#include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pwm/pwm.h>
+ 
+ / {
+@@ -277,6 +278,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <0>;
++			color = <LED_COLOR_ID_RED>;
+ 		};
+ 
+ 		chan@1 {
+@@ -284,6 +286,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <1>;
++			color = <LED_COLOR_ID_GREEN>;
+ 		};
+ 
+ 		chan@2 {
+@@ -291,6 +294,7 @@
+ 			led-cur = /bits/ 8 <0x20>;
+ 			max-cur = /bits/ 8 <0x60>;
+ 			reg = <2>;
++			color = <LED_COLOR_ID_BLUE>;
+ 		};
+ 
+ 		chan@3 {
+@@ -298,6 +302,7 @@
+ 			led-cur = /bits/ 8 <0x0>;
+ 			max-cur = /bits/ 8 <0x0>;
+ 			reg = <3>;
++			color = <LED_COLOR_ID_WHITE>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-pico.dtsi b/arch/arm/boot/dts/imx6qdl-pico.dtsi
+index 5de4ccb979163..f7a56d6b160c8 100644
+--- a/arch/arm/boot/dts/imx6qdl-pico.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-pico.dtsi
+@@ -176,7 +176,18 @@
+ 	pinctrl-0 = <&pinctrl_enet>;
+ 	phy-mode = "rgmii-id";
+ 	phy-reset-gpios = <&gpio1 26 GPIO_ACTIVE_LOW>;
++	phy-handle = <&phy>;
+ 	status = "okay";
++
++	mdio {
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		phy: ethernet-phy@1 {
++			reg = <1>;
++			qca,clk-out-frequency = <125000000>;
++		};
++	};
+ };
+ 
+ &hdmi {
+diff --git a/arch/arm/boot/dts/imx6sx-sdb.dts b/arch/arm/boot/dts/imx6sx-sdb.dts
+index 5a63ca6157229..99f4cf777a384 100644
+--- a/arch/arm/boot/dts/imx6sx-sdb.dts
++++ b/arch/arm/boot/dts/imx6sx-sdb.dts
+@@ -114,7 +114,7 @@
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		spi-max-frequency = <29000000>;
+ 		spi-rx-bus-width = <4>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		reg = <0>;
+ 	};
+ 
+@@ -124,7 +124,7 @@
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		spi-max-frequency = <29000000>;
+ 		spi-rx-bus-width = <4>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		reg = <2>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi b/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
+index 779cc536566d6..a3fde3316c736 100644
+--- a/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
++++ b/arch/arm/boot/dts/imx6ul-14x14-evk.dtsi
+@@ -292,7 +292,7 @@
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		spi-max-frequency = <29000000>;
+ 		spi-rx-bus-width = <4>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		reg = <0>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/omap3430-sdp.dts b/arch/arm/boot/dts/omap3430-sdp.dts
+index c5b9037184149..7d530ae3483b8 100644
+--- a/arch/arm/boot/dts/omap3430-sdp.dts
++++ b/arch/arm/boot/dts/omap3430-sdp.dts
+@@ -101,7 +101,7 @@
+ 
+ 	nand@1,0 {
+ 		compatible = "ti,omap2-nand";
+-		reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
++		reg = <1 0 4>; /* CS1, offset 0, IO size 4 */
+ 		interrupt-parent = <&gpmc>;
+ 		interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
+ 			     <1 IRQ_TYPE_NONE>;	/* termcount */
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index e36d590e83732..72c4a9fc41a20 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -198,7 +198,7 @@
+ 			clock-frequency = <19200000>;
+ 		};
+ 
+-		pxo_board {
++		pxo_board: pxo_board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <27000000>;
+@@ -1148,7 +1148,7 @@
+ 		};
+ 
+ 		gpu: adreno-3xx@4300000 {
+-			compatible = "qcom,adreno-3xx";
++			compatible = "qcom,adreno-320.2", "qcom,adreno";
+ 			reg = <0x04300000 0x20000>;
+ 			reg-names = "kgsl_3d0_reg_memory";
+ 			interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
+@@ -1163,7 +1163,6 @@
+ 			    <&mmcc GFX3D_AHB_CLK>,
+ 			    <&mmcc GFX3D_AXI_CLK>,
+ 			    <&mmcc MMSS_IMEM_AHB_CLK>;
+-			qcom,chipid = <0x03020002>;
+ 
+ 			iommus = <&gfx3d 0
+ 				  &gfx3d 1
+@@ -1306,7 +1305,7 @@
+ 			reg-names = "dsi_pll", "dsi_phy", "dsi_phy_regulator";
+ 			clock-names = "iface_clk", "ref";
+ 			clocks = <&mmcc DSI_M_AHB_CLK>,
+-				 <&cxo_board>;
++				 <&pxo_board>;
+ 		};
+ 
+ 
+diff --git a/arch/arm/configs/gemini_defconfig b/arch/arm/configs/gemini_defconfig
+index d2d5f1cf815f2..e6ff844821cfb 100644
+--- a/arch/arm/configs/gemini_defconfig
++++ b/arch/arm/configs/gemini_defconfig
+@@ -76,6 +76,7 @@ CONFIG_REGULATOR_FIXED_VOLTAGE=y
+ CONFIG_DRM=y
+ CONFIG_DRM_PANEL_ILITEK_IL9322=y
+ CONFIG_DRM_TVE200=y
++CONFIG_FB=y
+ CONFIG_LOGO=y
+ CONFIG_USB=y
+ CONFIG_USB_MON=y
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 90dcdfe3b3d0d..2dee383f90509 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -514,18 +514,22 @@ static const struct of_device_id ramc_ids[] __initconst = {
+ 	{ /*sentinel*/ }
+ };
+ 
+-static __init void at91_dt_ramc(void)
++static __init int at91_dt_ramc(void)
+ {
+ 	struct device_node *np;
+ 	const struct of_device_id *of_id;
+ 	int idx = 0;
+ 	void *standby = NULL;
+ 	const struct ramc_info *ramc;
++	int ret;
+ 
+ 	for_each_matching_node_and_match(np, ramc_ids, &of_id) {
+ 		soc_pm.data.ramc[idx] = of_iomap(np, 0);
+-		if (!soc_pm.data.ramc[idx])
+-			panic(pr_fmt("unable to map ramc[%d] cpu registers\n"), idx);
++		if (!soc_pm.data.ramc[idx]) {
++			pr_err("unable to map ramc[%d] cpu registers\n", idx);
++			ret = -ENOMEM;
++			goto unmap_ramc;
++		}
+ 
+ 		ramc = of_id->data;
+ 		if (!standby)
+@@ -535,15 +539,26 @@ static __init void at91_dt_ramc(void)
+ 		idx++;
+ 	}
+ 
+-	if (!idx)
+-		panic(pr_fmt("unable to find compatible ram controller node in dtb\n"));
++	if (!idx) {
++		pr_err("unable to find compatible ram controller node in dtb\n");
++		ret = -ENODEV;
++		goto unmap_ramc;
++	}
+ 
+ 	if (!standby) {
+ 		pr_warn("ramc no standby function available\n");
+-		return;
++		return 0;
+ 	}
+ 
+ 	at91_cpuidle_device.dev.platform_data = standby;
++
++	return 0;
++
++unmap_ramc:
++	while (idx)
++		iounmap(soc_pm.data.ramc[--idx]);
++
++	return ret;
+ }
+ 
+ static void at91rm9200_idle(void)
+@@ -866,6 +881,8 @@ static void __init at91_pm_init(void (*pm_idle)(void))
+ 
+ void __init at91rm9200_pm_init(void)
+ {
++	int ret;
++
+ 	if (!IS_ENABLED(CONFIG_SOC_AT91RM9200))
+ 		return;
+ 
+@@ -877,7 +894,9 @@ void __init at91rm9200_pm_init(void)
+ 	soc_pm.data.standby_mode = AT91_PM_STANDBY;
+ 	soc_pm.data.suspend_mode = AT91_PM_ULP0;
+ 
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
+ 
+ 	/*
+ 	 * AT91RM9200 SDRAM low-power mode cannot be used with self-refresh.
+@@ -892,13 +911,17 @@ void __init sam9x60_pm_init(void)
+ 	static const int modes[] __initconst = {
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST, AT91_PM_ULP1,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAM9X60))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+ 	at91_pm_modes_init();
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ 
+ 	soc_pm.ws_ids = sam9x60_ws_ids;
+@@ -907,6 +930,8 @@ void __init sam9x60_pm_init(void)
+ 
+ void __init at91sam9_pm_init(void)
+ {
++	int ret;
++
+ 	if (!IS_ENABLED(CONFIG_SOC_AT91SAM9))
+ 		return;
+ 
+@@ -918,7 +943,10 @@ void __init at91sam9_pm_init(void)
+ 	soc_pm.data.standby_mode = AT91_PM_STANDBY;
+ 	soc_pm.data.suspend_mode = AT91_PM_ULP0;
+ 
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(at91sam9_idle);
+ }
+ 
+@@ -927,12 +955,16 @@ void __init sama5_pm_init(void)
+ 	static const int modes[] __initconst = {
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAMA5))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ }
+ 
+@@ -942,13 +974,17 @@ void __init sama5d2_pm_init(void)
+ 		AT91_PM_STANDBY, AT91_PM_ULP0, AT91_PM_ULP0_FAST, AT91_PM_ULP1,
+ 		AT91_PM_BACKUP,
+ 	};
++	int ret;
+ 
+ 	if (!IS_ENABLED(CONFIG_SOC_SAMA5D2))
+ 		return;
+ 
+ 	at91_pm_modes_validate(modes, ARRAY_SIZE(modes));
+ 	at91_pm_modes_init();
+-	at91_dt_ramc();
++	ret = at91_dt_ramc();
++	if (ret)
++		return;
++
+ 	at91_pm_init(NULL);
+ 
+ 	soc_pm.ws_ids = sama5d2_ws_ids;
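The at91_dt_ramc() change above replaces panic() with an error return and unwinds whatever ramc registers were already mapped before the failure. A minimal userspace sketch of the same unwind-on-failure idiom, with malloc/free standing in for of_iomap/iounmap (the names and error codes here are illustrative, not the kernel API):

#include <stdio.h>
#include <stdlib.h>

#define NRES 4

/* Acquire resources one by one; on failure release only those already
 * acquired, in reverse order, and report an error to the caller. */
static int acquire_all(void *res[NRES])
{
	int idx, ret;

	for (idx = 0; idx < NRES; idx++) {
		res[idx] = malloc(16);          /* stands in for of_iomap() */
		if (!res[idx]) {
			fprintf(stderr, "unable to map res[%d]\n", idx);
			ret = -1;               /* stands in for -ENOMEM */
			goto unwind;
		}
	}
	return 0;

unwind:
	while (idx)
		free(res[--idx]);               /* stands in for iounmap() */
	return ret;
}

int main(void)
{
	void *res[NRES];
	int i;

	if (acquire_all(res) == 0) {
		for (i = 0; i < NRES; i++)
			free(res[i]);
		puts("all resources acquired and released");
	}
	return 0;
}

The callers follow the same shape as the patched *_pm_init() functions: check the return value and bail out instead of letting a missing ram controller take the whole boot down.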
+diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c
+index 9244437cb1b9b..f2ecca339910a 100644
+--- a/arch/arm/mach-imx/pm-imx6.c
++++ b/arch/arm/mach-imx/pm-imx6.c
+@@ -10,6 +10,7 @@
+ #include <linux/io.h>
+ #include <linux/irq.h>
+ #include <linux/genalloc.h>
++#include <linux/irqchip/arm-gic.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
+ #include <linux/of.h>
+@@ -619,6 +620,7 @@ static void __init imx6_pm_common_init(const struct imx6_pm_socdata
+ 
+ static void imx6_pm_stby_poweroff(void)
+ {
++	gic_cpu_if_down(0);
+ 	imx6_set_lpm(STOP_POWER_OFF);
+ 	imx6q_suspend_finish(0);
+ 
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 12b26e04686fa..0c2936c7a3799 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -3614,6 +3614,8 @@ int omap_hwmod_init_module(struct device *dev,
+ 		oh->flags |= HWMOD_SWSUP_SIDLE_ACT;
+ 	if (data->cfg->quirks & SYSC_QUIRK_SWSUP_MSTANDBY)
+ 		oh->flags |= HWMOD_SWSUP_MSTANDBY;
++	if (data->cfg->quirks & SYSC_QUIRK_CLKDM_NOAUTO)
++		oh->flags |= HWMOD_CLKDM_NOAUTO;
+ 
+ 	error = omap_hwmod_check_module(dev, oh, data, sysc_fields,
+ 					rev_offs, sysc_offs, syss_offs,
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index a951276f05475..a903b26cde409 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -36,6 +36,10 @@
+  *                        +-----+
+  *                        |RSVD | JIT scratchpad
+  * current ARM_SP =>      +-----+ <= (BPF_FP - STACK_SIZE + SCRATCH_SIZE)
++ *                        | ... | caller-saved registers
++ *                        +-----+
++ *                        | ... | arguments passed on stack
++ * ARM_SP during call =>  +-----+
+  *                        |     |
+  *                        | ... | Function call stack
+  *                        |     |
+@@ -63,6 +67,12 @@
+  *
+  * When popping registers off the stack at the end of a BPF function, we
+  * reference them via the current ARM_FP register.
++ *
++ * Some eBPF operations are implemented via a call to a helper function.
++ * Such calls are "invisible" in the eBPF code, so it is up to the calling
++ * program to preserve any caller-saved ARM registers during the call. The
++ * JIT emits code to push and pop those registers onto the stack, immediately
++ * above the callee stack frame.
+  */
+ #define CALLEE_MASK	(1 << ARM_R4 | 1 << ARM_R5 | 1 << ARM_R6 | \
+ 			 1 << ARM_R7 | 1 << ARM_R8 | 1 << ARM_R9 | \
+@@ -70,6 +80,8 @@
+ #define CALLEE_PUSH_MASK (CALLEE_MASK | 1 << ARM_LR)
+ #define CALLEE_POP_MASK  (CALLEE_MASK | 1 << ARM_PC)
+ 
++#define CALLER_MASK	(1 << ARM_R0 | 1 << ARM_R1 | 1 << ARM_R2 | 1 << ARM_R3)
++
+ enum {
+ 	/* Stack layout - these are offsets from (top of stack - 4) */
+ 	BPF_R2_HI,
+@@ -464,6 +476,7 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
+ 
+ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
+ {
++	const int exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
+ 	const s8 *tmp = bpf2a32[TMP_REG_1];
+ 
+ #if __LINUX_ARM_ARCH__ == 7
+@@ -495,11 +508,17 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
+ 		emit(ARM_MOV_R(ARM_R0, rm), ctx);
+ 	}
+ 
++	/* Push caller-saved registers on stack */
++	emit(ARM_PUSH(CALLER_MASK & ~exclude_mask), ctx);
++
+ 	/* Call appropriate function */
+ 	emit_mov_i(ARM_IP, op == BPF_DIV ?
+ 		   (u32)jit_udiv32 : (u32)jit_mod32, ctx);
+ 	emit_blx_r(ARM_IP, ctx);
+ 
++	/* Restore caller-saved registers from stack */
++	emit(ARM_POP(CALLER_MASK & ~exclude_mask), ctx);
++
+ 	/* Save return value */
+ 	if (rd != ARM_R0)
+ 		emit(ARM_MOV_R(rd, ARM_R0), ctx);
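The stack-layout comment and emit_udivmod() hunks above preserve the AAPCS caller-saved argument registers around the "invisible" helper call, minus the pair that carries the operands and the result. A small sketch of just the mask arithmetic (register numbering per the AAPCS; the macro names mirror the patch, but the program itself is illustrative, not the JIT):

#include <stdio.h>

#define BIT(n)      (1u << (n))
#define ARM_R0      0
#define ARM_R1      1
#define ARM_R2      2
#define ARM_R3      3

/* r0-r3 are caller-saved under the AAPCS */
#define CALLER_MASK (BIT(ARM_R0) | BIT(ARM_R1) | BIT(ARM_R2) | BIT(ARM_R3))

int main(void)
{
	/* r0/r1 hold the operands and the return value of jit_udiv32()
	 * and jit_mod32(), so they must not be saved and restored. */
	unsigned exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
	unsigned push_mask = CALLER_MASK & ~exclude_mask;

	printf("push/pop mask: 0x%x (r2 and r3 only)\n", push_mask);
	return 0;
}

Excluding r0/r1 keeps the division result live across the pop, which is why the patch restores the remaining caller-saved registers before the ARM_MOV_R that copies r0 into the destination register.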
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index 343ecf0e8973a..06b36cc65865c 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -405,9 +405,9 @@
+ 			interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
+ 			clock-frequency = <0>; /* fixed up by bootloader */
+ 			clocks = <&clockgen QORIQ_CLK_HWACCEL 1>;
+-			voltage-ranges = <1800 1800 3300 3300>;
++			voltage-ranges = <1800 1800>;
+ 			sdhci,auto-cmd12;
+-			broken-cd;
++			non-removable;
+ 			little-endian;
+ 			bus-width = <4>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 988f8ab679ad6..40f5e7a3b0644 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -91,7 +91,7 @@
+ 		#size-cells = <1>;
+ 		compatible = "jedec,spi-nor";
+ 		spi-max-frequency = <80000000>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-evk.dts b/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
+index 4e2820d19244a..a2b24d4d4e3e7 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-evk.dts
+@@ -48,7 +48,7 @@
+ 		#size-cells = <1>;
+ 		compatible = "jedec,spi-nor";
+ 		spi-max-frequency = <80000000>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+index d0456daefda88..9db9b90bf2bc9 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+@@ -102,6 +102,7 @@
+ 				regulator-min-microvolt = <850000>;
+ 				regulator-max-microvolt = <950000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 				regulator-ramp-delay = <3125>;
+ 				nxp,dvs-run-voltage = <950000>;
+ 				nxp,dvs-standby-voltage = <850000>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index 54eaf3d6055b1..3b2d627a03428 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -101,7 +101,7 @@
+ 		#size-cells = <1>;
+ 		compatible = "jedec,spi-nor";
+ 		spi-max-frequency = <80000000>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
+index aa78e0d8c72b2..fc178eebf8aa4 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
+@@ -74,7 +74,7 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <80000000>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <4>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-evk.dts b/arch/arm64/boot/dts/freescale/imx8mq-evk.dts
+index 4d2035e3dd7cc..4886f3e31587a 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mq-evk.dts
+@@ -337,6 +337,8 @@
+ 		#size-cells = <1>;
+ 		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		spi-max-frequency = <29000000>;
++		spi-tx-bus-width = <1>;
++		spi-rx-bus-width = <4>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-kontron-pitx-imx8m.dts b/arch/arm64/boot/dts/freescale/imx8mq-kontron-pitx-imx8m.dts
+index f593e4ff62e1c..564746d5000d5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-kontron-pitx-imx8m.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mq-kontron-pitx-imx8m.dts
+@@ -281,7 +281,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		reg = <0>;
+-		spi-tx-bus-width = <4>;
++		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <4>;
+ 		m25p,fast-read;
+ 		spi-max-frequency = <50000000>;
+diff --git a/arch/arm64/boot/dts/qcom/pm8150.dtsi b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+index c566a64b1373f..00385b1fd358f 100644
+--- a/arch/arm64/boot/dts/qcom/pm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8150.dtsi
+@@ -48,7 +48,7 @@
+ 		#size-cells = <0>;
+ 
+ 		pon: power-on@800 {
+-			compatible = "qcom,pm8916-pon";
++			compatible = "qcom,pm8998-pon";
+ 			reg = <0x0800>;
+ 
+ 			pon_pwrkey: pwrkey {
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index c08f074106994..188c5768a55ae 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1437,9 +1437,9 @@
+ 
+ 		cpufreq_hw: cpufreq@18591000 {
+ 			compatible = "qcom,cpufreq-epss";
+-			reg = <0 0x18591100 0 0x900>,
+-			      <0 0x18592100 0 0x900>,
+-			      <0 0x18593100 0 0x900>;
++			reg = <0 0x18591000 0 0x1000>,
++			      <0 0x18592000 0 0x1000>,
++			      <0 0x18593000 0 0x1000>;
+ 			clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ 			clock-names = "xo", "alternate";
+ 			#freq-domain-cells = <1>;
+diff --git a/arch/mips/include/asm/mips-cps.h b/arch/mips/include/asm/mips-cps.h
+index 35fb8ee6dd33e..fd43d876892ec 100644
+--- a/arch/mips/include/asm/mips-cps.h
++++ b/arch/mips/include/asm/mips-cps.h
+@@ -10,8 +10,6 @@
+ #include <linux/io.h>
+ #include <linux/types.h>
+ 
+-#include <asm/mips-boards/launch.h>
+-
+ extern unsigned long __cps_access_bad_size(void)
+ 	__compiletime_error("Bad size for CPS accessor");
+ 
+@@ -167,30 +165,11 @@ static inline uint64_t mips_cps_cluster_config(unsigned int cluster)
+  */
+ static inline unsigned int mips_cps_numcores(unsigned int cluster)
+ {
+-	unsigned int ncores;
+-
+ 	if (!mips_cm_present())
+ 		return 0;
+ 
+ 	/* Add one before masking to handle 0xff indicating no cores */
+-	ncores = (mips_cps_cluster_config(cluster) + 1) & CM_GCR_CONFIG_PCORES;
+-
+-	if (IS_ENABLED(CONFIG_SOC_MT7621)) {
+-		struct cpulaunch *launch;
+-
+-		/*
+-		 * Ralink MT7621S SoC is single core, but the GCR_CONFIG method
+-		 * always reports 2 cores. Check the second core's LAUNCH_FREADY
+-		 * flag to detect if the second core is missing. This method
+-		 * only works before the core has been started.
+-		 */
+-		launch = (struct cpulaunch *)CKSEG0ADDR(CPULAUNCH);
+-		launch += 2; /* MT7621 has 2 VPEs per core */
+-		if (!(launch->flags & LAUNCH_FREADY))
+-			ncores = 1;
+-	}
+-
+-	return ncores;
++	return (mips_cps_cluster_config(cluster) + 1) & CM_GCR_CONFIG_PCORES;
+ }
+ 
+ /**
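The simplified mips_cps_numcores() above relies on the "add one before masking" trick: a PCORES field of 0xff means "no cores", and adding one first makes the masked result wrap to zero. A quick arithmetic check (an 8-bit field is assumed here, matching what CM_GCR_CONFIG_PCORES encodes):

#include <stdio.h>

#define PCORES_MASK 0xff   /* assumed 8-bit field width */

static unsigned numcores(unsigned config)
{
	/* Add one before masking so 0xff ("no cores") yields 0 */
	return (config + 1) & PCORES_MASK;
}

int main(void)
{
	printf("field 0xff -> %u cores\n", numcores(0xff)); /* 0 */
	printf("field 0x01 -> %u cores\n", numcores(0x01)); /* 2 */
	return 0;
}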
+diff --git a/arch/powerpc/boot/dts/fsl/t1023rdb.dts b/arch/powerpc/boot/dts/fsl/t1023rdb.dts
+index 5ba6fbfca2742..f82f85c65964c 100644
+--- a/arch/powerpc/boot/dts/fsl/t1023rdb.dts
++++ b/arch/powerpc/boot/dts/fsl/t1023rdb.dts
+@@ -154,7 +154,7 @@
+ 
+ 			fm1mac3: ethernet@e4000 {
+ 				phy-handle = <&sgmii_aqr_phy3>;
+-				phy-connection-type = "sgmii-2500";
++				phy-connection-type = "2500base-x";
+ 				sleep = <&rcpm 0x20000000>;
+ 			};
+ 
+diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
+index d4b145b279f6c..9f38040f0641d 100644
+--- a/arch/powerpc/include/asm/book3s/32/kup.h
++++ b/arch/powerpc/include/asm/book3s/32/kup.h
+@@ -136,6 +136,14 @@ static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+ 	if (kuap_is_disabled())
+ 		return;
+ 
++	if (unlikely(kuap != KUAP_NONE)) {
++		current->thread.kuap = KUAP_NONE;
++		kuap_lock(kuap, false);
++	}
++
++	if (likely(regs->kuap == KUAP_NONE))
++		return;
++
+ 	current->thread.kuap = regs->kuap;
+ 
+ 	kuap_unlock(regs->kuap, false);
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index 6b800d3e2681f..a925dbc5833c7 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -525,10 +525,9 @@ static __always_inline long ____##func(struct pt_regs *regs)
+ /* kernel/traps.c */
+ DECLARE_INTERRUPT_HANDLER_NMI(system_reset_exception);
+ #ifdef CONFIG_PPC_BOOK3S_64
+-DECLARE_INTERRUPT_HANDLER_ASYNC(machine_check_exception);
+-#else
+-DECLARE_INTERRUPT_HANDLER_NMI(machine_check_exception);
++DECLARE_INTERRUPT_HANDLER_ASYNC(machine_check_exception_async);
+ #endif
++DECLARE_INTERRUPT_HANDLER_NMI(machine_check_exception);
+ DECLARE_INTERRUPT_HANDLER(SMIException);
+ DECLARE_INTERRUPT_HANDLER(handle_hmi_exception);
+ DECLARE_INTERRUPT_HANDLER(unknown_exception);
+diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
+index 111249fd619de..038ce8d9061d1 100644
+--- a/arch/powerpc/kernel/dma-iommu.c
++++ b/arch/powerpc/kernel/dma-iommu.c
+@@ -184,6 +184,15 @@ u64 dma_iommu_get_required_mask(struct device *dev)
+ 	struct iommu_table *tbl = get_iommu_table_base(dev);
+ 	u64 mask;
+ 
++	if (dev_is_pci(dev)) {
++		u64 bypass_mask = dma_direct_get_required_mask(dev);
++
++		if (dma_iommu_dma_supported(dev, bypass_mask)) {
++			dev_info(dev, "%s: returning bypass mask 0x%llx\n", __func__, bypass_mask);
++			return bypass_mask;
++		}
++	}
++
+ 	if (!tbl)
+ 		return 0;
+ 
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 37859e62a8dcb..eaf1f72131a18 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1243,7 +1243,7 @@ EXC_COMMON_BEGIN(machine_check_common)
+ 	li	r10,MSR_RI
+ 	mtmsrd 	r10,1
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+-	bl	machine_check_exception
++	bl	machine_check_exception_async
+ 	b	interrupt_return_srr
+ 
+ 
+@@ -1303,7 +1303,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ 	subi	r12,r12,1
+ 	sth	r12,PACA_IN_MCE(r13)
+ 
+-	/* Invoke machine_check_exception to print MCE event and panic. */
++	/*
++	 * Invoke machine_check_exception to print MCE event and panic.
++	 * This is the NMI version of the handler because we are called from
++	 * the early handler which is a true NMI.
++	 */
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+ 	bl	machine_check_exception
+ 
+@@ -1665,27 +1669,30 @@ EXC_COMMON_BEGIN(program_check_common)
+ 	 */
+ 
+ 	andi.	r10,r12,MSR_PR
+-	bne	2f			/* If userspace, go normal path */
++	bne	.Lnormal_stack		/* If userspace, go normal path */
+ 
+ 	andis.	r10,r12,(SRR1_PROGTM)@h
+-	bne	1f			/* If TM, emergency		*/
++	bne	.Lemergency_stack	/* If TM, emergency		*/
+ 
+ 	cmpdi	r1,-INT_FRAME_SIZE	/* check if r1 is in userspace	*/
+-	blt	2f			/* normal path if not		*/
++	blt	.Lnormal_stack		/* normal path if not		*/
+ 
+ 	/* Use the emergency stack					*/
+-1:	andi.	r10,r12,MSR_PR		/* Set CR0 correctly for label	*/
++.Lemergency_stack:
++	andi.	r10,r12,MSR_PR		/* Set CR0 correctly for label	*/
+ 					/* 3 in EXCEPTION_PROLOG_COMMON	*/
+ 	mr	r10,r1			/* Save r1			*/
+ 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
+ 	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
+ 	__ISTACK(program_check)=0
+ 	__GEN_COMMON_BODY program_check
+-	b 3f
+-2:
++	b .Ldo_program_check
++
++.Lnormal_stack:
+ 	__ISTACK(program_check)=1
+ 	__GEN_COMMON_BODY program_check
+-3:
++
++.Ldo_program_check:
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
+ 	bl	program_check_exception
+ 	REST_NVGPRS(r1) /* instruction emulation may change GPRs */
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index d56254f05e174..08356ec9bfed4 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -341,10 +341,16 @@ static bool exception_common(int signr, struct pt_regs *regs, int code,
+ 		return false;
+ 	}
+ 
+-	show_signal_msg(signr, regs, code, addr);
++	/*
++	 * Must not enable interrupts even for a user-mode exception, because
++	 * this can be called from a machine check, which may be an NMI or IRQ
++	 * context that does not tolerate interrupts being enabled. Could check
++	 * for in_hardirq || in_nmi perhaps, but there doesn't seem to be a good
++	 * reason why _exception() should enable irqs for an exception handler;
++	 * the handlers themselves do that directly.
++	 */
+ 
+-	if (arch_irqs_disabled())
+-		interrupt_cond_local_irq_enable(regs);
++	show_signal_msg(signr, regs, code, addr);
+ 
+ 	current->thread.trap_nr = code;
+ 
+@@ -791,24 +797,22 @@ void die_mce(const char *str, struct pt_regs *regs, long err)
+ 	 * do_exit() checks for in_interrupt() and panics in that case, so
+ 	 * exit the irq/nmi before calling die.
+ 	 */
+-	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+-		irq_exit();
+-	else
++	if (in_nmi())
+ 		nmi_exit();
++	else
++		irq_exit();
+ 	die(str, regs, err);
+ }
+ 
+ /*
+- * BOOK3S_64 does not call this handler as a non-maskable interrupt
++ * BOOK3S_64 does not usually call this handler as a non-maskable interrupt
+  * (it uses its own early real-mode handler to handle the MCE proper
+  * and then raises irq_work to call this handler when interrupts are
+- * enabled).
++ * enabled). The one exception is when the early handler is
++ * unrecoverable; then it calls this directly to try to get a
++ * message out.
+  */
+-#ifdef CONFIG_PPC_BOOK3S_64
+-DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception)
+-#else
+-DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
+-#endif
++static void __machine_check_exception(struct pt_regs *regs)
+ {
+ 	int recover = 0;
+ 
+@@ -842,12 +846,19 @@ bail:
+ 	/* Must die if the interrupt is not recoverable */
+ 	if (!(regs->msr & MSR_RI))
+ 		die_mce("Unrecoverable Machine check", regs, SIGBUS);
++}
+ 
+ #ifdef CONFIG_PPC_BOOK3S_64
+-	return;
+-#else
+-	return 0;
++DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception_async)
++{
++	__machine_check_exception(regs);
++}
+ #endif
++DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
++{
++	__machine_check_exception(regs);
++
++	return 0;
+ }
+ 
+ DEFINE_INTERRUPT_HANDLER(SMIException) /* async? */
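The traps.c rework above factors the machine-check body into one static function and defines two thin entry points, one for NMI context and one for the async irq_work path, instead of compiling the same body under two different handler macros. A userspace sketch of that shape (the names are illustrative; the kernel's DEFINE_INTERRUPT_HANDLER_* macros additionally generate entry/exit bookkeeping):

#include <stdio.h>

struct regs { unsigned long msr; };

/* Single implementation shared by both entry points */
static void __machine_check(struct regs *regs)
{
	printf("handling machine check, msr=%lx\n", regs->msr);
}

/* Async flavour: called from irq_work, returns nothing */
static void machine_check_async(struct regs *regs)
{
	__machine_check(regs);
}

/* NMI flavour: the kernel's NMI handlers return a long */
static long machine_check_nmi(struct regs *regs)
{
	__machine_check(regs);
	return 0;
}

int main(void)
{
	struct regs r = { .msr = 0x1032 };

	machine_check_async(&r);
	return (int)machine_check_nmi(&r);
}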
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index beb12cbc8c299..a7759aa8043d2 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -355,7 +355,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 				PPC_LI32(_R0, imm);
+ 				EMIT(PPC_RAW_ADDC(dst_reg, dst_reg, _R0));
+ 			}
+-			if (imm >= 0)
++			if (imm >= 0 || (BPF_OP(code) == BPF_SUB && imm == 0x80000000))
+ 				EMIT(PPC_RAW_ADDZE(dst_reg_h, dst_reg_h));
+ 			else
+ 				EMIT(PPC_RAW_ADDME(dst_reg_h, dst_reg_h));
+@@ -623,7 +623,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 			EMIT(PPC_RAW_LI(dst_reg_h, 0));
+ 			break;
+ 		case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
+-			EMIT(PPC_RAW_SRAW(dst_reg_h, dst_reg, src_reg));
++			EMIT(PPC_RAW_SRAW(dst_reg, dst_reg, src_reg));
+ 			break;
+ 		case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */
+ 			bpf_set_seen_register(ctx, tmp_reg);
+@@ -1073,7 +1073,7 @@ cond_branch:
+ 				break;
+ 			case BPF_JMP32 | BPF_JSET | BPF_K:
+ 				/* andi does not sign-extend the immediate */
+-				if (imm >= -32768 && imm < 32768) {
++				if (imm >= 0 && imm < 32768) {
+ 					/* PPC_ANDI is _only/always_ dot-form */
+ 					EMIT(PPC_RAW_ANDI(_R0, dst_reg, imm));
+ 				} else {
+@@ -1103,7 +1103,7 @@ cond_branch:
+ 			return -EOPNOTSUPP;
+ 		}
+ 		if (BPF_CLASS(code) == BPF_ALU && !fp->aux->verifier_zext &&
+-		    !insn_is_zext(&insn[i + 1]))
++		    !insn_is_zext(&insn[i + 1]) && !(BPF_OP(code) == BPF_END && imm == 64))
+ 			EMIT(PPC_RAW_LI(dst_reg_h, 0));
+ 	}
+ 
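The JSET fix above narrows the fast path to imm in [0, 32768) because the PowerPC andi. instruction takes an unsigned 16-bit immediate: a negative 32-bit imm such as -1 cannot be represented, whereas the old check wrongly accepted it. A small demonstration of the mismatch, modelling the zero-extended immediate field in plain C:

#include <stdio.h>
#include <stdint.h>

/* andi. zero-extends its 16-bit immediate field */
static uint32_t andi_result(uint32_t reg, int32_t imm)
{
	uint32_t field = (uint32_t)imm & 0xffff;   /* what fits in the insn */
	return reg & field;
}

int main(void)
{
	uint32_t reg = 0xffff0000u;
	int32_t imm = -1;                  /* the old range check accepted this */

	printf("intended:  0x%08x\n", reg & (uint32_t)imm);   /* 0xffff0000 */
	printf("andi gave: 0x%08x\n", andi_result(reg, imm)); /* 0x00000000 */
	return 0;
}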
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index b87a63dba9c8f..dff4a2930970b 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -328,18 +328,25 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
+ 			goto bpf_alu32_trunc;
+ 		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
+-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+ 		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
++			if (!imm) {
++				goto bpf_alu32_trunc;
++			} else if (imm >= -32768 && imm < 32768) {
++				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
++			} else {
++				PPC_LI32(b2p[TMP_REG_1], imm);
++				EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
++			}
++			goto bpf_alu32_trunc;
++		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+ 		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
+-			if (BPF_OP(code) == BPF_SUB)
+-				imm = -imm;
+-			if (imm) {
+-				if (imm >= -32768 && imm < 32768)
+-					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
+-				else {
+-					PPC_LI32(b2p[TMP_REG_1], imm);
+-					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
+-				}
++			if (!imm) {
++				goto bpf_alu32_trunc;
++			} else if (imm > -32768 && imm <= 32768) {
++				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(-imm)));
++			} else {
++				PPC_LI32(b2p[TMP_REG_1], imm);
++				EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
+ 			}
+ 			goto bpf_alu32_trunc;
+ 		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
+@@ -389,8 +396,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 		case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
+ 			if (imm == 0)
+ 				return -EINVAL;
+-			else if (imm == 1)
+-				goto bpf_alu32_trunc;
++			if (imm == 1) {
++				if (BPF_OP(code) == BPF_DIV) {
++					goto bpf_alu32_trunc;
++				} else {
++					EMIT(PPC_RAW_LI(dst_reg, 0));
++					break;
++				}
++			}
+ 
+ 			PPC_LI32(b2p[TMP_REG_1], imm);
+ 			switch (BPF_CLASS(code)) {
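The comp64 rework above stops negating the immediate for BPF_SUB because imm = -imm overflows for imm == INT_MIN (0x80000000), and it shifts the addi range check to imm > -32768 && imm <= 32768, since it is -imm that has to fit in the instruction. A short check of the corner cases, intended as a model of the range logic rather than of the JIT itself:

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

/* Can "dst -= imm" be emitted as a single addi with immediate -imm? */
static int sub_fits_addi(int32_t imm)
{
	/* -imm must lie in addi's signed 16-bit range [-32768, 32767] */
	return imm > -32768 && imm <= 32768;
}

int main(void)
{
	int64_t imm = INT32_MIN;   /* 0x80000000: negating it overflows int32_t */

	printf("imm=%lld, -imm computed in 64 bits = %lld\n",
	       (long long)imm, (long long)-imm);
	printf("imm=32768 fits addi path: %d\n", sub_fits_addi(32768));   /* 1 */
	printf("imm=-32768 fits addi path: %d\n", sub_fits_addi(-32768)); /* 0 */
	return 0;
}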
+diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
+index bc15200852b7c..09fafcf2d3a06 100644
+--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
+@@ -867,6 +867,10 @@ static int __init eeh_pseries_init(void)
+ 	if (is_kdump_kernel() || reset_devices) {
+ 		pr_info("Issue PHB reset ...\n");
+ 		list_for_each_entry(phb, &hose_list, list_node) {
++			// Skip if the slot is empty
++			if (list_empty(&PCI_DN(phb->dn)->child_list))
++				continue;
++
+ 			pdn = list_first_entry(&PCI_DN(phb->dn)->child_list, struct pci_dn, list);
+ 			config_addr = pseries_eeh_get_pe_config_addr(pdn);
+ 
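The EEH fix above guards list_first_entry() with list_empty(): on an empty child list, list_first_entry() would hand back the container of the list head itself, which is not a valid pci_dn. A minimal userspace model of the guard (the types and helpers are simplified stand-ins for the kernel's list API):

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_first_entry(head, type, member) \
	container_of((head)->next, type, member)

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

struct pci_dn_model {
	int config_addr;
	struct list_head list;
};

int main(void)
{
	struct list_head children = LIST_HEAD_INIT(children);

	if (list_empty(&children)) {
		puts("slot empty, skipping");   /* the added guard */
		return 0;
	}

	/* Only safe once the guard above has been passed */
	struct pci_dn_model *pdn =
		list_first_entry(&children, struct pci_dn_model, list);
	printf("config addr %d\n", pdn->config_addr);
	return 0;
}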
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index bc74afdbf31e2..83ee0e71204cb 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -108,6 +108,12 @@ PHONY += vdso_install
+ vdso_install:
+ 	$(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso $@
+ 
++ifeq ($(CONFIG_MMU),y)
++prepare: vdso_prepare
++vdso_prepare: prepare0
++	$(Q)$(MAKE) $(build)=arch/riscv/kernel/vdso include/generated/vdso-offsets.h
++endif
++
+ ifneq ($(CONFIG_XIP_KERNEL),y)
+ ifeq ($(CONFIG_RISCV_M_MODE)$(CONFIG_SOC_CANAAN),yy)
+ KBUILD_IMAGE := $(boot)/loader.bin
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index b933b1583c9fd..34fbb3ea21d5b 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -82,4 +82,5 @@ static inline int syscall_get_arch(struct task_struct *task)
+ #endif
+ }
+ 
++asmlinkage long sys_riscv_flush_icache(uintptr_t, uintptr_t, uintptr_t);
+ #endif	/* _ASM_RISCV_SYSCALL_H */
+diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h
+index 1453a2f563bcc..208e31bc5d1c2 100644
+--- a/arch/riscv/include/asm/vdso.h
++++ b/arch/riscv/include/asm/vdso.h
+@@ -8,27 +8,32 @@
+ #ifndef _ASM_RISCV_VDSO_H
+ #define _ASM_RISCV_VDSO_H
+ 
+-#include <linux/types.h>
+ 
+-#ifndef CONFIG_GENERIC_TIME_VSYSCALL
+-struct vdso_data {
+-};
+-#endif
++/*
++ * All systems with an MMU have a VDSO, but systems without an MMU don't
++ * support shared libraries and therefore don't have one.
++ */
++#ifdef CONFIG_MMU
+ 
++#include <linux/types.h>
+ /*
+- * The VDSO symbols are mapped into Linux so we can just use regular symbol
+- * addressing to get their offsets in userspace.  The symbols are mapped at an
+- * offset of 0, but since the linker must support setting weak undefined
+- * symbols to the absolute address 0 it also happens to support other low
+- * addresses even when the code model suggests those low addresses would not
+- * otherwise be availiable.
++ * All systems with an MMU have a VDSO, but systems without an MMU don't
++ * support shared libraries and therefore don't have one.
+  */
++#ifdef CONFIG_MMU
++
++#define __VVAR_PAGES    1
++
++#ifndef __ASSEMBLY__
++#include <generated/vdso-offsets.h>
++
+ #define VDSO_SYMBOL(base, name)							\
+-({										\
+-	extern const char __vdso_##name[];					\
+-	(void __user *)((unsigned long)(base) + __vdso_##name);			\
+-})
++	(void __user *)((unsigned long)(base) + __vdso_##name##_offset)
++
++#endif /* CONFIG_MMU */
++
++#endif /* !__ASSEMBLY__ */
+ 
+-asmlinkage long sys_riscv_flush_icache(uintptr_t, uintptr_t, uintptr_t);
++#endif /* CONFIG_MMU */
+ 
+ #endif /* _ASM_RISCV_VDSO_H */
+diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
+index 4b989ae15d59f..8062996c2dfd0 100644
+--- a/arch/riscv/include/uapi/asm/unistd.h
++++ b/arch/riscv/include/uapi/asm/unistd.h
+@@ -18,9 +18,10 @@
+ #ifdef __LP64__
+ #define __ARCH_WANT_NEW_STAT
+ #define __ARCH_WANT_SET_GET_RLIMIT
+-#define __ARCH_WANT_SYS_CLONE3
+ #endif /* __LP64__ */
+ 
++#define __ARCH_WANT_SYS_CLONE3
++
+ #include <asm-generic/unistd.h>
+ 
+ /*
+diff --git a/arch/riscv/kernel/syscall_table.c b/arch/riscv/kernel/syscall_table.c
+index a63c667c27b35..44b1420a22705 100644
+--- a/arch/riscv/kernel/syscall_table.c
++++ b/arch/riscv/kernel/syscall_table.c
+@@ -7,7 +7,6 @@
+ #include <linux/linkage.h>
+ #include <linux/syscalls.h>
+ #include <asm-generic/syscalls.h>
+-#include <asm/vdso.h>
+ #include <asm/syscall.h>
+ 
+ #undef __SYSCALL
+diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
+index 25a3b88495991..b70956d804081 100644
+--- a/arch/riscv/kernel/vdso.c
++++ b/arch/riscv/kernel/vdso.c
+@@ -12,14 +12,24 @@
+ #include <linux/binfmts.h>
+ #include <linux/err.h>
+ #include <asm/page.h>
++#include <asm/vdso.h>
++
+ #ifdef CONFIG_GENERIC_TIME_VSYSCALL
+ #include <vdso/datapage.h>
+ #else
+-#include <asm/vdso.h>
++struct vdso_data {
++};
+ #endif
+ 
+ extern char vdso_start[], vdso_end[];
+ 
++enum vvar_pages {
++	VVAR_DATA_PAGE_OFFSET,
++	VVAR_NR_PAGES,
++};
++
++#define VVAR_SIZE  (VVAR_NR_PAGES << PAGE_SHIFT)
++
+ static unsigned int vdso_pages __ro_after_init;
+ static struct page **vdso_pagelist __ro_after_init;
+ 
+@@ -38,7 +48,7 @@ static int __init vdso_init(void)
+ 
+ 	vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
+ 	vdso_pagelist =
+-		kcalloc(vdso_pages + 1, sizeof(struct page *), GFP_KERNEL);
++		kcalloc(vdso_pages + VVAR_NR_PAGES, sizeof(struct page *), GFP_KERNEL);
+ 	if (unlikely(vdso_pagelist == NULL)) {
+ 		pr_err("vdso: pagelist allocation failed\n");
+ 		return -ENOMEM;
+@@ -63,38 +73,41 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
+ 	unsigned long vdso_base, vdso_len;
+ 	int ret;
+ 
+-	vdso_len = (vdso_pages + 1) << PAGE_SHIFT;
++	BUILD_BUG_ON(VVAR_NR_PAGES != __VVAR_PAGES);
++
++	vdso_len = (vdso_pages + VVAR_NR_PAGES) << PAGE_SHIFT;
++
++	if (mmap_write_lock_killable(mm))
++		return -EINTR;
+ 
+-	mmap_write_lock(mm);
+ 	vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
+ 	if (IS_ERR_VALUE(vdso_base)) {
+ 		ret = vdso_base;
+ 		goto end;
+ 	}
+ 
+-	/*
+-	 * Put vDSO base into mm struct. We need to do this before calling
+-	 * install_special_mapping or the perf counter mmap tracking code
+-	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
+-	 */
+-	mm->context.vdso = (void *)vdso_base;
++	mm->context.vdso = NULL;
++	ret = install_special_mapping(mm, vdso_base, VVAR_SIZE,
++		(VM_READ | VM_MAYREAD), &vdso_pagelist[vdso_pages]);
++	if (unlikely(ret))
++		goto end;
+ 
+ 	ret =
+-	   install_special_mapping(mm, vdso_base, vdso_pages << PAGE_SHIFT,
++	   install_special_mapping(mm, vdso_base + VVAR_SIZE,
++		vdso_pages << PAGE_SHIFT,
+ 		(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
+ 		vdso_pagelist);
+ 
+-	if (unlikely(ret)) {
+-		mm->context.vdso = NULL;
++	if (unlikely(ret))
+ 		goto end;
+-	}
+ 
+-	vdso_base += (vdso_pages << PAGE_SHIFT);
+-	ret = install_special_mapping(mm, vdso_base, PAGE_SIZE,
+-		(VM_READ | VM_MAYREAD), &vdso_pagelist[vdso_pages]);
++	/*
++	 * Put vDSO base into mm struct. We need to do this before calling
++	 * install_special_mapping or the perf counter mmap tracking code
++	 * will fail to recognise it as a vDSO (since arch_vma_name fails).
++	 */
++	mm->context.vdso = (void *)vdso_base + VVAR_SIZE;
+ 
+-	if (unlikely(ret))
+-		mm->context.vdso = NULL;
+ end:
+ 	mmap_write_unlock(mm);
+ 	return ret;
+@@ -105,7 +118,7 @@ const char *arch_vma_name(struct vm_area_struct *vma)
+ 	if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
+ 		return "[vdso]";
+ 	if (vma->vm_mm && (vma->vm_start ==
+-			   (long)vma->vm_mm->context.vdso + PAGE_SIZE))
++			   (long)vma->vm_mm->context.vdso - VVAR_SIZE))
+ 		return "[vdso_data]";
+ 	return NULL;
+ }
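After the vdso.c rework above, the VVAR data page sits below the vDSO code: mm->context.vdso points at vdso_base + VVAR_SIZE, so arch_vma_name() recovers the data page as vdso - VVAR_SIZE instead of vdso + PAGE_SIZE. A quick sketch of the address arithmetic (the base address, page size, and page count here are illustrative):

#include <stdio.h>

#define PAGE_SHIFT   12
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define VVAR_PAGES   1
#define VVAR_SIZE    (VVAR_PAGES * PAGE_SIZE)

int main(void)
{
	unsigned long vdso_base = 0x7ffff7ff8000UL; /* from get_unmapped_area() */
	unsigned long vvar_start = vdso_base;       /* VVAR pages mapped first */
	unsigned long vdso_code = vdso_base + VVAR_SIZE;

	printf("[vdso_data] at 0x%lx\n", vvar_start);
	printf("[vdso]      at 0x%lx\n", vdso_code);
	/* arch_vma_name() test: the data page is found below the code */
	printf("vdso - VVAR_SIZE == vvar_start: %d\n",
	       vdso_code - VVAR_SIZE == vvar_start);
	return 0;
}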
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 24d936c147cdf..f8cb9144a2842 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -23,10 +23,10 @@ ifneq ($(c-gettimeofday-y),)
+ endif
+ 
+ # Build rules
+-targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-syms.S
++targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds
+ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+ 
+-obj-y += vdso.o vdso-syms.o
++obj-y += vdso.o
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+ 
+ # Disable -pg to prevent insert call site
+@@ -43,20 +43,22 @@ $(obj)/vdso.o: $(obj)/vdso.so
+ # link rule for the .so file, .lds has to be first
+ $(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
+ 	$(call if_changed,vdsold)
+-LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \
++LDFLAGS_vdso.so.dbg = -shared -S -soname=linux-vdso.so.1 \
+ 	--build-id=sha1 --hash-style=both --eh-frame-hdr
+ 
+-# We also create a special relocatable object that should mirror the symbol
+-# table and layout of the linked DSO. With ld --just-symbols we can then
+-# refer to these symbols in the kernel code rather than hand-coded addresses.
+-$(obj)/vdso-syms.S: $(obj)/vdso.so FORCE
+-	$(call if_changed,so2s)
+-
+ # strip rule for the .so file
+ $(obj)/%.so: OBJCOPYFLAGS := -S
+ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ 	$(call if_changed,objcopy)
+ 
++# Generate VDSO offsets using helper script
++gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh
++quiet_cmd_vdsosym = VDSOSYM $@
++	cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
++
++include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE
++	$(call if_changed,vdsosym)
++
+ # actual build commands
+ # The DSO images are built using a special linker script
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+@@ -65,11 +67,6 @@ quiet_cmd_vdsold = VDSOLD  $@
+                    $(OBJCOPY) $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
+                    rm $@.tmp
+ 
+-# Extracts symbol offsets from the VDSO, converting them into an assembly file
+-# that contains the same symbols at the same offsets.
+-quiet_cmd_so2s = SO2S    $@
+-      cmd_so2s = $(NM) -D $< | $(srctree)/$(src)/so2s.sh > $@
+-
+ # install commands for the unstripped file
+ quiet_cmd_vdso_install = INSTALL $@
+       cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@
+diff --git a/arch/riscv/kernel/vdso/gen_vdso_offsets.sh b/arch/riscv/kernel/vdso/gen_vdso_offsets.sh
+new file mode 100755
+index 0000000000000..c2e5613f34951
+--- /dev/null
++++ b/arch/riscv/kernel/vdso/gen_vdso_offsets.sh
+@@ -0,0 +1,5 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++
++LC_ALL=C
++sed -n -e 's/^[0]\+\(0[0-9a-fA-F]*\) . \(__vdso_[a-zA-Z0-9_]*\)$/\#define \2_offset\t0x\1/p'
+diff --git a/arch/riscv/kernel/vdso/so2s.sh b/arch/riscv/kernel/vdso/so2s.sh
+deleted file mode 100755
+index e64cb6d9440e7..0000000000000
+--- a/arch/riscv/kernel/vdso/so2s.sh
++++ /dev/null
+@@ -1,6 +0,0 @@
+-#!/bin/sh
+-# SPDX-License-Identifier: GPL-2.0+
+-# Copyright 2020 Palmer Dabbelt <palmerdabbelt@google.com>
+-
+-sed 's!\([0-9a-f]*\) T \([a-z0-9_]*\)\(@@LINUX_4.15\)*!.global \2\n.set \2,0x\1!' \
+-| grep '^\.'
+diff --git a/arch/riscv/kernel/vdso/vdso.lds.S b/arch/riscv/kernel/vdso/vdso.lds.S
+index e6f558bca71bb..e9111f700af08 100644
+--- a/arch/riscv/kernel/vdso/vdso.lds.S
++++ b/arch/riscv/kernel/vdso/vdso.lds.S
+@@ -3,12 +3,13 @@
+  * Copyright (C) 2012 Regents of the University of California
+  */
+ #include <asm/page.h>
++#include <asm/vdso.h>
+ 
+ OUTPUT_ARCH(riscv)
+ 
+ SECTIONS
+ {
+-	PROVIDE(_vdso_data = . + PAGE_SIZE);
++	PROVIDE(_vdso_data = . - __VVAR_PAGES * PAGE_SIZE);
+ 	. = SIZEOF_HEADERS;
+ 
+ 	.hash		: { *(.hash) }			:text
+diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
+index 094118663285d..89f81067e09ed 100644
+--- a/arch/riscv/mm/cacheflush.c
++++ b/arch/riscv/mm/cacheflush.c
+@@ -16,6 +16,8 @@ static void ipi_remote_fence_i(void *info)
+ 
+ void flush_icache_all(void)
+ {
++	local_flush_icache_all();
++
+ 	if (IS_ENABLED(CONFIG_RISCV_SBI))
+ 		sbi_remote_fence_i(NULL);
+ 	else
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 840d8594437d5..1a374d021e256 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1826,7 +1826,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
+ 	jit.addrs = kvcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
+ 	if (jit.addrs == NULL) {
+ 		fp = orig_fp;
+-		goto out;
++		goto free_addrs;
+ 	}
+ 	/*
+ 	 * Three initial passes:
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 88fb922c23a0a..51341f2e218de 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1400,7 +1400,7 @@ config HIGHMEM4G
+ 
+ config HIGHMEM64G
+ 	bool "64GB"
+-	depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !WINCHIP3D && !MK6
++	depends on !M486SX && !M486 && !M586 && !M586TSC && !M586MMX && !MGEODE_LX && !MGEODEGX1 && !MCYRIXIII && !MELAN && !MWINCHIPC6 && !MWINCHIP3D && !MK6
+ 	select X86_PAE
+ 	help
+ 	  Select this if you have a 32-bit processor and more than 4
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index 14ebd21965691..43184640b579a 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -25,7 +25,7 @@ static __always_inline void arch_check_user_regs(struct pt_regs *regs)
+ 		 * For !SMAP hardware we patch out CLAC on entry.
+ 		 */
+ 		if (boot_cpu_has(X86_FEATURE_SMAP) ||
+-		    (IS_ENABLED(CONFIG_64_BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
++		    (IS_ENABLED(CONFIG_64BIT) && boot_cpu_has(X86_FEATURE_XENPV)))
+ 			mask |= X86_EFLAGS_AC;
+ 
+ 		WARN_ON_ONCE(flags & mask);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 64b805bd6a542..340caa7aebfba 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -320,6 +320,7 @@ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
+ #ifdef CONFIG_X86_SMAP
+ 		cr4_set_bits(X86_CR4_SMAP);
+ #else
++		clear_cpu_cap(c, X86_FEATURE_SMAP);
+ 		cr4_clear_bits(X86_CR4_SMAP);
+ #endif
+ 	}
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 38837dad46e62..391a4e2b86049 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -714,12 +714,6 @@ static struct chipset early_qrk[] __initdata = {
+ 	 */
+ 	{ PCI_VENDOR_ID_INTEL, 0x0f00,
+ 		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x3e20,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x3ec4,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+-	{ PCI_VENDOR_ID_INTEL, 0x8a12,
+-		PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+ 	{ PCI_VENDOR_ID_BROADCOM, 0x4331,
+ 	  PCI_CLASS_NETWORK_OTHER, PCI_ANY_ID, 0, apple_airport_reset},
+ 	{}
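The early-quirks change above simply drops three entries from a sentinel-terminated table; scanning stops at the all-zero {} entry, so no element count needs updating when entries come and go. A compact model of that table idiom (the struct fields are reduced stand-ins for the kernel's struct chipset):

#include <stdio.h>

struct quirk {
	unsigned short vendor;
	unsigned short device;
	void (*hook)(void);
};

static void force_disable_hpet(void) { puts("HPET force disabled"); }

/* Sentinel-terminated: the all-zero entry marks the end, so entries
 * can be added or removed without maintaining a separate count. */
static const struct quirk early_qrk[] = {
	{ 0x8086, 0x0f00, force_disable_hpet },
	{ }                                     /* sentinel */
};

static void check_quirks(unsigned short vendor, unsigned short device)
{
	for (const struct quirk *q = early_qrk; q->vendor; q++)
		if (q->vendor == vendor && q->device == device)
			q->hook();
}

int main(void)
{
	check_quirks(0x8086, 0x0f00);
	return 0;
}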
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 445c57c9c5397..fa17a27390ab0 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -379,9 +379,14 @@ static int __fpu_restore_sig(void __user *buf, void __user *buf_fx,
+ 				     sizeof(fpu->state.fxsave)))
+ 			return -EFAULT;
+ 
+-		/* Reject invalid MXCSR values. */
+-		if (fpu->state.fxsave.mxcsr & ~mxcsr_feature_mask)
+-			return -EINVAL;
++		if (IS_ENABLED(CONFIG_X86_64)) {
++			/* Reject invalid MXCSR values. */
++			if (fpu->state.fxsave.mxcsr & ~mxcsr_feature_mask)
++				return -EINVAL;
++		} else {
++			/* Mask invalid bits out for historical reasons (broken hardware). */
++			fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
++		}
+ 
+ 		/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
+ 		if (use_xsave())
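The __fpu_restore_sig() change above keeps the strict -EINVAL check on 64-bit but silently masks reserved MXCSR bits on 32-bit, where historically broken hardware and old userspace made rejection too aggressive; note that masking invalid bits out means anding with the feature mask, not its complement. A small model of the two policies (0xffbf is assumed here as a typical mxcsr_feature_mask; the real value is probed at boot):

#include <stdio.h>
#include <stdint.h>

#define MXCSR_FEATURE_MASK 0xffbfu   /* assumed; probed at boot in the kernel */

/* Returns 0 on success, -1 (modelling -EINVAL) on rejection */
static int restore_mxcsr(uint32_t *mxcsr, int is_64bit)
{
	if (is_64bit) {
		if (*mxcsr & ~MXCSR_FEATURE_MASK)
			return -1;              /* reject invalid bits */
	} else {
		*mxcsr &= MXCSR_FEATURE_MASK;   /* mask them out instead */
	}
	return 0;
}

int main(void)
{
	uint32_t bad = 0x1f80u | 0x10000u;  /* default MXCSR plus a reserved bit */
	uint32_t m;

	m = bad;
	printf("64-bit: %s\n", restore_mxcsr(&m, 1) ? "rejected" : "accepted");
	m = bad;
	restore_mxcsr(&m, 0);
	printf("32-bit: masked to 0x%x\n", m);
	return 0;
}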
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 42fc41dd0e1f1..882213df37130 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -10,6 +10,7 @@
+ #include <asm/irq_remapping.h>
+ #include <asm/hpet.h>
+ #include <asm/time.h>
++#include <asm/mwait.h>
+ 
+ #undef  pr_fmt
+ #define pr_fmt(fmt) "hpet: " fmt
+@@ -916,6 +917,83 @@ static bool __init hpet_counting(void)
+ 	return false;
+ }
+ 
++static bool __init mwait_pc10_supported(void)
++{
++	unsigned int eax, ebx, ecx, mwait_substates;
++
++	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++		return false;
++
++	if (!cpu_feature_enabled(X86_FEATURE_MWAIT))
++		return false;
++
++	if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
++		return false;
++
++	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &mwait_substates);
++
++	return (ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) &&
++	       (ecx & CPUID5_ECX_INTERRUPT_BREAK) &&
++	       (mwait_substates & (0xF << 28));
++}
++
++/*
++ * Check whether the system supports PC10. If so force disable HPET as that
++ * stops counting in PC10. This check is overbroad as it does not take any
++ * of the following into account:
++ *
++ *	- ACPI tables
++ *	- Enablement of intel_idle
++ *	- Command line arguments which limit intel_idle C-state support
++ *
++ * That's perfectly fine. HPET is a piece of hardware designed by committee
++ * and the only reason it is still in use on modern systems is that it
++ * is impossible to reliably query the TSC and CPU frequency via
++ * CPUID or firmware.
++ *
++ * If HPET is functional it is useful for calibrating TSC, but this can be
++ * done via PMTIMER as well which seems to be the last remaining timer on
++ * X86/INTEL platforms that has not been completely wrecked by feature
++ * creep.
++ *
++ * In theory HPET support should be removed altogether, but there are older
++ * systems out there which depend on it because TSC and APIC timer are
++ * dysfunctional in deeper C-states.
++ *
++ * It's only 20 years now that hardware people have been asked to provide
++ * reliable and discoverable facilities which can be used for timekeeping
++ * and per CPU timer interrupts.
++ *
++ * The probability that this problem is going to be solved in the
++ * foreseeable future is close to zero, so the kernel has to be cluttered
++ * with heuristics to keep up with the ever growing amount of hardware and
++ * firmware trainwrecks. Hopefully some day hardware people will understand
++ * that the approach of "This can be fixed in software" is not sustainable.
++ * Hope dies last...
++ */
++static bool __init hpet_is_pc10_damaged(void)
++{
++	unsigned long long pcfg;
++
++	/* Check whether PC10 substates are supported */
++	if (!mwait_pc10_supported())
++		return false;
++
++	/* Check whether PC10 is enabled in PKG C-state limit */
++	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, pcfg);
++	if ((pcfg & 0xF) < 8)
++		return false;
++
++	if (hpet_force_user) {
++		pr_warn("HPET force enabled via command line, but dysfunctional in PC10.\n");
++		return false;
++	}
++
++	pr_info("HPET dysfunctional in PC10. Force disabled.\n");
++	boot_hpet_disable = true;
++	return true;
++}
++
+ /**
+  * hpet_enable - Try to setup the HPET timer. Returns 1 on success.
+  */
+@@ -929,6 +1007,9 @@ int __init hpet_enable(void)
+ 	if (!is_hpet_capable())
+ 		return 0;
+ 
++	if (hpet_is_pc10_damaged())
++		return 0;
++
+ 	hpet_set_mapping();
+ 	if (!hpet_virt_address)
+ 		return 0;
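mwait_pc10_supported() above decodes CPUID leaf 5 (MONITOR/MWAIT): ECX must advertise the enumeration extensions and interrupt-break capability, and the top nibble of the fourth output register (EDX, which the kernel's cpuid() helper returns as mwait_substates) carries the substate count for the deepest MWAIT hint. A userspace probe of the same leaf using GCC/Clang's <cpuid.h>, x86-only and purely illustrative; the constant names mirror the kernel's, with values per the SDM:

#include <stdio.h>
#include <cpuid.h>

#define CPUID_MWAIT_LEAF            5
#define CPUID5_ECX_EXT_SUPPORTED    0x1
#define CPUID5_ECX_INTERRUPT_BREAK  0x2

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 5 not available");
		return 1;
	}

	/* Top nibble of EDX: number of MWAIT substates for the deepest
	 * C-state hint, which the HPET quirk uses as the PC10 signal. */
	unsigned int deep_substates = (edx >> 28) & 0xF;

	printf("extensions: %s, interrupt-break: %s, deep substates: %u\n",
	       (ecx & CPUID5_ECX_EXT_SUPPORTED) ? "yes" : "no",
	       (ecx & CPUID5_ECX_INTERRUPT_BREAK) ? "yes" : "no",
	       deep_substates);
	return 0;
}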
+diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
+index 9f90f460a28cc..bf1033a62e480 100644
+--- a/arch/x86/kernel/sev-shared.c
++++ b/arch/x86/kernel/sev-shared.c
+@@ -130,6 +130,8 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+ 		} else {
+ 			ret = ES_VMM_ERROR;
+ 		}
++	} else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
++		ret = ES_VMM_ERROR;
+ 	} else {
+ 		ret = ES_OK;
+ 	}
+diff --git a/arch/x86/platform/olpc/olpc.c b/arch/x86/platform/olpc/olpc.c
+index ee2beda590d0d..1d4a00e767ece 100644
+--- a/arch/x86/platform/olpc/olpc.c
++++ b/arch/x86/platform/olpc/olpc.c
+@@ -274,7 +274,7 @@ static struct olpc_ec_driver ec_xo1_driver = {
+ 
+ static struct olpc_ec_driver ec_xo1_5_driver = {
+ 	.ec_cmd = olpc_xo1_ec_cmd,
+-#ifdef CONFIG_OLPC_XO1_5_SCI
++#ifdef CONFIG_OLPC_XO15_SCI
+ 	/*
+ 	 * XO-1.5 EC wakeups are available when olpc-xo15-sci driver is
+ 	 * compiled in
+diff --git a/arch/xtensa/include/asm/kmem_layout.h b/arch/xtensa/include/asm/kmem_layout.h
+index 7cbf68ca71060..6fc05cba61a27 100644
+--- a/arch/xtensa/include/asm/kmem_layout.h
++++ b/arch/xtensa/include/asm/kmem_layout.h
+@@ -78,7 +78,7 @@
+ #endif
+ #define XCHAL_KIO_SIZE			0x10000000
+ 
+-#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_OF)
++#if (!XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY) && defined(CONFIG_USE_OF)
+ #define XCHAL_KIO_PADDR			xtensa_get_kio_paddr()
+ #ifndef __ASSEMBLY__
+ extern unsigned long xtensa_kio_paddr;
+diff --git a/arch/xtensa/kernel/irq.c b/arch/xtensa/kernel/irq.c
+index a48bf2d10ac2d..80cc9770a8d2d 100644
+--- a/arch/xtensa/kernel/irq.c
++++ b/arch/xtensa/kernel/irq.c
+@@ -145,7 +145,7 @@ unsigned xtensa_get_ext_irq_no(unsigned irq)
+ 
+ void __init init_IRQ(void)
+ {
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 	irqchip_init();
+ #else
+ #ifdef CONFIG_HAVE_SMP
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index ed184106e4cf9..ee9082a142feb 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -63,7 +63,7 @@ extern unsigned long initrd_end;
+ extern int initrd_below_start_ok;
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ void *dtb_start = __dtb_start;
+ #endif
+ 
+@@ -125,7 +125,7 @@ __tagtable(BP_TAG_INITRD, parse_tag_initrd);
+ 
+ #endif /* CONFIG_BLK_DEV_INITRD */
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ static int __init parse_tag_fdt(const bp_tag_t *tag)
+ {
+@@ -135,7 +135,7 @@ static int __init parse_tag_fdt(const bp_tag_t *tag)
+ 
+ __tagtable(BP_TAG_FDT, parse_tag_fdt);
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+ 
+ static int __init parse_tag_cmdline(const bp_tag_t* tag)
+ {
+@@ -183,7 +183,7 @@ static int __init parse_bootparam(const bp_tag_t *tag)
+ }
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ #if !XCHAL_HAVE_PTP_MMU || XCHAL_HAVE_SPANNING_WAY
+ unsigned long xtensa_kio_paddr = XCHAL_KIO_DEFAULT_PADDR;
+@@ -232,7 +232,7 @@ void __init early_init_devtree(void *params)
+ 		strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ }
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+ 
+ /*
+  * Initialize architecture. (Early stage)
+@@ -253,7 +253,7 @@ void __init init_arch(bp_tag_t *bp_start)
+ 	if (bp_start)
+ 		parse_bootparam(bp_start);
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 	early_init_devtree(dtb_start);
+ #endif
+ 
+diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
+index 7e4d97dc8bd8f..38acda4f04e85 100644
+--- a/arch/xtensa/mm/mmu.c
++++ b/arch/xtensa/mm/mmu.c
+@@ -101,7 +101,7 @@ void init_mmu(void)
+ 
+ void init_kio(void)
+ {
+-#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_OF)
++#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY && defined(CONFIG_USE_OF)
+ 	/*
+ 	 * Update the IO area mapping in case xtensa_kio_paddr has changed
+ 	 */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 148a4dd8cb9ac..418ada474a85d 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1468,6 +1468,9 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 	/* Quirks that need to be set based on detected module */
+ 	SYSC_QUIRK("aess", 0, 0, 0x10, -ENODEV, 0x40000000, 0xffffffff,
+ 		   SYSC_MODULE_QUIRK_AESS),
++	/* Errata i893 handling for dra7 dcan1 and 2 */
++	SYSC_QUIRK("dcan", 0x4ae3c000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
++		   SYSC_QUIRK_CLKDM_NOAUTO),
+ 	SYSC_QUIRK("dcan", 0x48480000, 0x20, -ENODEV, -ENODEV, 0xa3170504, 0xffffffff,
+ 		   SYSC_QUIRK_CLKDM_NOAUTO),
+ 	SYSC_QUIRK("dss", 0x4832a000, 0, 0x10, 0x14, 0x00000020, 0xffffffff,
+@@ -2955,6 +2958,7 @@ static int sysc_init_soc(struct sysc *ddata)
+ 			break;
+ 		case SOC_AM3:
+ 			sysc_add_disabled(0x48310000);  /* rng */
++			break;
+ 		default:
+ 			break;
+ 		}
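The SOC_AM3 hunk above is a one-line missing-break fix: without it, the rng-disable case fell through into the default branch. A tiny illustration of the hazard, with hypothetical case names standing in for the ti-sysc SoC switch:

#include <stdio.h>

/* Model of the SoC switch: the AM3 case was missing a break,
 * so control fell through into the default branch as well. */
static void init_soc_buggy(int soc)
{
	switch (soc) {
	case 3:                        /* stands in for SOC_AM3 */
		puts("AM3: disable rng");
		/* missing break: falls through */
	default:
		puts("default path also taken");
		break;
	}
}

static void init_soc_fixed(int soc)
{
	switch (soc) {
	case 3:
		puts("AM3: disable rng");
		break;                 /* the one-line fix */
	default:
		puts("default path");
		break;
	}
}

int main(void)
{
	puts("buggy:"); init_soc_buggy(3);
	puts("fixed:"); init_soc_fixed(3);
	return 0;
}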
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 177a663a6a691..a1c5bd2859fc3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1082,6 +1082,7 @@ struct amdgpu_device {
+ 
+ 	bool                            no_hw_access;
+ 	struct pci_saved_state          *pci_state;
++	pci_channel_state_t		pci_channel_state;
+ 
+ 	struct amdgpu_reset_control     *reset_cntl;
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 4fb15750b9bb4..b18c0697356c1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -563,6 +563,7 @@ kfd_mem_dmaunmap_userptr(struct kgd_mem *mem,
+ 
+ 	dma_unmap_sgtable(adev->dev, ttm->sg, direction, 0);
+ 	sg_free_table(ttm->sg);
++	kfree(ttm->sg);
+ 	ttm->sg = NULL;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d3247a5cceb4c..d60096b3b2c2a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5329,6 +5329,8 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev, pci_channel_sta
+ 		return PCI_ERS_RESULT_DISCONNECT;
+ 	}
+ 
++	adev->pci_channel_state = state;
++
+ 	switch (state) {
+ 	case pci_channel_io_normal:
+ 		return PCI_ERS_RESULT_CAN_RECOVER;
+@@ -5471,6 +5473,10 @@ void amdgpu_pci_resume(struct pci_dev *pdev)
+ 
+ 	DRM_INFO("PCI error: resume callback!!\n");
+ 
++	/* Only continue execution for the case of pci_channel_io_frozen */
++	if (adev->pci_channel_state != pci_channel_io_frozen)
++		return;
++
+ 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+ 		struct amdgpu_ring *ring = adev->rings[i];
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index b4ced45301bec..1795d448c7000 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -31,6 +31,8 @@
+ /* delay 0.1 second to enable gfx off feature */
+ #define GFX_OFF_DELAY_ENABLE         msecs_to_jiffies(100)
+ 
++#define GFX_OFF_NO_DELAY 0
++
+ /*
+  * GPU GFX IP block helpers function.
+  */
+@@ -558,6 +560,8 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev)
+ 
+ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
+ {
++	unsigned long delay = GFX_OFF_DELAY_ENABLE;
++
+ 	if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
+ 		return;
+ 
+@@ -573,8 +577,14 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
+ 
+ 		adev->gfx.gfx_off_req_count--;
+ 
+-		if (adev->gfx.gfx_off_req_count == 0 && !adev->gfx.gfx_off_state)
+-			schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
++		if (adev->gfx.gfx_off_req_count == 0 &&
++		    !adev->gfx.gfx_off_state) {
++			/* If going to s2idle, no need to wait */
++			if (adev->in_s0ix)
++				delay = GFX_OFF_NO_DELAY;
++			schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
++					      delay);
++		}
+ 	} else {
+ 		if (adev->gfx.gfx_off_req_count == 0) {
+ 			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
+index d8b22618b79e8..c337588231ff0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
+@@ -118,6 +118,7 @@ struct dcn10_link_enc_registers {
+ 	uint32_t RDPCSTX_PHY_CNTL4;
+ 	uint32_t RDPCSTX_PHY_CNTL5;
+ 	uint32_t RDPCSTX_PHY_CNTL6;
++	uint32_t RDPCSPIPE_PHY_CNTL6;
+ 	uint32_t RDPCSTX_PHY_CNTL7;
+ 	uint32_t RDPCSTX_PHY_CNTL8;
+ 	uint32_t RDPCSTX_PHY_CNTL9;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
+index 90127c1f9e35d..b0892443fbd57 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
+@@ -37,6 +37,7 @@
+ 
+ #include "link_enc_cfg.h"
+ #include "dc_dmub_srv.h"
++#include "dal_asic_id.h"
+ 
+ #define CTX \
+ 	enc10->base.ctx
+@@ -62,6 +63,10 @@
+ #define AUX_REG_WRITE(reg_name, val) \
+ 			dm_write_reg(CTX, AUX_REG(reg_name), val)
+ 
++#ifndef MIN
++#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
++#endif
++
+ void dcn31_link_encoder_set_dio_phy_mux(
+ 	struct link_encoder *enc,
+ 	enum encoder_type_select sel,
+@@ -215,8 +220,8 @@ static const struct link_encoder_funcs dcn31_link_enc_funcs = {
+ 	.fec_is_active = enc2_fec_is_active,
+ 	.get_dig_frontend = dcn10_get_dig_frontend,
+ 	.get_dig_mode = dcn10_get_dig_mode,
+-	.is_in_alt_mode = dcn20_link_encoder_is_in_alt_mode,
+-	.get_max_link_cap = dcn20_link_encoder_get_max_link_cap,
++	.is_in_alt_mode = dcn31_link_encoder_is_in_alt_mode,
++	.get_max_link_cap = dcn31_link_encoder_get_max_link_cap,
+ 	.set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
+ };
+ 
+@@ -404,3 +409,60 @@ void dcn31_link_encoder_disable_output(
+ 	}
+ }
+ 
++bool dcn31_link_encoder_is_in_alt_mode(struct link_encoder *enc)
++{
++	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
++	uint32_t dp_alt_mode_disable;
++	bool is_usb_c_alt_mode = false;
++
++	if (enc->features.flags.bits.DP_IS_USB_C) {
++		if (enc->ctx->asic_id.hw_internal_rev != YELLOW_CARP_B0) {
++			// [Note] no need to check hw_internal_rev once phy mux selection is ready
++			REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
++		} else {
++		/*
++		 * B0 phys use a new set of registers to check whether alt mode is disabled.
++		 * If the value == 1, alt mode is disabled; otherwise it is enabled.
++		 */
++			if ((enc10->base.transmitter == TRANSMITTER_UNIPHY_A)
++					|| (enc10->base.transmitter == TRANSMITTER_UNIPHY_B)
++					|| (enc10->base.transmitter == TRANSMITTER_UNIPHY_E)) {
++				REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
++			} else {
++			// [Note] need to change TRANSMITTER_UNIPHY_C/D to F/G once phy mux selection is ready
++				REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
++			}
++		}
++
++		is_usb_c_alt_mode = (dp_alt_mode_disable == 0);
++	}
++
++	return is_usb_c_alt_mode;
++}
++
++void dcn31_link_encoder_get_max_link_cap(struct link_encoder *enc,
++										 struct dc_link_settings *link_settings)
++{
++	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
++	uint32_t is_in_usb_c_dp4_mode = 0;
++
++	dcn10_link_encoder_get_max_link_cap(enc, link_settings);
++
++	/* in usb c dp2 mode, max lane count is 2 */
++	if (enc->funcs->is_in_alt_mode && enc->funcs->is_in_alt_mode(enc)) {
++		if (enc->ctx->asic_id.hw_internal_rev != YELLOW_CARP_B0) {
++			// [Note] no need to check hw_internal_rev once phy mux selection is ready
++			REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
++		} else {
++			if ((enc10->base.transmitter == TRANSMITTER_UNIPHY_A)
++					|| (enc10->base.transmitter == TRANSMITTER_UNIPHY_B)
++					|| (enc10->base.transmitter == TRANSMITTER_UNIPHY_E)) {
++				REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
++			} else {
++				REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
++			}
++		}
++		if (!is_in_usb_c_dp4_mode)
++			link_settings->lane_count = MIN(LANE_COUNT_TWO, link_settings->lane_count);
++	}
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.h
+index 32d146312838b..3454f1e7c1f17 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.h
+@@ -69,6 +69,7 @@
+ 	SRI(RDPCSTX_PHY_CNTL4, RDPCSTX, id), \
+ 	SRI(RDPCSTX_PHY_CNTL5, RDPCSTX, id), \
+ 	SRI(RDPCSTX_PHY_CNTL6, RDPCSTX, id), \
++	SRI(RDPCSPIPE_PHY_CNTL6, RDPCSPIPE, id), \
+ 	SRI(RDPCSTX_PHY_CNTL7, RDPCSTX, id), \
+ 	SRI(RDPCSTX_PHY_CNTL8, RDPCSTX, id), \
+ 	SRI(RDPCSTX_PHY_CNTL9, RDPCSTX, id), \
+@@ -115,7 +116,9 @@
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX2_MPLL_EN, mask_sh),\
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DP_TX3_MPLL_EN, mask_sh),\
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, mask_sh),\
+-	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, mask_sh),\
++	LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, mask_sh),\
++	LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, mask_sh),\
++	LE_SF(RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE_ACK, mask_sh),\
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_QUOT, mask_sh),\
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL7, RDPCS_PHY_DP_MPLLB_FRACN_DEN, mask_sh),\
+ 	LE_SF(RDPCSTX0_RDPCSTX_PHY_CNTL8, RDPCS_PHY_DP_MPLLB_SSC_PEAK, mask_sh),\
+@@ -243,4 +246,13 @@ void dcn31_link_encoder_disable_output(
+ 	struct link_encoder *enc,
+ 	enum signal_type signal);
+ 
++/*
++ * Check whether USB-C DP Alt mode is disabled
++ */
++bool dcn31_link_encoder_is_in_alt_mode(
++	struct link_encoder *enc);
++
++void dcn31_link_encoder_get_max_link_cap(struct link_encoder *enc,
++	struct dc_link_settings *link_settings);
++
+ #endif /* __DC_LINK_ENCODER__DCN31_H__ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index cd3248dc31d87..7ea362a864c43 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -928,7 +928,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.disable_dcc = DCC_ENABLE,
+ 	.vsr_support = true,
+ 	.performance_trace = false,
+-	.max_downscale_src_width = 7680,/*upto 8K*/
++	.max_downscale_src_width = 3840,/*upto 4K*/
+ 	.disable_pplib_wm_range = false,
+ 	.scl_reset_length10 = true,
+ 	.sanity_checks = false,
+@@ -1284,6 +1284,12 @@ static struct stream_encoder *dcn31_stream_encoder_create(
+ 	if (!enc1 || !vpg || !afmt)
+ 		return NULL;
+ 
++	if (ctx->asic_id.chip_family == FAMILY_YELLOW_CARP &&
++			ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
++		if ((eng_id == ENGINE_ID_DIGC) || (eng_id == ENGINE_ID_DIGD))
++			eng_id = eng_id + 3; // For B0 only. C->F, D->G.
++	}
++
+ 	dcn30_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios,
+ 					eng_id, vpg, afmt,
+ 					&stream_enc_regs[eng_id],
+diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+index 381c17caace18..5adc471bef57f 100644
+--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
++++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+@@ -227,7 +227,7 @@ enum {
+ #define FAMILY_YELLOW_CARP                     146
+ 
+ #define YELLOW_CARP_A0 0x01
+-#define YELLOW_CARP_B0 0x02		// TODO: DCN31 - update with correct B0 ID
++#define YELLOW_CARP_B0 0x1A
+ #define YELLOW_CARP_UNKNOWN 0xFF
+ 
+ #ifndef ASICREV_IS_YELLOW_CARP
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_4_2_0_offset.h b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_4_2_0_offset.h
+index 92caf8441d1e0..01a56556cde13 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_4_2_0_offset.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/dpcs/dpcs_4_2_0_offset.h
+@@ -11932,5 +11932,32 @@
+ #define ixDPCSSYS_CR4_RAWLANEX_DIG_PCS_XF_RX_OVRD_OUT_2                                                0xe0c7
+ #define ixDPCSSYS_CR4_RAWLANEX_DIG_PCS_XF_TX_OVRD_IN_2                                                 0xe0c8
+ 
++//RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                            0x10
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                        0x11
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                    0x12
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                              0x00010000L
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                          0x00020000L
++#define RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                      0x00040000L
++
++//RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4__SHIFT                                            0x10
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE__SHIFT                                        0x11
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK__SHIFT                                    0x12
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DP4_MASK                                              0x00010000L
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_MASK                                          0x00020000L
++#define RDPCSPIPE1_RDPCSPIPE_PHY_CNTL6__RDPCS_PHY_DPALT_DISABLE_ACK_MASK                                      0x00040000L
++
++//[Note] Hack. RDPCSPIPE only has 2 instances.
++#define regRDPCSPIPE0_RDPCSPIPE_PHY_CNTL6                                                              0x2d73
++#define regRDPCSPIPE0_RDPCSPIPE_PHY_CNTL6_BASE_IDX                                                     2
++#define regRDPCSPIPE1_RDPCSPIPE_PHY_CNTL6                                                              0x2e4b
++#define regRDPCSPIPE1_RDPCSPIPE_PHY_CNTL6_BASE_IDX                                                     2
++#define regRDPCSPIPE2_RDPCSPIPE_PHY_CNTL6                                                              0x2d73
++#define regRDPCSPIPE2_RDPCSPIPE_PHY_CNTL6_BASE_IDX                                                     2
++#define regRDPCSPIPE3_RDPCSPIPE_PHY_CNTL6                                                              0x2e4b
++#define regRDPCSPIPE3_RDPCSPIPE_PHY_CNTL6_BASE_IDX                                                     2
++#define regRDPCSPIPE4_RDPCSPIPE_PHY_CNTL6                                                              0x2d73
++#define regRDPCSPIPE4_RDPCSPIPE_PHY_CNTL6_BASE_IDX                                                     2
+ 
+ #endif
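
As an aside on how these <FIELD>__SHIFT / <FIELD>_MASK pairs are consumed: a
field is isolated by masking the raw register value and shifting it down. A
minimal, compilable sketch (plain C; the register value is hypothetical and
the constants are inlined from the defines above):

    #include <stdint.h>
    #include <stdio.h>

    /* mask-and-shift, the way the asic_reg headers intend */
    #define FIELD_GET_U32(val, mask, shift) (((val) & (mask)) >> (shift))

    int main(void)
    {
        /* hypothetical raw readout of RDPCSPIPE0_RDPCSPIPE_PHY_CNTL6 */
        uint32_t cntl6 = 0x00020000;

        uint32_t dpalt_disable =
            FIELD_GET_U32(cntl6,
                          0x00020000u, /* ..._DPALT_DISABLE_MASK */
                          0x11);       /* ..._DPALT_DISABLE__SHIFT */

        printf("RDPCS_PHY_DPALT_DISABLE = %u\n", dpalt_disable); /* prints 1 */
        return 0;
    }
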
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 16812488c5ddc..13bafa9d49c01 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1253,15 +1253,36 @@ static void gen11_dsi_pre_enable(struct intel_atomic_state *state,
+ 	gen11_dsi_set_transcoder_timings(encoder, pipe_config);
+ }
+ 
++/*
++ * Wa_1409054076:icl,jsl,ehl
++ * When pipe A is disabled and MIPI DSI is enabled on pipe B,
++ * the AMT KVMR feature will incorrectly see pipe A as enabled.
++ * Set 0x42080 bit 23=1 before enabling DSI on pipe B and leave
++ * it set while DSI is enabled on pipe B
++ */
++static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder,
++				     enum pipe pipe, bool enable)
++{
++	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++
++	if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B)
++		intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
++			     IGNORE_KVMR_PIPE_A,
++			     enable ? IGNORE_KVMR_PIPE_A : 0);
++}
+ static void gen11_dsi_enable(struct intel_atomic_state *state,
+ 			     struct intel_encoder *encoder,
+ 			     const struct intel_crtc_state *crtc_state,
+ 			     const struct drm_connector_state *conn_state)
+ {
+ 	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
++	struct intel_crtc *crtc = to_intel_crtc(conn_state->crtc);
+ 
+ 	drm_WARN_ON(state->base.dev, crtc_state->has_pch_encoder);
+ 
++	/* Wa_1409054076:icl,jsl,ehl */
++	icl_apply_kvmr_pipe_a_wa(encoder, crtc->pipe, true);
++
+ 	/* step6d: enable dsi transcoder */
+ 	gen11_dsi_enable_transcoder(encoder);
+ 
+@@ -1415,6 +1436,7 @@ static void gen11_dsi_disable(struct intel_atomic_state *state,
+ 			      const struct drm_connector_state *old_conn_state)
+ {
+ 	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
++	struct intel_crtc *crtc = to_intel_crtc(old_conn_state->crtc);
+ 
+ 	/* step1: turn off backlight */
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_OFF);
+@@ -1423,6 +1445,9 @@ static void gen11_dsi_disable(struct intel_atomic_state *state,
+ 	/* step2d,e: disable transcoder and wait */
+ 	gen11_dsi_disable_transcoder(encoder);
+ 
++	/* Wa_1409054076:icl,jsl,ehl */
++	icl_apply_kvmr_pipe_a_wa(encoder, crtc->pipe, false);
++
+ 	/* step2f,g: powerdown panel */
+ 	gen11_dsi_powerdown_panel(encoder);
+ 
+@@ -1548,6 +1573,28 @@ static void gen11_dsi_get_config(struct intel_encoder *encoder,
+ 		pipe_config->mode_flags |= I915_MODE_FLAG_DSI_PERIODIC_CMD_MODE;
+ }
+ 
++static void gen11_dsi_sync_state(struct intel_encoder *encoder,
++				 const struct intel_crtc_state *crtc_state)
++{
++	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++	struct intel_crtc *intel_crtc;
++	enum pipe pipe;
++
++	if (!crtc_state)
++		return;
++
++	intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
++	pipe = intel_crtc->pipe;
++
++	/* wa verify 1409054076:icl,jsl,ehl */
++	if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B &&
++	    !(intel_de_read(dev_priv, CHICKEN_PAR1_1) & IGNORE_KVMR_PIPE_A))
++		drm_dbg_kms(&dev_priv->drm,
++			    "[ENCODER:%d:%s] BIOS left IGNORE_KVMR_PIPE_A cleared with pipe B enabled\n",
++			    encoder->base.base.id,
++			    encoder->base.name);
++}
++
+ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
+ 					struct intel_crtc_state *crtc_state)
+ {
+@@ -1966,6 +2013,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
+ 	encoder->post_disable = gen11_dsi_post_disable;
+ 	encoder->port = port;
+ 	encoder->get_config = gen11_dsi_get_config;
++	encoder->sync_state = gen11_dsi_sync_state;
+ 	encoder->update_pipe = intel_panel_update_backlight;
+ 	encoder->compute_config = gen11_dsi_compute_config;
+ 	encoder->get_hw_state = gen11_dsi_get_hw_state;
+diff --git a/drivers/gpu/drm/i915/display/intel_audio.c b/drivers/gpu/drm/i915/display/intel_audio.c
+index 5f4f316b3ab5c..4e4429535f9e4 100644
+--- a/drivers/gpu/drm/i915/display/intel_audio.c
++++ b/drivers/gpu/drm/i915/display/intel_audio.c
+@@ -1308,8 +1308,9 @@ static void i915_audio_component_init(struct drm_i915_private *dev_priv)
+ 		else
+ 			aud_freq = aud_freq_init;
+ 
+-		/* use BIOS provided value for TGL unless it is a known bad value */
+-		if (IS_TIGERLAKE(dev_priv) && aud_freq_init != AUD_FREQ_TGL_BROKEN)
++		/* use BIOS provided value for TGL and RKL unless it is a known bad value */
++		if ((IS_TIGERLAKE(dev_priv) || IS_ROCKETLAKE(dev_priv)) &&
++		    aud_freq_init != AUD_FREQ_TGL_BROKEN)
+ 			aud_freq = aud_freq_init;
+ 
+ 		drm_dbg_kms(&dev_priv->drm, "use AUD_FREQ_CNTRL of 0x%x (init value 0x%x)\n",
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index aa667fa711584..106f696e50a0e 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -451,13 +451,23 @@ parse_lfp_backlight(struct drm_i915_private *i915,
+ 	}
+ 
+ 	i915->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI;
+-	if (bdb->version >= 191 &&
+-	    get_blocksize(backlight_data) >= sizeof(*backlight_data)) {
+-		const struct lfp_backlight_control_method *method;
++	if (bdb->version >= 191) {
++		size_t exp_size;
+ 
+-		method = &backlight_data->backlight_control[panel_type];
+-		i915->vbt.backlight.type = method->type;
+-		i915->vbt.backlight.controller = method->controller;
++		if (bdb->version >= 236)
++			exp_size = sizeof(struct bdb_lfp_backlight_data);
++		else if (bdb->version >= 234)
++			exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_234;
++		else
++			exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_191;
++
++		if (get_blocksize(backlight_data) >= exp_size) {
++			const struct lfp_backlight_control_method *method;
++
++			method = &backlight_data->backlight_control[panel_type];
++			i915->vbt.backlight.type = method->type;
++			i915->vbt.backlight.controller = method->controller;
++		}
+ 	}
+ 
+ 	i915->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz;
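
The version-gated size check above leans on the fact that newer VBT revisions
only append fields to the backlight block, so the expected size of an older
block is simply the offset of the first field it lacks. A self-contained
sketch of that idiom (the struct layout here is a stand-in, not the real
bdb_lfp_backlight_data):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* fields appended over revisions: old blocks are prefixes of new ones */
    struct bl_data {
        uint8_t entry_size;
        uint8_t data[16];                   /* present since rev 191 */
        uint8_t brightness_level;           /* added in rev 234 */
        uint8_t brightness_precision_bits;  /* added in rev 236 */
    };

    static size_t expected_size(int version)
    {
        if (version >= 236)
            return sizeof(struct bl_data);
        if (version >= 234)
            return offsetof(struct bl_data, brightness_precision_bits);
        return offsetof(struct bl_data, brightness_level);
    }

    int main(void)
    {
        printf("rev 191: %zu bytes, rev 234: %zu, rev 236: %zu\n",
               expected_size(191), expected_size(234), expected_size(236));
        return 0;
    }
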
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 00dade49665b8..89a109f65f389 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3899,7 +3899,13 @@ void hsw_ddi_get_config(struct intel_encoder *encoder,
+ static void intel_ddi_sync_state(struct intel_encoder *encoder,
+ 				 const struct intel_crtc_state *crtc_state)
+ {
+-	if (intel_crtc_has_dp_encoder(crtc_state))
++	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++	enum phy phy = intel_port_to_phy(i915, encoder->port);
++
++	if (intel_phy_is_tc(i915, phy))
++		intel_tc_port_sanitize(enc_to_dig_port(encoder));
++
++	if (crtc_state && intel_crtc_has_dp_encoder(crtc_state))
+ 		intel_dp_sync_state(encoder, crtc_state);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 0a8a2395c8aca..bb1d2b19be151 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -12933,18 +12933,16 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
+ 	readout_plane_state(dev_priv);
+ 
+ 	for_each_intel_encoder(dev, encoder) {
++		struct intel_crtc_state *crtc_state = NULL;
++
+ 		pipe = 0;
+ 
+ 		if (encoder->get_hw_state(encoder, &pipe)) {
+-			struct intel_crtc_state *crtc_state;
+-
+ 			crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
+ 			crtc_state = to_intel_crtc_state(crtc->base.state);
+ 
+ 			encoder->base.crtc = &crtc->base;
+ 			intel_encoder_get_config(encoder, crtc_state);
+-			if (encoder->sync_state)
+-				encoder->sync_state(encoder, crtc_state);
+ 
+ 			/* read out to slave crtc as well for bigjoiner */
+ 			if (crtc_state->bigjoiner) {
+@@ -12959,6 +12957,9 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
+ 			encoder->base.crtc = NULL;
+ 		}
+ 
++		if (encoder->sync_state)
++			encoder->sync_state(encoder, crtc_state);
++
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "[ENCODER:%d:%s] hw state readout: %s, pipe %c\n",
+ 			    encoder->base.base.id, encoder->base.name,
+@@ -13241,17 +13242,6 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
+ 	intel_modeset_readout_hw_state(dev);
+ 
+ 	/* HW state is read out, now we need to sanitize this mess. */
+-
+-	/* Sanitize the TypeC port mode upfront, encoders depend on this */
+-	for_each_intel_encoder(dev, encoder) {
+-		enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
+-
+-		/* We need to sanitize only the MST primary port. */
+-		if (encoder->type != INTEL_OUTPUT_DP_MST &&
+-		    intel_phy_is_tc(dev_priv, phy))
+-			intel_tc_port_sanitize(enc_to_dig_port(encoder));
+-	}
+-
+ 	get_encoder_power_domains(dev_priv);
+ 
+ 	if (HAS_PCH_IBX(dev_priv))
+diff --git a/drivers/gpu/drm/i915/display/intel_vbt_defs.h b/drivers/gpu/drm/i915/display/intel_vbt_defs.h
+index dbe24d7e73759..cf1ffe4a0e46a 100644
+--- a/drivers/gpu/drm/i915/display/intel_vbt_defs.h
++++ b/drivers/gpu/drm/i915/display/intel_vbt_defs.h
+@@ -814,6 +814,11 @@ struct lfp_brightness_level {
+ 	u16 reserved;
+ } __packed;
+ 
++#define EXP_BDB_LFP_BL_DATA_SIZE_REV_191 \
++	offsetof(struct bdb_lfp_backlight_data, brightness_level)
++#define EXP_BDB_LFP_BL_DATA_SIZE_REV_234 \
++	offsetof(struct bdb_lfp_backlight_data, brightness_precision_bits)
++
+ struct bdb_lfp_backlight_data {
+ 	u8 entry_size;
+ 	struct lfp_backlight_data_entry data[16];
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+index e382b7f2353b8..5ab136ffdeb2d 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+@@ -118,7 +118,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww,
+ 	intel_wakeref_t wakeref = 0;
+ 	unsigned long count = 0;
+ 	unsigned long scanned = 0;
+-	int err;
++	int err = 0;
+ 
+ 	/* CHV + VTD workaround use stop_machine(); need to trylock vm->mutex */
+ 	bool trylock_vm = !ww && intel_vm_no_concurrent_access_wa(i915);
+@@ -242,12 +242,15 @@ skip:
+ 		list_splice_tail(&still_in_list, phase->list);
+ 		spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+ 		if (err)
+-			return err;
++			break;
+ 	}
+ 
+ 	if (shrink & I915_SHRINK_BOUND)
+ 		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+ 
++	if (err)
++		return err;
++
+ 	if (nr_scanned)
+ 		*nr_scanned += scanned;
+ 	return count;
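
The shrinker fix above replaces an early return with a break so that the
runtime-PM wakeref taken under I915_SHRINK_BOUND is always dropped before the
error is propagated. Reduced to a self-contained sketch (all names and the
failure injection are illustrative):

    #include <stdio.h>

    enum { NUM_PHASES = 3 };

    static int resource_held;

    static void get_resource(void) { resource_held = 1; }
    static void put_resource(void) { resource_held = 0; }

    static int shrink_one_phase(int phase, long *count)
    {
        (*count)++;
        return phase == 1 ? -1 : 0;  /* simulate a mid-loop failure */
    }

    static long shrink_phases(void)
    {
        long count = 0;
        int err = 0;

        get_resource();
        for (int phase = 0; phase < NUM_PHASES; phase++) {
            err = shrink_one_phase(phase, &count);
            if (err)
                break;  /* was: return err, which leaked the resource */
        }
        put_resource();  /* now runs on every path */

        return err ? err : count;
    }

    int main(void)
    {
        long ret = shrink_phases();
        printf("ret=%ld, resource still held: %d\n", ret, resource_held);
        return 0;
    }
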
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 476bb3b9ad11a..5aa5ddefd22d2 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -8113,6 +8113,7 @@ enum {
+ # define CHICKEN3_DGMG_DONE_FIX_DISABLE		(1 << 2)
+ 
+ #define CHICKEN_PAR1_1			_MMIO(0x42080)
++#define  IGNORE_KVMR_PIPE_A		REG_BIT(23)
+ #define  KBL_ARB_FILL_SPARE_22		REG_BIT(22)
+ #define  DIS_RAM_BYPASS_PSR2_MAN_TRACK	(1 << 16)
+ #define  SKL_DE_COMPRESSED_HASH_MODE	(1 << 15)
+@@ -8150,6 +8151,11 @@ enum {
+ #define  HSW_SPR_STRETCH_MAX_X1		REG_FIELD_PREP(HSW_SPR_STRETCH_MAX_MASK, 3)
+ #define  HSW_FBCQ_DIS			(1 << 22)
+ #define  BDW_DPRS_MASK_VBLANK_SRD	(1 << 0)
++#define  SKL_PLANE1_STRETCH_MAX_MASK	REG_GENMASK(1, 0)
++#define  SKL_PLANE1_STRETCH_MAX_X8	REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 0)
++#define  SKL_PLANE1_STRETCH_MAX_X4	REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 1)
++#define  SKL_PLANE1_STRETCH_MAX_X2	REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 2)
++#define  SKL_PLANE1_STRETCH_MAX_X1	REG_FIELD_PREP(SKL_PLANE1_STRETCH_MAX_MASK, 3)
+ #define CHICKEN_PIPESL_1(pipe) _MMIO_PIPE(pipe, _CHICKEN_PIPESL_1_A, _CHICKEN_PIPESL_1_B)
+ 
+ #define _CHICKEN_TRANS_A	0x420c0
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 45fefa0ed1607..28959e67c00ee 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -76,6 +76,8 @@ struct intel_wm_config {
+ 
+ static void gen9_init_clock_gating(struct drm_i915_private *dev_priv)
+ {
++	enum pipe pipe;
++
+ 	if (HAS_LLC(dev_priv)) {
+ 		/*
+ 		 * WaCompressedResourceDisplayNewHashMode:skl,kbl
+@@ -89,6 +91,16 @@ static void gen9_init_clock_gating(struct drm_i915_private *dev_priv)
+ 			   SKL_DE_COMPRESSED_HASH_MODE);
+ 	}
+ 
++	for_each_pipe(dev_priv, pipe) {
++		/*
++		 * "Plane N stretch max must be programmed to 11b (x1)
++		 *  when Async flips are enabled on that plane."
++		 */
++		if (!IS_GEMINILAKE(dev_priv) && intel_vtd_active())
++			intel_uncore_rmw(&dev_priv->uncore, CHICKEN_PIPESL_1(pipe),
++					 SKL_PLANE1_STRETCH_MAX_MASK, SKL_PLANE1_STRETCH_MAX_X1);
++	}
++
+ 	/* See Bspec note for PSR2_CTL bit 31, Wa#828:skl,bxt,kbl,cfl */
+ 	intel_uncore_write(&dev_priv->uncore, CHICKEN_PAR1_1,
+ 		   intel_uncore_read(&dev_priv->uncore, CHICKEN_PAR1_1) | SKL_EDP_PSR_FIX_RDWRAP);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+index b8c31b697797e..66f32d965c723 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/crc.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+@@ -704,6 +704,7 @@ static const struct file_operations nv50_crc_flip_threshold_fops = {
+ 	.open = nv50_crc_debugfs_flip_threshold_open,
+ 	.read = seq_read,
+ 	.write = nv50_crc_debugfs_flip_threshold_set,
++	.release = single_release,
+ };
+ 
+ int nv50_head_crc_late_register(struct nv50_head *head)
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index d66f97280282a..72099d1e48169 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -52,6 +52,7 @@ nv50_head_flush_clr(struct nv50_head *head,
+ void
+ nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh)
+ {
++	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.olut   ) {
+ 		asyh->olut.offset = nv50_lut_load(&head->olut,
+ 						  asyh->olut.buffer,
+@@ -67,7 +68,6 @@ nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
+ 	if (asyh->set.view   ) head->func->view    (head, asyh);
+ 	if (asyh->set.mode   ) head->func->mode    (head, asyh);
+ 	if (asyh->set.core   ) head->func->core_set(head, asyh);
+-	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.base   ) head->func->base    (head, asyh);
+ 	if (asyh->set.ovly   ) head->func->ovly    (head, asyh);
+ 	if (asyh->set.dither ) head->func->dither  (head, asyh);
+diff --git a/drivers/gpu/drm/nouveau/include/nvif/class.h b/drivers/gpu/drm/nouveau/include/nvif/class.h
+index c68cc957248e2..a582c0cb0cb0d 100644
+--- a/drivers/gpu/drm/nouveau/include/nvif/class.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/class.h
+@@ -71,6 +71,7 @@
+ #define PASCAL_CHANNEL_GPFIFO_A                       /* cla06f.h */ 0x0000c06f
+ #define VOLTA_CHANNEL_GPFIFO_A                        /* clc36f.h */ 0x0000c36f
+ #define TURING_CHANNEL_GPFIFO_A                       /* clc36f.h */ 0x0000c46f
++#define AMPERE_CHANNEL_GPFIFO_B                       /* clc36f.h */ 0x0000c76f
+ 
+ #define NV50_DISP                                     /* cl5070.h */ 0x00005070
+ #define G82_DISP                                      /* cl5070.h */ 0x00008270
+@@ -200,6 +201,7 @@
+ #define PASCAL_DMA_COPY_B                                            0x0000c1b5
+ #define VOLTA_DMA_COPY_A                                             0x0000c3b5
+ #define TURING_DMA_COPY_A                                            0x0000c5b5
++#define AMPERE_DMA_COPY_B                                            0x0000c7b5
+ 
+ #define FERMI_DECOMPRESS                                             0x000090b8
+ 
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h b/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
+index 54fab7cc36c1b..64ee82c7c1be5 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
+@@ -77,4 +77,5 @@ int gp100_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct
+ int gp10b_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
+ int gv100_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
+ int tu102_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
++int ga102_fifo_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fifo **);
+ #endif
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 6d07e653f82d5..c58bcdba2c7aa 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -844,6 +844,7 @@ nouveau_bo_move_init(struct nouveau_drm *drm)
+ 			    struct ttm_resource *, struct ttm_resource *);
+ 		int (*init)(struct nouveau_channel *, u32 handle);
+ 	} _methods[] = {
++		{  "COPY", 4, 0xc7b5, nve0_bo_move_copy, nve0_bo_move_init },
+ 		{  "COPY", 4, 0xc5b5, nve0_bo_move_copy, nve0_bo_move_init },
+ 		{  "GRCE", 0, 0xc5b5, nve0_bo_move_copy, nvc0_bo_move_init },
+ 		{  "COPY", 4, 0xc3b5, nve0_bo_move_copy, nve0_bo_move_init },
+diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
+index 80099ef757022..ea7769135b0dc 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
++++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
+@@ -250,7 +250,8 @@ static int
+ nouveau_channel_ind(struct nouveau_drm *drm, struct nvif_device *device,
+ 		    u64 runlist, bool priv, struct nouveau_channel **pchan)
+ {
+-	static const u16 oclasses[] = { TURING_CHANNEL_GPFIFO_A,
++	static const u16 oclasses[] = { AMPERE_CHANNEL_GPFIFO_B,
++					TURING_CHANNEL_GPFIFO_A,
+ 					VOLTA_CHANNEL_GPFIFO_A,
+ 					PASCAL_CHANNEL_GPFIFO_A,
+ 					MAXWELL_CHANNEL_GPFIFO_A,
+@@ -386,7 +387,8 @@ nouveau_channel_init(struct nouveau_channel *chan, u32 vram, u32 gart)
+ 
+ 	nvif_object_map(&chan->user, NULL, 0);
+ 
+-	if (chan->user.oclass >= FERMI_CHANNEL_GPFIFO) {
++	if (chan->user.oclass >= FERMI_CHANNEL_GPFIFO &&
++	    chan->user.oclass < AMPERE_CHANNEL_GPFIFO_B) {
+ 		ret = nvif_notify_ctor(&chan->user, "abi16ChanKilled",
+ 				       nouveau_channel_killed,
+ 				       true, NV906F_V0_NTFY_KILLED,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index c2bc05eb2e54a..1cbe01048b930 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -207,6 +207,7 @@ static const struct file_operations nouveau_pstate_fops = {
+ 	.open = nouveau_debugfs_pstate_open,
+ 	.read = seq_read,
+ 	.write = nouveau_debugfs_pstate_set,
++	.release = single_release,
+ };
+ 
+ static struct drm_info_list nouveau_debugfs_list[] = {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index ba4cd5f837259..9b0084e4bbcfb 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -345,6 +345,9 @@ nouveau_accel_gr_init(struct nouveau_drm *drm)
+ 	u32 arg0, arg1;
+ 	int ret;
+ 
++	if (device->info.family >= NV_DEVICE_INFO_V0_AMPERE)
++		return;
++
+ 	/* Allocate channel that has access to the graphics engine. */
+ 	if (device->info.family >= NV_DEVICE_INFO_V0_KEPLER) {
+ 		arg0 = nvif_fifo_runlist(device, NV_DEVICE_HOST_RUNLIST_ENGINES_GR);
+@@ -469,6 +472,7 @@ nouveau_accel_init(struct nouveau_drm *drm)
+ 		case PASCAL_CHANNEL_GPFIFO_A:
+ 		case VOLTA_CHANNEL_GPFIFO_A:
+ 		case TURING_CHANNEL_GPFIFO_A:
++		case AMPERE_CHANNEL_GPFIFO_B:
+ 			ret = nvc0_fence_create(drm);
+ 			break;
+ 		default:
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 5b27845075a1c..8c2ecc2827232 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -247,10 +247,8 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
+ 	}
+ 
+ 	ret = nouveau_bo_init(nvbo, size, align, domain, NULL, NULL);
+-	if (ret) {
+-		nouveau_bo_ref(NULL, &nvbo);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	/* we restrict allowed domains on nv50+ to only the types
+ 	 * that were requested at creation time.  not possibly on
+diff --git a/drivers/gpu/drm/nouveau/nv84_fence.c b/drivers/gpu/drm/nouveau/nv84_fence.c
+index 7c9c928c31966..c3526a8622e3e 100644
+--- a/drivers/gpu/drm/nouveau/nv84_fence.c
++++ b/drivers/gpu/drm/nouveau/nv84_fence.c
+@@ -204,7 +204,7 @@ nv84_fence_create(struct nouveau_drm *drm)
+ 	priv->base.context_new = nv84_fence_context_new;
+ 	priv->base.context_del = nv84_fence_context_del;
+ 
+-	priv->base.uevent = true;
++	priv->base.uevent = drm->client.device.info.family < NV_DEVICE_INFO_V0_AMPERE;
+ 
+ 	mutex_init(&priv->mutex);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index 93ddf63d11140..ca75c5f6ecaf8 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -2602,6 +2602,7 @@ nv172_chipset = {
+ 	.top      = { 0x00000001, ga100_top_new },
+ 	.disp     = { 0x00000001, ga102_disp_new },
+ 	.dma      = { 0x00000001, gv100_dma_new },
++	.fifo     = { 0x00000001, ga102_fifo_new },
+ };
+ 
+ static const struct nvkm_device_chip
+@@ -2622,6 +2623,7 @@ nv174_chipset = {
+ 	.top      = { 0x00000001, ga100_top_new },
+ 	.disp     = { 0x00000001, ga102_disp_new },
+ 	.dma      = { 0x00000001, gv100_dma_new },
++	.fifo     = { 0x00000001, ga102_fifo_new },
+ };
+ 
+ static const struct nvkm_device_chip
+@@ -2642,6 +2644,7 @@ nv177_chipset = {
+ 	.top      = { 0x00000001, ga100_top_new },
+ 	.disp     = { 0x00000001, ga102_disp_new },
+ 	.dma      = { 0x00000001, gv100_dma_new },
++	.fifo     = { 0x00000001, ga102_fifo_new },
+ };
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
+index 3209eb7af65fb..5e831d347a957 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
+@@ -18,6 +18,7 @@ nvkm-y += nvkm/engine/fifo/gp100.o
+ nvkm-y += nvkm/engine/fifo/gp10b.o
+ nvkm-y += nvkm/engine/fifo/gv100.o
+ nvkm-y += nvkm/engine/fifo/tu102.o
++nvkm-y += nvkm/engine/fifo/ga102.o
+ 
+ nvkm-y += nvkm/engine/fifo/chan.o
+ nvkm-y += nvkm/engine/fifo/channv50.o
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
+new file mode 100644
+index 0000000000000..c630dbd2911ae
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
+@@ -0,0 +1,311 @@
++/*
++ * Copyright 2021 Red Hat Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++#define ga102_fifo(p) container_of((p), struct ga102_fifo, base.engine)
++#define ga102_chan(p) container_of((p), struct ga102_chan, object)
++#include <engine/fifo.h>
++#include "user.h"
++
++#include <core/memory.h>
++#include <subdev/mmu.h>
++#include <subdev/timer.h>
++#include <subdev/top.h>
++
++#include <nvif/cl0080.h>
++#include <nvif/clc36f.h>
++#include <nvif/class.h>
++
++struct ga102_fifo {
++	struct nvkm_fifo base;
++};
++
++struct ga102_chan {
++	struct nvkm_object object;
++
++	struct {
++		u32 runl;
++		u32 chan;
++	} ctrl;
++
++	struct nvkm_memory *mthd;
++	struct nvkm_memory *inst;
++	struct nvkm_memory *user;
++	struct nvkm_memory *runl;
++
++	struct nvkm_vmm *vmm;
++};
++
++static int
++ga102_chan_sclass(struct nvkm_object *object, int index, struct nvkm_oclass *oclass)
++{
++	if (index == 0) {
++		oclass->ctor = nvkm_object_new;
++		oclass->base = (struct nvkm_sclass) { -1, -1, AMPERE_DMA_COPY_B };
++		return 0;
++	}
++
++	return -EINVAL;
++}
++
++static int
++ga102_chan_map(struct nvkm_object *object, void *argv, u32 argc,
++	       enum nvkm_object_map *type, u64 *addr, u64 *size)
++{
++	struct ga102_chan *chan = ga102_chan(object);
++	struct nvkm_device *device = chan->object.engine->subdev.device;
++	u64 bar2 = nvkm_memory_bar2(chan->user);
++
++	if (bar2 == ~0ULL)
++		return -EFAULT;
++
++	*type = NVKM_OBJECT_MAP_IO;
++	*addr = device->func->resource_addr(device, 3) + bar2;
++	*size = 0x1000;
++	return 0;
++}
++
++static int
++ga102_chan_fini(struct nvkm_object *object, bool suspend)
++{
++	struct ga102_chan *chan = ga102_chan(object);
++	struct nvkm_device *device = chan->object.engine->subdev.device;
++
++	nvkm_wr32(device, chan->ctrl.chan, 0x00000003);
++
++	nvkm_wr32(device, chan->ctrl.runl + 0x098, 0x01000000);
++	nvkm_msec(device, 2000,
++		if (!(nvkm_rd32(device, chan->ctrl.runl + 0x098) & 0x00100000))
++			break;
++	);
++
++	nvkm_wr32(device, chan->ctrl.runl + 0x088, 0);
++
++	nvkm_wr32(device, chan->ctrl.chan, 0xffffffff);
++	return 0;
++}
++
++static int
++ga102_chan_init(struct nvkm_object *object)
++{
++	struct ga102_chan *chan = ga102_chan(object);
++	struct nvkm_device *device = chan->object.engine->subdev.device;
++
++	nvkm_mask(device, chan->ctrl.runl + 0x300, 0x80000000, 0x80000000);
++
++	nvkm_wr32(device, chan->ctrl.runl + 0x080, lower_32_bits(nvkm_memory_addr(chan->runl)));
++	nvkm_wr32(device, chan->ctrl.runl + 0x084, upper_32_bits(nvkm_memory_addr(chan->runl)));
++	nvkm_wr32(device, chan->ctrl.runl + 0x088, 2);
++
++	nvkm_wr32(device, chan->ctrl.chan, 0x00000002);
++	nvkm_wr32(device, chan->ctrl.runl + 0x0090, 0);
++	return 0;
++}
++
++static void *
++ga102_chan_dtor(struct nvkm_object *object)
++{
++	struct ga102_chan *chan = ga102_chan(object);
++
++	if (chan->vmm) {
++		nvkm_vmm_part(chan->vmm, chan->inst);
++		nvkm_vmm_unref(&chan->vmm);
++	}
++
++	nvkm_memory_unref(&chan->runl);
++	nvkm_memory_unref(&chan->user);
++	nvkm_memory_unref(&chan->inst);
++	nvkm_memory_unref(&chan->mthd);
++	return chan;
++}
++
++static const struct nvkm_object_func
++ga102_chan = {
++	.dtor = ga102_chan_dtor,
++	.init = ga102_chan_init,
++	.fini = ga102_chan_fini,
++	.map = ga102_chan_map,
++	.sclass = ga102_chan_sclass,
++};
++
++static int
++ga102_chan_new(struct nvkm_device *device,
++	       const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nvkm_object **pobject)
++{
++	struct volta_channel_gpfifo_a_v0 *args = argv;
++	struct nvkm_top_device *tdev;
++	struct nvkm_vmm *vmm;
++	struct ga102_chan *chan;
++	int ret;
++
++	if (argc != sizeof(*args))
++		return -ENOSYS;
++
++	vmm = nvkm_uvmm_search(oclass->client, args->vmm);
++	if (IS_ERR(vmm))
++		return PTR_ERR(vmm);
++
++	if (!(chan = kzalloc(sizeof(*chan), GFP_KERNEL)))
++		return -ENOMEM;
++
++	nvkm_object_ctor(&ga102_chan, oclass, &chan->object);
++	*pobject = &chan->object;
++
++	list_for_each_entry(tdev, &device->top->device, head) {
++		if (tdev->type == NVKM_ENGINE_CE) {
++			chan->ctrl.runl = tdev->runlist;
++			break;
++		}
++	}
++
++	if (!chan->ctrl.runl)
++		return -ENODEV;
++
++	chan->ctrl.chan = nvkm_rd32(device, chan->ctrl.runl + 0x004) & 0xfffffff0;
++
++	args->chid = 0;
++	args->inst = 0;
++	args->token = nvkm_rd32(device, chan->ctrl.runl + 0x008) & 0xffff0000;
++
++	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->mthd);
++	if (ret)
++		return ret;
++
++	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->inst);
++	if (ret)
++		return ret;
++
++	nvkm_kmap(chan->inst);
++	nvkm_wo32(chan->inst, 0x010, 0x0000face);
++	nvkm_wo32(chan->inst, 0x030, 0x7ffff902);
++	nvkm_wo32(chan->inst, 0x048, lower_32_bits(args->ioffset));
++	nvkm_wo32(chan->inst, 0x04c, upper_32_bits(args->ioffset) |
++				     (order_base_2(args->ilength / 8) << 16));
++	nvkm_wo32(chan->inst, 0x084, 0x20400000);
++	nvkm_wo32(chan->inst, 0x094, 0x30000001);
++	nvkm_wo32(chan->inst, 0x0ac, 0x00020000);
++	nvkm_wo32(chan->inst, 0x0e4, 0x00000000);
++	nvkm_wo32(chan->inst, 0x0e8, 0);
++	nvkm_wo32(chan->inst, 0x0f4, 0x00001000);
++	nvkm_wo32(chan->inst, 0x0f8, 0x10003080);
++	nvkm_mo32(chan->inst, 0x218, 0x00000000, 0x00000000);
++	nvkm_wo32(chan->inst, 0x220, lower_32_bits(nvkm_memory_bar2(chan->mthd)));
++	nvkm_wo32(chan->inst, 0x224, upper_32_bits(nvkm_memory_bar2(chan->mthd)));
++	nvkm_done(chan->inst);
++
++	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->user);
++	if (ret)
++		return ret;
++
++	ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST, 0x1000, 0x1000, true, &chan->runl);
++	if (ret)
++		return ret;
++
++	nvkm_kmap(chan->runl);
++	nvkm_wo32(chan->runl, 0x00, 0x80030001);
++	nvkm_wo32(chan->runl, 0x04, 1);
++	nvkm_wo32(chan->runl, 0x08, 0);
++	nvkm_wo32(chan->runl, 0x0c, 0x00000000);
++	nvkm_wo32(chan->runl, 0x10, lower_32_bits(nvkm_memory_addr(chan->user)));
++	nvkm_wo32(chan->runl, 0x14, upper_32_bits(nvkm_memory_addr(chan->user)));
++	nvkm_wo32(chan->runl, 0x18, lower_32_bits(nvkm_memory_addr(chan->inst)));
++	nvkm_wo32(chan->runl, 0x1c, upper_32_bits(nvkm_memory_addr(chan->inst)));
++	nvkm_done(chan->runl);
++
++	ret = nvkm_vmm_join(vmm, chan->inst);
++	if (ret)
++		return ret;
++
++	chan->vmm = nvkm_vmm_ref(vmm);
++	return 0;
++}
++
++static const struct nvkm_device_oclass
++ga102_chan_oclass = {
++	.ctor = ga102_chan_new,
++};
++
++static int
++ga102_user_new(struct nvkm_device *device,
++	       const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nvkm_object **pobject)
++{
++	return tu102_fifo_user_new(oclass, argv, argc, pobject);
++}
++
++static const struct nvkm_device_oclass
++ga102_user_oclass = {
++	.ctor = ga102_user_new,
++};
++
++static int
++ga102_fifo_sclass(struct nvkm_oclass *oclass, int index, const struct nvkm_device_oclass **class)
++{
++	if (index == 0) {
++		oclass->base = (struct nvkm_sclass) { -1, -1, VOLTA_USERMODE_A };
++		*class = &ga102_user_oclass;
++		return 0;
++	} else
++	if (index == 1) {
++		oclass->base = (struct nvkm_sclass) { 0, 0, AMPERE_CHANNEL_GPFIFO_B };
++		*class = &ga102_chan_oclass;
++		return 0;
++	}
++
++	return 2;
++}
++
++static int
++ga102_fifo_info(struct nvkm_engine *engine, u64 mthd, u64 *data)
++{
++	switch (mthd) {
++	case NV_DEVICE_HOST_CHANNELS: *data = 1; return 0;
++	default:
++		break;
++	}
++
++	return -ENOSYS;
++}
++
++static void *
++ga102_fifo_dtor(struct nvkm_engine *engine)
++{
++	return ga102_fifo(engine);
++}
++
++static const struct nvkm_engine_func
++ga102_fifo = {
++	.dtor = ga102_fifo_dtor,
++	.info = ga102_fifo_info,
++	.base.sclass = ga102_fifo_sclass,
++};
++
++int
++ga102_fifo_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst,
++	       struct nvkm_fifo **pfifo)
++{
++	struct ga102_fifo *fifo;
++
++	if (!(fifo = kzalloc(sizeof(*fifo), GFP_KERNEL)))
++		return -ENOMEM;
++
++	nvkm_engine_ctor(&ga102_fifo, device, type, inst, true, &fifo->base.engine);
++	*pfifo = &fifo->base;
++	return 0;
++}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/top/ga100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/top/ga100.c
+index 31933f3e5a076..c982d834c8d98 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/top/ga100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/top/ga100.c
+@@ -54,7 +54,7 @@ ga100_top_oneinit(struct nvkm_top *top)
+ 			info->reset   = (data & 0x0000001f);
+ 			break;
+ 		case 2:
+-			info->runlist = (data & 0x0000fc00) >> 10;
++			info->runlist = (data & 0x00fffc00);
+ 			info->engine  = (data & 0x00000003);
+ 			break;
+ 		default:
+@@ -85,9 +85,10 @@ ga100_top_oneinit(struct nvkm_top *top)
+ 		}
+ 
+ 		nvkm_debug(subdev, "%02x.%d (%8s): addr %06x fault %2d "
+-				   "runlist %2d engine %2d reset %2d\n", type, inst,
++				   "runlist %6x engine %2d reset %2d\n", type, inst,
+ 			   info->type == NVKM_SUBDEV_NR ? "????????" : nvkm_subdev_type[info->type],
+-			   info->addr, info->fault, info->runlist, info->engine, info->reset);
++			   info->addr, info->fault, info->runlist < 0 ? 0 : info->runlist,
++			   info->engine, info->reset);
+ 		info = NULL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panel/panel-abt-y030xx067a.c b/drivers/gpu/drm/panel/panel-abt-y030xx067a.c
+index 2d8794d495d08..3d8a9ab47cae2 100644
+--- a/drivers/gpu/drm/panel/panel-abt-y030xx067a.c
++++ b/drivers/gpu/drm/panel/panel-abt-y030xx067a.c
+@@ -146,8 +146,8 @@ static const struct reg_sequence y030xx067a_init_sequence[] = {
+ 	{ 0x09, REG09_SUB_BRIGHT_R(0x20) },
+ 	{ 0x0a, REG0A_SUB_BRIGHT_B(0x20) },
+ 	{ 0x0b, REG0B_HD_FREERUN | REG0B_VD_FREERUN },
+-	{ 0x0c, REG0C_CONTRAST_R(0x10) },
+-	{ 0x0d, REG0D_CONTRAST_G(0x10) },
++	{ 0x0c, REG0C_CONTRAST_R(0x00) },
++	{ 0x0d, REG0D_CONTRAST_G(0x00) },
+ 	{ 0x0e, REG0E_CONTRAST_B(0x10) },
+ 	{ 0x0f, 0 },
+ 	{ 0x10, REG10_BRIGHT(0x7f) },
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index f75fb157f2ff7..016b877051dab 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -216,11 +216,13 @@ static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master,
+ 		goto err_disable_clk_tmds;
+ 	}
+ 
++	ret = sun8i_hdmi_phy_init(hdmi->phy);
++	if (ret)
++		goto err_disable_clk_tmds;
++
+ 	drm_encoder_helper_add(encoder, &sun8i_dw_hdmi_encoder_helper_funcs);
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+ 
+-	sun8i_hdmi_phy_init(hdmi->phy);
+-
+ 	plat_data->mode_valid = hdmi->quirks->mode_valid;
+ 	plat_data->use_drm_infoframe = hdmi->quirks->use_drm_infoframe;
+ 	sun8i_hdmi_phy_set_ops(hdmi->phy, plat_data);
+@@ -262,6 +264,7 @@ static void sun8i_dw_hdmi_unbind(struct device *dev, struct device *master,
+ 	struct sun8i_dw_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+ 	dw_hdmi_unbind(hdmi->hdmi);
++	sun8i_hdmi_phy_deinit(hdmi->phy);
+ 	clk_disable_unprepare(hdmi->clk_tmds);
+ 	reset_control_assert(hdmi->rst_ctrl);
+ 	gpiod_set_value(hdmi->ddc_en, 0);
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+index 74f6ed0e25709..bffe1b9cd3dcb 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
+@@ -169,6 +169,7 @@ struct sun8i_hdmi_phy {
+ 	struct clk			*clk_phy;
+ 	struct clk			*clk_pll0;
+ 	struct clk			*clk_pll1;
++	struct device			*dev;
+ 	unsigned int			rcal;
+ 	struct regmap			*regs;
+ 	struct reset_control		*rst_phy;
+@@ -205,7 +206,8 @@ encoder_to_sun8i_dw_hdmi(struct drm_encoder *encoder)
+ 
+ int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node);
+ 
+-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
++int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy);
++void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy);
+ void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+ 			    struct dw_hdmi_plat_data *plat_data);
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index c9239708d398c..b64d93da651d2 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -506,9 +506,60 @@ static void sun8i_hdmi_phy_init_h3(struct sun8i_hdmi_phy *phy)
+ 	phy->rcal = (val & SUN8I_HDMI_PHY_ANA_STS_RCAL_MASK) >> 2;
+ }
+ 
+-void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
++int sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy)
+ {
++	int ret;
++
++	ret = reset_control_deassert(phy->rst_phy);
++	if (ret) {
++		dev_err(phy->dev, "Cannot deassert phy reset control: %d\n", ret);
++		return ret;
++	}
++
++	ret = clk_prepare_enable(phy->clk_bus);
++	if (ret) {
++		dev_err(phy->dev, "Cannot enable bus clock: %d\n", ret);
++		goto err_assert_rst_phy;
++	}
++
++	ret = clk_prepare_enable(phy->clk_mod);
++	if (ret) {
++		dev_err(phy->dev, "Cannot enable mod clock: %d\n", ret);
++		goto err_disable_clk_bus;
++	}
++
++	if (phy->variant->has_phy_clk) {
++		ret = sun8i_phy_clk_create(phy, phy->dev,
++					   phy->variant->has_second_pll);
++		if (ret) {
++			dev_err(phy->dev, "Couldn't create the PHY clock\n");
++			goto err_disable_clk_mod;
++		}
++
++		clk_prepare_enable(phy->clk_phy);
++	}
++
+ 	phy->variant->phy_init(phy);
++
++	return 0;
++
++err_disable_clk_mod:
++	clk_disable_unprepare(phy->clk_mod);
++err_disable_clk_bus:
++	clk_disable_unprepare(phy->clk_bus);
++err_assert_rst_phy:
++	reset_control_assert(phy->rst_phy);
++
++	return ret;
++}
++
++void sun8i_hdmi_phy_deinit(struct sun8i_hdmi_phy *phy)
++{
++	clk_disable_unprepare(phy->clk_mod);
++	clk_disable_unprepare(phy->clk_bus);
++	clk_disable_unprepare(phy->clk_phy);
++
++	reset_control_assert(phy->rst_phy);
+ }
+ 
+ void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+@@ -638,6 +689,7 @@ static int sun8i_hdmi_phy_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	phy->variant = (struct sun8i_hdmi_phy_variant *)match->data;
++	phy->dev = dev;
+ 
+ 	ret = of_address_to_resource(node, 0, &res);
+ 	if (ret) {
+@@ -696,47 +748,10 @@ static int sun8i_hdmi_phy_probe(struct platform_device *pdev)
+ 		goto err_put_clk_pll1;
+ 	}
+ 
+-	ret = reset_control_deassert(phy->rst_phy);
+-	if (ret) {
+-		dev_err(dev, "Cannot deassert phy reset control: %d\n", ret);
+-		goto err_put_rst_phy;
+-	}
+-
+-	ret = clk_prepare_enable(phy->clk_bus);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable bus clock: %d\n", ret);
+-		goto err_deassert_rst_phy;
+-	}
+-
+-	ret = clk_prepare_enable(phy->clk_mod);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable mod clock: %d\n", ret);
+-		goto err_disable_clk_bus;
+-	}
+-
+-	if (phy->variant->has_phy_clk) {
+-		ret = sun8i_phy_clk_create(phy, dev,
+-					   phy->variant->has_second_pll);
+-		if (ret) {
+-			dev_err(dev, "Couldn't create the PHY clock\n");
+-			goto err_disable_clk_mod;
+-		}
+-
+-		clk_prepare_enable(phy->clk_phy);
+-	}
+-
+ 	platform_set_drvdata(pdev, phy);
+ 
+ 	return 0;
+ 
+-err_disable_clk_mod:
+-	clk_disable_unprepare(phy->clk_mod);
+-err_disable_clk_bus:
+-	clk_disable_unprepare(phy->clk_bus);
+-err_deassert_rst_phy:
+-	reset_control_assert(phy->rst_phy);
+-err_put_rst_phy:
+-	reset_control_put(phy->rst_phy);
+ err_put_clk_pll1:
+ 	clk_put(phy->clk_pll1);
+ err_put_clk_pll0:
+@@ -753,12 +768,6 @@ static int sun8i_hdmi_phy_remove(struct platform_device *pdev)
+ {
+ 	struct sun8i_hdmi_phy *phy = platform_get_drvdata(pdev);
+ 
+-	clk_disable_unprepare(phy->clk_mod);
+-	clk_disable_unprepare(phy->clk_bus);
+-	clk_disable_unprepare(phy->clk_phy);
+-
+-	reset_control_assert(phy->rst_phy);
+-
+ 	reset_control_put(phy->rst_phy);
+ 
+ 	clk_put(phy->clk_pll0);
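
The refactor above moves the clock and reset bring-up out of probe into
sun8i_hdmi_phy_init(), with the usual goto-unwind ladder on failure, and adds
a deinit counterpart that releases everything in reverse order. The shape of
that pattern, as a self-contained sketch with stub steps:

    #include <stdio.h>

    static int  deassert_reset(void)      { return 0; }
    static void assert_reset(void)        { }
    static int  enable_bus_clock(void)    { return 0; }
    static void disable_bus_clock(void)   { }
    static int  enable_mod_clock(void)    { return 0; }
    static void disable_mod_clock(void)   { }

    static int phy_init(void)
    {
        int ret;

        ret = deassert_reset();
        if (ret)
            return ret;

        ret = enable_bus_clock();
        if (ret)
            goto err_assert_reset;

        ret = enable_mod_clock();
        if (ret)
            goto err_disable_bus;

        return 0;

    err_disable_bus:
        disable_bus_clock();
    err_assert_reset:
        assert_reset();
        return ret;
    }

    static void phy_deinit(void)
    {
        /* mirror of phy_init(), torn down in reverse order */
        disable_mod_clock();
        disable_bus_clock();
        assert_reset();
    }

    int main(void)
    {
        if (phy_init() == 0) {
            phy_deinit();
            printf("init/deinit balanced\n");
        }
        return 0;
    }
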
+diff --git a/drivers/i2c/busses/i2c-mlxcpld.c b/drivers/i2c/busses/i2c-mlxcpld.c
+index 4e0b7c2882ced..015e11c4663f3 100644
+--- a/drivers/i2c/busses/i2c-mlxcpld.c
++++ b/drivers/i2c/busses/i2c-mlxcpld.c
+@@ -49,7 +49,7 @@
+ #define MLXCPLD_LPCI2C_NACK_IND		2
+ 
+ #define MLXCPLD_I2C_FREQ_1000KHZ_SET	0x04
+-#define MLXCPLD_I2C_FREQ_400KHZ_SET	0x0f
++#define MLXCPLD_I2C_FREQ_400KHZ_SET	0x0c
+ #define MLXCPLD_I2C_FREQ_100KHZ_SET	0x42
+ 
+ enum mlxcpld_i2c_frequency {
+@@ -495,7 +495,7 @@ mlxcpld_i2c_set_frequency(struct mlxcpld_i2c_priv *priv,
+ 		return err;
+ 
+ 	/* Set frequency only if it is not 100KHz, which is default. */
+-	switch ((data->reg & data->mask) >> data->bit) {
++	switch ((regval & data->mask) >> data->bit) {
+ 	case MLXCPLD_I2C_FREQ_1000KHZ:
+ 		freq = MLXCPLD_I2C_FREQ_1000KHZ_SET;
+ 		break;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 477480d1de6bd..7d4b3eb7077ad 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -41,6 +41,8 @@
+ #define I2C_HANDSHAKE_RST		0x0020
+ #define I2C_FIFO_ADDR_CLR		0x0001
+ #define I2C_DELAY_LEN			0x0002
++#define I2C_ST_START_CON		0x8001
++#define I2C_FS_START_CON		0x1800
+ #define I2C_TIME_CLR_VALUE		0x0000
+ #define I2C_TIME_DEFAULT_VALUE		0x0003
+ #define I2C_WRRD_TRANAC_VALUE		0x0002
+@@ -480,6 +482,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
+ 	u16 intr_stat_reg;
++	u16 ext_conf_val;
+ 
+ 	mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
+ 	intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
+@@ -518,8 +521,13 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ 	if (i2c->dev_comp->ltiming_adjust)
+ 		mtk_i2c_writew(i2c, i2c->ltiming_reg, OFFSET_LTIMING);
+ 
++	if (i2c->speed_hz <= I2C_MAX_STANDARD_MODE_FREQ)
++		ext_conf_val = I2C_ST_START_CON;
++	else
++		ext_conf_val = I2C_FS_START_CON;
++
+ 	if (i2c->dev_comp->timing_adjust) {
+-		mtk_i2c_writew(i2c, i2c->ac_timing.ext, OFFSET_EXT_CONF);
++		ext_conf_val = i2c->ac_timing.ext;
+ 		mtk_i2c_writew(i2c, i2c->ac_timing.inter_clk_div,
+ 			       OFFSET_CLOCK_DIV);
+ 		mtk_i2c_writew(i2c, I2C_SCL_MIS_COMP_VALUE,
+@@ -544,6 +552,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ 				       OFFSET_HS_STA_STO_AC_TIMING);
+ 		}
+ 	}
++	mtk_i2c_writew(i2c, ext_conf_val, OFFSET_EXT_CONF);
+ 
+ 	/* If use i2c pin from PMIC mt6397 side, need set PATH_DIR first */
+ 	if (i2c->have_pmic)
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 6f0aa0ed3241e..74925621f2395 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -422,6 +422,7 @@ static int i2c_acpi_notify(struct notifier_block *nb, unsigned long value,
+ 			break;
+ 
+ 		i2c_acpi_register_device(adapter, adev, &info);
++		put_device(&adapter->dev);
+ 		break;
+ 	case ACPI_RECONFIG_DEVICE_REMOVE:
+ 		if (!acpi_device_enumerated(adev))
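
The one-line fix above pairs the reference the adapter lookup takes (the
helper behind it finds the adapter via bus_find_device(), which returns the
device with its refcount raised) with a put_device() once the client has been
registered. The general get/put discipline, as a userspace sketch with a
manual refcount:

    #include <stdio.h>

    struct obj { int refcount; };

    static struct obj adapter_obj = { .refcount = 1 };

    /* lookup that hands back its result with an extra reference held */
    static struct obj *find_adapter(void)
    {
        adapter_obj.refcount++;
        return &adapter_obj;
    }

    static void put_obj(struct obj *o)
    {
        o->refcount--;  /* real code would free at zero */
    }

    int main(void)
    {
        struct obj *adapter = find_adapter();

        /* ... register the client against the adapter ... */

        put_obj(adapter);  /* the fix: drop the lookup's reference */
        printf("refcount back to %d\n", adapter_obj.refcount);
        return 0;
    }
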
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 3f28eb4d17fe7..8f36536cb1b6d 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -746,7 +746,7 @@ static void meson_mmc_desc_chain_transfer(struct mmc_host *mmc, u32 cmd_cfg)
+ 	writel(start, host->regs + SD_EMMC_START);
+ }
+ 
+-/* local sg copy to buffer version with _to/fromio usage for dram_access_quirk */
++/* local sg copy for dram_access_quirk */
+ static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data,
+ 				  size_t buflen, bool to_buffer)
+ {
+@@ -764,21 +764,27 @@ static void meson_mmc_copy_buffer(struct meson_host *host, struct mmc_data *data
+ 	sg_miter_start(&miter, sgl, nents, sg_flags);
+ 
+ 	while ((offset < buflen) && sg_miter_next(&miter)) {
+-		unsigned int len;
++		unsigned int buf_offset = 0;
++		unsigned int len, left;
++		u32 *buf = miter.addr;
+ 
+ 		len = min(miter.length, buflen - offset);
++		left = len;
+ 
+-		/* When dram_access_quirk, the bounce buffer is a iomem mapping */
+-		if (host->dram_access_quirk) {
+-			if (to_buffer)
+-				memcpy_toio(host->bounce_iomem_buf + offset, miter.addr, len);
+-			else
+-				memcpy_fromio(miter.addr, host->bounce_iomem_buf + offset, len);
++		if (to_buffer) {
++			do {
++				writel(*buf++, host->bounce_iomem_buf + offset + buf_offset);
++
++				buf_offset += 4;
++				left -= 4;
++			} while (left);
+ 		} else {
+-			if (to_buffer)
+-				memcpy(host->bounce_buf + offset, miter.addr, len);
+-			else
+-				memcpy(miter.addr, host->bounce_buf + offset, len);
++			do {
++				*buf++ = readl(host->bounce_iomem_buf + offset + buf_offset);
++
++				buf_offset += 4;
++				left -= 4;
++			} while (left);
+ 		}
+ 
+ 		offset += len;
+@@ -830,7 +836,11 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 		if (data->flags & MMC_DATA_WRITE) {
+ 			cmd_cfg |= CMD_CFG_DATA_WR;
+ 			WARN_ON(xfer_bytes > host->bounce_buf_size);
+-			meson_mmc_copy_buffer(host, data, xfer_bytes, true);
++			if (host->dram_access_quirk)
++				meson_mmc_copy_buffer(host, data, xfer_bytes, true);
++			else
++				sg_copy_to_buffer(data->sg, data->sg_len,
++						  host->bounce_buf, xfer_bytes);
+ 			dma_wmb();
+ 		}
+ 
+@@ -849,12 +859,43 @@ static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
+ 	writel(cmd->arg, host->regs + SD_EMMC_CMD_ARG);
+ }
+ 
++static int meson_mmc_validate_dram_access(struct mmc_host *mmc, struct mmc_data *data)
++{
++	struct scatterlist *sg;
++	int i;
++
++	/* Reject request if any element offset or size is not 32bit aligned */
++	for_each_sg(data->sg, sg, data->sg_len, i) {
++		if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
++		    !IS_ALIGNED(sg->length, sizeof(u32))) {
++			dev_err(mmc_dev(mmc), "unaligned sg offset %u len %u\n",
++				data->sg->offset, data->sg->length);
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
++
+ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ 	struct meson_host *host = mmc_priv(mmc);
+ 	bool needs_pre_post_req = mrq->data &&
+ 			!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+ 
++	/*
++	 * The memory at the end of the controller used as a bounce buffer for
++	 * the dram_access_quirk only accepts 32-bit read/write access, so
++	 * check the alignment and length of the data before starting the request.
++	 */
++	if (host->dram_access_quirk && mrq->data) {
++		mrq->cmd->error = meson_mmc_validate_dram_access(mmc, mrq->data);
++		if (mrq->cmd->error) {
++			mmc_request_done(mmc, mrq);
++			return;
++		}
++	}
++
+ 	if (needs_pre_post_req) {
+ 		meson_mmc_get_transfer_mode(mmc, mrq);
+ 		if (!meson_mmc_desc_chain_mode(mrq->data))
+@@ -999,7 +1040,11 @@ static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id)
+ 	if (meson_mmc_bounce_buf_read(data)) {
+ 		xfer_bytes = data->blksz * data->blocks;
+ 		WARN_ON(xfer_bytes > host->bounce_buf_size);
+-		meson_mmc_copy_buffer(host, data, xfer_bytes, false);
++		if (host->dram_access_quirk)
++			meson_mmc_copy_buffer(host, data, xfer_bytes, false);
++		else
++			sg_copy_from_buffer(data->sg, data->sg_len,
++					    host->bounce_buf, xfer_bytes);
+ 	}
+ 
+ 	next_cmd = meson_mmc_get_next_command(cmd);
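
Both halves of the quirk handling above hinge on the same constraint: the
SRAM bounce buffer only tolerates whole 32-bit accesses, so requests are
rejected up front unless every scatterlist element has a 4-byte-aligned
offset and length, and the copy loops then move one u32 per readl/writel.
A self-contained sketch of that combination (an in-memory array stands in
for the iomem region):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define IS_ALIGNED_U32(x) (((x) & 3) == 0)

    static uint32_t bounce[64];  /* stand-in for the 32-bit-only iomem */

    static int copy_to_bounce(const void *src, size_t offset, size_t len)
    {
        const uint32_t *buf = src;

        /* same rejection as meson_mmc_validate_dram_access() */
        if (!IS_ALIGNED_U32(offset) || !IS_ALIGNED_U32(len))
            return -1;

        for (size_t left = len; left; left -= 4, offset += 4)
            bounce[offset / 4] = *buf++;  /* stand-in for writel() */

        return 0;
    }

    int main(void)
    {
        uint32_t data[2] = { 0x11223344, 0x55667788 };

        printf("aligned copy: %d\n", copy_to_bounce(data, 0, sizeof(data)));
        printf("misaligned length: %d\n", copy_to_bounce(data, 0, 6));
        return 0;
    }
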
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index 5564d7b23e7cd..d1a1c548c515f 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -11,6 +11,7 @@
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/slot-gpio.h>
+@@ -61,7 +62,6 @@ static void sdhci_at91_set_force_card_detect(struct sdhci_host *host)
+ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ {
+ 	u16 clk;
+-	unsigned long timeout;
+ 
+ 	host->mmc->actual_clock = 0;
+ 
+@@ -86,16 +86,11 @@ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+ 
+ 	/* Wait max 20 ms */
+-	timeout = 20;
+-	while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
+-		& SDHCI_CLOCK_INT_STABLE)) {
+-		if (timeout == 0) {
+-			pr_err("%s: Internal clock never stabilised.\n",
+-			       mmc_hostname(host->mmc));
+-			return;
+-		}
+-		timeout--;
+-		mdelay(1);
++	if (read_poll_timeout(sdhci_readw, clk, (clk & SDHCI_CLOCK_INT_STABLE),
++			      1000, 20000, false, host, SDHCI_CLOCK_CONTROL)) {
++		pr_err("%s: Internal clock never stabilised.\n",
++		       mmc_hostname(host->mmc));
++		return;
+ 	}
+ 
+ 	clk |= SDHCI_CLOCK_CARD_EN;
+@@ -114,6 +109,7 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_at91_priv *priv = sdhci_pltfm_priv(pltfm_host);
++	unsigned int tmp;
+ 
+ 	sdhci_reset(host, mask);
+ 
+@@ -126,6 +122,10 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ 
+ 		sdhci_writel(host, calcr | SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
+ 			     SDMMC_CALCR);
++
++		if (read_poll_timeout(sdhci_readl, tmp, !(tmp & SDMMC_CALCR_EN),
++				      10, 20000, false, host, SDMMC_CALCR))
++			dev_err(mmc_dev(host->mmc), "Failed to calibrate\n");
+ 	}
+ }
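
Both hunks above convert open-coded delay loops to read_poll_timeout() from
<linux/iopoll.h>, which repeatedly evaluates op(args...) into val until cond
holds or timeout_us elapses, returning 0 on success and -ETIMEDOUT otherwise.
A minimal sketch of the clock-stabilisation case, assuming the driver's usual
context (sdhci_readw() and the SDHCI_* constants come from the sdhci core
headers):

    #include <linux/iopoll.h>

    static int wait_clock_stable(struct sdhci_host *host)
    {
        u16 clk;

        /* poll SDHCI_CLOCK_CONTROL every 1000us, give up after 20ms */
        return read_poll_timeout(sdhci_readw, clk,
                                 clk & SDHCI_CLOCK_INT_STABLE,
                                 1000, 20000, false,
                                 host, SDHCI_CLOCK_CONTROL);
    }
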
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 1d3188e8e3b3c..92dc18a4bcc41 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -780,7 +780,7 @@ struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
+ 				    gve_num_tx_qpls(priv));
+ 
+ 	/* we are out of rx qpls */
+-	if (id == priv->qpl_cfg.qpl_map_size)
++	if (id == gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv))
+ 		return NULL;
+ 
+ 	set_bit(id, priv->qpl_cfg.qpl_id_map);
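
The corrected bound above matters because rx QPL ids live directly after the
tx ids in one shared bitmap, so the search space for an rx id ends at
num_tx + num_rx even though the bitmap itself may be larger. A sketch of the
allocation with the fixed sentinel (kernel bitmap API; names are
illustrative):

    #include <linux/bitmap.h>

    static int alloc_rx_qpl_id(unsigned long *map, int num_tx, int num_rx)
    {
        /* search only the rx slice of the shared id bitmap */
        int id = find_next_zero_bit(map, num_tx + num_rx, num_tx);

        if (id == num_tx + num_rx)
            return -1;  /* out of rx qpls */

        set_bit(id, map);
        return id;
    }
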
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 099a2bc5ae670..bf8a4a7c43f78 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -41,6 +41,7 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ {
+ 	struct gve_priv *priv = netdev_priv(dev);
+ 	unsigned int start;
++	u64 packets, bytes;
+ 	int ring;
+ 
+ 	if (priv->rx) {
+@@ -48,10 +49,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 			do {
+ 				start =
+ 				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+-				s->rx_packets += priv->rx[ring].rpackets;
+-				s->rx_bytes += priv->rx[ring].rbytes;
++				packets = priv->rx[ring].rpackets;
++				bytes = priv->rx[ring].rbytes;
+ 			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+ 						       start));
++			s->rx_packets += packets;
++			s->rx_bytes += bytes;
+ 		}
+ 	}
+ 	if (priv->tx) {
+@@ -59,10 +62,12 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+ 			do {
+ 				start =
+ 				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+-				s->tx_packets += priv->tx[ring].pkt_done;
+-				s->tx_bytes += priv->tx[ring].bytes_done;
++				packets = priv->tx[ring].pkt_done;
++				bytes = priv->tx[ring].bytes_done;
+ 			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+ 						       start));
++			s->tx_packets += packets;
++			s->tx_bytes += bytes;
+ 		}
+ 	}
+ }
+@@ -82,6 +87,9 @@ static int gve_alloc_counter_array(struct gve_priv *priv)
+ 
+ static void gve_free_counter_array(struct gve_priv *priv)
+ {
++	if (!priv->counter_array)
++		return;
++
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_event_counters *
+ 			  sizeof(*priv->counter_array),
+@@ -142,6 +150,9 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
+ 
+ static void gve_free_stats_report(struct gve_priv *priv)
+ {
++	if (!priv->stats_report)
++		return;
++
+ 	del_timer_sync(&priv->stats_report_timer);
+ 	dma_free_coherent(&priv->pdev->dev, priv->stats_report_len,
+ 			  priv->stats_report, priv->stats_report_bus);
+@@ -370,18 +381,19 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
+ {
+ 	int i;
+ 
+-	if (priv->msix_vectors) {
+-		/* Free the irqs */
+-		for (i = 0; i < priv->num_ntfy_blks; i++) {
+-			struct gve_notify_block *block = &priv->ntfy_blocks[i];
+-			int msix_idx = i;
++	if (!priv->msix_vectors)
++		return;
+ 
+-			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+-					      NULL);
+-			free_irq(priv->msix_vectors[msix_idx].vector, block);
+-		}
+-		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
++	/* Free the irqs */
++	for (i = 0; i < priv->num_ntfy_blks; i++) {
++		struct gve_notify_block *block = &priv->ntfy_blocks[i];
++		int msix_idx = i;
++
++		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
++				      NULL);
++		free_irq(priv->msix_vectors[msix_idx].vector, block);
+ 	}
++	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
+ 			  priv->ntfy_blocks, priv->ntfy_block_bus);
+@@ -1185,9 +1197,10 @@ static void gve_handle_reset(struct gve_priv *priv)
+ 
+ void gve_handle_report_stats(struct gve_priv *priv)
+ {
+-	int idx, stats_idx = 0, tx_bytes;
+-	unsigned int start = 0;
+ 	struct stats *stats = priv->stats_report->stats;
++	int idx, stats_idx = 0;
++	unsigned int start = 0;
++	u64 tx_bytes;
+ 
+ 	if (!gve_get_report_stats(priv))
+ 		return;
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index bb82613682502..94941d4e47449 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -104,8 +104,14 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
+ 	if (!rx->data.page_info)
+ 		return -ENOMEM;
+ 
+-	if (!rx->data.raw_addressing)
++	if (!rx->data.raw_addressing) {
+ 		rx->data.qpl = gve_assign_rx_qpl(priv);
++		if (!rx->data.qpl) {
++			kvfree(rx->data.page_info);
++			rx->data.page_info = NULL;
++			return -ENOMEM;
++		}
++	}
+ 	for (i = 0; i < slots; i++) {
+ 		if (!rx->data.raw_addressing) {
+ 			struct page *page = rx->data.qpl->pages[i];
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 1d1f52756a932..5d3d6b1dae7b0 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4868,7 +4868,8 @@ static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)
+ {
+ 	int i;
+ 
+-	i40e_free_misc_vector(pf);
++	if (test_bit(__I40E_MISC_IRQ_REQUESTED, pf->state))
++		i40e_free_misc_vector(pf);
+ 
+ 	i40e_put_lump(pf->irq_pile, pf->iwarp_base_vector,
+ 		      I40E_IWARP_IRQ_PILE_ID);
+@@ -10110,7 +10111,7 @@ static int i40e_get_capabilities(struct i40e_pf *pf,
+ 		if (pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOMEM) {
+ 			/* retry with a larger buffer */
+ 			buf_len = data_size;
+-		} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK) {
++		} else if (pf->hw.aq.asq_last_status != I40E_AQ_RC_OK || err) {
+ 			dev_info(&pf->pdev->dev,
+ 				 "capability discovery failed, err %s aq_err %s\n",
+ 				 i40e_stat_str(&pf->hw, err),
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 23762a7ef740b..cada4e0e40b48 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1965,7 +1965,6 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(iavf_wq,
+ 				   &adapter->watchdog_task,
+ 				   msecs_to_jiffies(10));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 3f67efbe12fc5..dcbdf746be35c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -863,6 +863,7 @@ struct mlx5e_priv {
+ 	struct mlx5e_channel_stats channel_stats[MLX5E_MAX_NUM_CHANNELS];
+ 	struct mlx5e_channel_stats trap_stats;
+ 	struct mlx5e_ptp_stats     ptp_stats;
++	u16                        stats_nch;
+ 	u16                        max_nch;
+ 	u8                         max_opened_tc;
+ 	bool                       tx_ptp_opened;
+@@ -1156,12 +1157,6 @@ int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv,
+ 				 struct ethtool_pauseparam *pauseparam);
+ 
+ /* mlx5e generic netdev management API */
+-static inline unsigned int
+-mlx5e_calc_max_nch(struct mlx5e_priv *priv, const struct mlx5e_profile *profile)
+-{
+-	return priv->netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
+-}
+-
+ static inline bool
+ mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
+ {
+@@ -1170,11 +1165,13 @@ mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
+ }
+ 
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
++		    const struct mlx5e_profile *profile,
+ 		    struct net_device *netdev,
+ 		    struct mlx5_core_dev *mdev);
+ void mlx5e_priv_cleanup(struct mlx5e_priv *priv);
+ struct net_device *
+-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs);
++mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
++		    unsigned int txqs, unsigned int rxqs);
+ int mlx5e_attach_netdev(struct mlx5e_priv *priv);
+ void mlx5e_detach_netdev(struct mlx5e_priv *priv);
+ void mlx5e_destroy_netdev(struct mlx5e_priv *priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/hv_vhca_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en/hv_vhca_stats.c
+index ac44bbe95c5c1..d290d7276b8d9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/hv_vhca_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/hv_vhca_stats.c
+@@ -35,7 +35,7 @@ static void mlx5e_hv_vhca_fill_stats(struct mlx5e_priv *priv, void *data,
+ {
+ 	int ch, i = 0;
+ 
+-	for (ch = 0; ch < priv->max_nch; ch++) {
++	for (ch = 0; ch < priv->stats_nch; ch++) {
+ 		void *buf = data + i;
+ 
+ 		if (WARN_ON_ONCE(buf +
+@@ -51,7 +51,7 @@ static void mlx5e_hv_vhca_fill_stats(struct mlx5e_priv *priv, void *data,
+ static int mlx5e_hv_vhca_stats_buf_size(struct mlx5e_priv *priv)
+ {
+ 	return (sizeof(struct mlx5e_hv_vhca_per_ring_stats) *
+-		priv->max_nch);
++		priv->stats_nch);
+ }
+ 
+ static void mlx5e_hv_vhca_stats_work(struct work_struct *work)
+@@ -100,7 +100,7 @@ static void mlx5e_hv_vhca_stats_control(struct mlx5_hv_vhca_agent *agent,
+ 	sagent = &priv->stats_agent;
+ 
+ 	block->version = MLX5_HV_VHCA_STATS_VERSION;
+-	block->rings   = priv->max_nch;
++	block->rings   = priv->stats_nch;
+ 
+ 	if (!block->command) {
+ 		cancel_delayed_work_sync(&priv->stats_agent.work);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index efef4adce086a..93a8fa31255d0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -13,8 +13,6 @@ struct mlx5e_ptp_fs {
+ 	bool valid;
+ };
+ 
+-#define MLX5E_PTP_CHANNEL_IX 0
+-
+ struct mlx5e_ptp_params {
+ 	struct mlx5e_params params;
+ 	struct mlx5e_sq_param txq_sq_param;
+@@ -505,6 +503,7 @@ static int mlx5e_init_ptp_rq(struct mlx5e_ptp *c, struct mlx5e_params *params,
+ 	rq->mdev         = mdev;
+ 	rq->hw_mtu       = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+ 	rq->stats        = &c->priv->ptp_stats.rq;
++	rq->ix           = MLX5E_PTP_CHANNEL_IX;
+ 	rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev);
+ 	err = mlx5e_rq_set_handlers(rq, params, false);
+ 	if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h
+index c96668bd701cd..a71a32e00ebb9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h
+@@ -8,6 +8,8 @@
+ #include "en_stats.h"
+ #include <linux/ptp_classify.h>
+ 
++#define MLX5E_PTP_CHANNEL_IX 0
++
+ struct mlx5e_ptpsq {
+ 	struct mlx5e_txqsq       txqsq;
+ 	struct mlx5e_cq          ts_cq;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index fa718e71db2d4..548e8e7fc956e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3515,7 +3515,7 @@ void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < priv->max_nch; i++) {
++	for (i = 0; i < priv->stats_nch; i++) {
+ 		struct mlx5e_channel_stats *channel_stats = &priv->channel_stats[i];
+ 		struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
+ 		struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
+@@ -4661,8 +4661,6 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	u8 rx_cq_period_mode;
+ 
+-	priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
+-
+ 	params->sw_mtu = mtu;
+ 	params->hard_mtu = MLX5E_ETH_HARD_MTU;
+ 	params->num_channels = min_t(unsigned int, MLX5E_MAX_NUM_CHANNELS / 2,
+@@ -5203,8 +5201,35 @@ static const struct mlx5e_profile mlx5e_nic_profile = {
+ 	.rx_ptp_support    = true,
+ };
+ 
++static unsigned int
++mlx5e_calc_max_nch(struct mlx5_core_dev *mdev, struct net_device *netdev,
++		   const struct mlx5e_profile *profile)
++
++{
++	unsigned int max_nch, tmp;
++
++	/* core resources */
++	max_nch = mlx5e_get_max_num_channels(mdev);
++
++	/* netdev rx queues */
++	tmp = netdev->num_rx_queues / max_t(u8, profile->rq_groups, 1);
++	max_nch = min_t(unsigned int, max_nch, tmp);
++
++	/* netdev tx queues */
++	tmp = netdev->num_tx_queues;
++	if (mlx5_qos_is_supported(mdev))
++		tmp -= mlx5e_qos_max_leaf_nodes(mdev);
++	if (MLX5_CAP_GEN(mdev, ts_cqe_to_dest_cqn))
++		tmp -= profile->max_tc;
++	tmp = tmp / profile->max_tc;
++	max_nch = min_t(unsigned int, max_nch, tmp);
++
++	return max_nch;
++}
++
+ /* mlx5e generic netdev management API (move to en_common.c) */
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
++		    const struct mlx5e_profile *profile,
+ 		    struct net_device *netdev,
+ 		    struct mlx5_core_dev *mdev)
+ {
+@@ -5212,6 +5237,8 @@ int mlx5e_priv_init(struct mlx5e_priv *priv,
+ 	priv->mdev        = mdev;
+ 	priv->netdev      = netdev;
+ 	priv->msglevel    = MLX5E_MSG_LEVEL;
++	priv->max_nch     = mlx5e_calc_max_nch(mdev, netdev, profile);
++	priv->stats_nch   = priv->max_nch;
+ 	priv->max_opened_tc = 1;
+ 
+ 	if (!alloc_cpumask_var(&priv->scratchpad.cpumask, GFP_KERNEL))
+@@ -5255,7 +5282,8 @@ void mlx5e_priv_cleanup(struct mlx5e_priv *priv)
+ }
+ 
+ struct net_device *
+-mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int rxqs)
++mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
++		    unsigned int txqs, unsigned int rxqs)
+ {
+ 	struct net_device *netdev;
+ 	int err;
+@@ -5266,7 +5294,7 @@ mlx5e_create_netdev(struct mlx5_core_dev *mdev, unsigned int txqs, unsigned int
+ 		return NULL;
+ 	}
+ 
+-	err = mlx5e_priv_init(netdev_priv(netdev), netdev, mdev);
++	err = mlx5e_priv_init(netdev_priv(netdev), profile, netdev, mdev);
+ 	if (err) {
+ 		mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
+ 		goto err_free_netdev;
+@@ -5308,7 +5336,7 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
+ 	clear_bit(MLX5E_STATE_DESTROYING, &priv->state);
+ 
+ 	/* max number of channels may have changed */
+-	max_nch = mlx5e_get_max_num_channels(priv->mdev);
++	max_nch = mlx5e_calc_max_nch(priv->mdev, priv->netdev, profile);
+ 	if (priv->channels.params.num_channels > max_nch) {
+ 		mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch);
+ 		/* Reducing the number of channels - RXFH has to be reset, and
+@@ -5317,6 +5345,13 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
+ 		priv->netdev->priv_flags &= ~IFF_RXFH_CONFIGURED;
+ 		priv->channels.params.num_channels = max_nch;
+ 	}
++	if (max_nch != priv->max_nch) {
++		mlx5_core_warn(priv->mdev,
++			       "MLX5E: Updating max number of channels from %u to %u\n",
++			       priv->max_nch, max_nch);
++		priv->max_nch = max_nch;
++	}
++
+ 	/* 1. Set the real number of queues in the kernel the first time.
+ 	 * 2. Set our default XPS cpumask.
+ 	 * 3. Build the RQT.
+@@ -5381,7 +5416,7 @@ mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mde
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
+ 	int err;
+ 
+-	err = mlx5e_priv_init(priv, netdev, mdev);
++	err = mlx5e_priv_init(priv, new_profile, netdev, mdev);
+ 	if (err) {
+ 		mlx5_core_err(mdev, "mlx5e_priv_init failed, err=%d\n", err);
+ 		return err;
+@@ -5407,20 +5442,12 @@ priv_cleanup:
+ int mlx5e_netdev_change_profile(struct mlx5e_priv *priv,
+ 				const struct mlx5e_profile *new_profile, void *new_ppriv)
+ {
+-	unsigned int new_max_nch = mlx5e_calc_max_nch(priv, new_profile);
+ 	const struct mlx5e_profile *orig_profile = priv->profile;
+ 	struct net_device *netdev = priv->netdev;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	void *orig_ppriv = priv->ppriv;
+ 	int err, rollback_err;
+ 
+-	/* sanity */
+-	if (new_max_nch != priv->max_nch) {
+-		netdev_warn(netdev, "%s: Replacing profile with different max channels\n",
+-			    __func__);
+-		return -EINVAL;
+-	}
+-
+ 	/* cleanup old profile */
+ 	mlx5e_detach_netdev(priv);
+ 	priv->profile->cleanup(priv);
+@@ -5516,7 +5543,7 @@ static int mlx5e_probe(struct auxiliary_device *adev,
+ 	nch = mlx5e_get_max_num_channels(mdev);
+ 	txqs = nch * profile->max_tc + ptp_txqs + qos_sqs;
+ 	rxqs = nch * profile->rq_groups;
+-	netdev = mlx5e_create_netdev(mdev, txqs, rxqs);
++	netdev = mlx5e_create_netdev(mdev, profile, txqs, rxqs);
+ 	if (!netdev) {
+ 		mlx5_core_err(mdev, "mlx5e_create_netdev failed\n");
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index bf94bcb6fa5d2..bec1d344481cd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -561,7 +561,6 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
+ 					 MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
+ 					 MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
+ 
+-	priv->max_nch = mlx5e_calc_max_nch(priv, priv->profile);
+ 	params = &priv->channels.params;
+ 
+ 	params->num_channels = MLX5E_REP_PARAMS_DEF_NUM_CHANNELS;
+@@ -1151,7 +1150,7 @@ mlx5e_vport_vf_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
+ 	nch = mlx5e_get_max_num_channels(dev);
+ 	txqs = nch * profile->max_tc;
+ 	rxqs = nch * profile->rq_groups;
+-	netdev = mlx5e_create_netdev(dev, txqs, rxqs);
++	netdev = mlx5e_create_netdev(dev, profile, txqs, rxqs);
+ 	if (!netdev) {
+ 		mlx5_core_warn(dev,
+ 			       "Failed to create representor netdev for vport %d\n",
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 3c65fd0bcf31c..29a6586ef28dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1001,14 +1001,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 		goto csum_unnecessary;
+ 
+ 	if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) {
+-		u8 ipproto = get_ip_proto(skb, network_depth, proto);
+-
+-		if (unlikely(ipproto == IPPROTO_SCTP))
++		if (unlikely(get_ip_proto(skb, network_depth, proto) == IPPROTO_SCTP))
+ 			goto csum_unnecessary;
+ 
+-		if (unlikely(mlx5_ipsec_is_rx_flow(cqe)))
+-			goto csum_none;
+-
+ 		stats->csum_complete++;
+ 		skb->ip_summed = CHECKSUM_COMPLETE;
+ 		skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index e4f5b63951482..e1dd17019030e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -34,6 +34,7 @@
+ #include "en.h"
+ #include "en_accel/tls.h"
+ #include "en_accel/en_accel.h"
++#include "en/ptp.h"
+ 
+ static unsigned int stats_grps_num(struct mlx5e_priv *priv)
+ {
+@@ -450,7 +451,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
+ 
+ 	memset(s, 0, sizeof(*s));
+ 
+-	for (i = 0; i < priv->max_nch; i++) {
++	for (i = 0; i < priv->stats_nch; i++) {
+ 		struct mlx5e_channel_stats *channel_stats =
+ 			&priv->channel_stats[i];
+ 		int j;
+@@ -2076,7 +2077,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(ptp)
+ 	if (priv->rx_ptp_opened) {
+ 		for (i = 0; i < NUM_PTP_RQ_STATS; i++)
+ 			sprintf(data + (idx++) * ETH_GSTRING_LEN,
+-				ptp_rq_stats_desc[i].format);
++				ptp_rq_stats_desc[i].format, MLX5E_PTP_CHANNEL_IX);
+ 	}
+ 	return idx;
+ }
+@@ -2119,7 +2120,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ptp) { return; }
+ 
+ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
+ {
+-	int max_nch = priv->max_nch;
++	int max_nch = priv->stats_nch;
+ 
+ 	return (NUM_RQ_STATS * max_nch) +
+ 	       (NUM_CH_STATS * max_nch) +
+@@ -2133,7 +2134,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
+ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
+ {
+ 	bool is_xsk = priv->xsk.ever_used;
+-	int max_nch = priv->max_nch;
++	int max_nch = priv->stats_nch;
+ 	int i, j, tc;
+ 
+ 	for (i = 0; i < max_nch; i++)
+@@ -2175,7 +2176,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
+ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(channels)
+ {
+ 	bool is_xsk = priv->xsk.ever_used;
+-	int max_nch = priv->max_nch;
++	int max_nch = priv->stats_nch;
+ 	int i, j, tc;
+ 
+ 	for (i = 0; i < max_nch; i++)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+index 0399a396d1662..60a73990017c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+@@ -79,12 +79,16 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
+ 	int dest_num = 0;
+ 	int err = 0;
+ 
+-	if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
++	if (vport->egress.legacy.drop_counter) {
++		drop_counter = vport->egress.legacy.drop_counter;
++	} else if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+ 		drop_counter = mlx5_fc_create(esw->dev, false);
+-		if (IS_ERR(drop_counter))
++		if (IS_ERR(drop_counter)) {
+ 			esw_warn(esw->dev,
+ 				 "vport[%d] configure egress drop rule counter err(%ld)\n",
+ 				 vport->vport, PTR_ERR(drop_counter));
++			drop_counter = NULL;
++		}
+ 		vport->egress.legacy.drop_counter = drop_counter;
+ 	}
+ 
+@@ -123,7 +127,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
+ 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
+ 
+ 	/* Attach egress drop flow counter */
+-	if (!IS_ERR_OR_NULL(drop_counter)) {
++	if (drop_counter) {
+ 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+ 		drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ 		drop_ctr_dst.counter_id = mlx5_fc_id(drop_counter);
+@@ -162,7 +166,7 @@ void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw,
+ 	esw_acl_egress_table_destroy(vport);
+ 
+ clean_drop_counter:
+-	if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_counter)) {
++	if (vport->egress.legacy.drop_counter) {
+ 		mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+ 		vport->egress.legacy.drop_counter = NULL;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+index f75b86abaf1cd..b1a5199260f69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+@@ -160,7 +160,9 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
+ 
+ 	esw_acl_ingress_lgcy_rules_destroy(vport);
+ 
+-	if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
++	if (vport->ingress.legacy.drop_counter) {
++		counter = vport->ingress.legacy.drop_counter;
++	} else if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+ 		counter = mlx5_fc_create(esw->dev, false);
+ 		if (IS_ERR(counter)) {
+ 			esw_warn(esw->dev,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 620d638e1e8ff..1c9de6eddef86 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -113,7 +113,7 @@ static void mlx5i_grp_sw_update_stats(struct mlx5e_priv *priv)
+ 	struct mlx5e_sw_stats s = { 0 };
+ 	int i, j;
+ 
+-	for (i = 0; i < priv->max_nch; i++) {
++	for (i = 0; i < priv->stats_nch; i++) {
+ 		struct mlx5e_channel_stats *channel_stats;
+ 		struct mlx5e_rq_stats *rq_stats;
+ 
+@@ -729,7 +729,7 @@ static int mlx5_rdma_setup_rn(struct ib_device *ibdev, u32 port_num,
+ 			goto destroy_ht;
+ 	}
+ 
+-	err = mlx5e_priv_init(epriv, netdev, mdev);
++	err = mlx5e_priv_init(epriv, prof, netdev, mdev);
+ 	if (err)
+ 		goto destroy_mdev_resources;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index ce696d5234931..c009ccc88df49 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -448,22 +448,20 @@ static u64 find_target_cycles(struct mlx5_core_dev *mdev, s64 target_ns)
+ 	return cycles_now + cycles_delta;
+ }
+ 
+-static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev,
+-				      s64 sec, u32 nsec)
++static u64 perout_conf_internal_timer(struct mlx5_core_dev *mdev, s64 sec)
+ {
+-	struct timespec64 ts;
++	struct timespec64 ts = {};
+ 	s64 target_ns;
+ 
+ 	ts.tv_sec = sec;
+-	ts.tv_nsec = nsec;
+ 	target_ns = timespec64_to_ns(&ts);
+ 
+ 	return find_target_cycles(mdev, target_ns);
+ }
+ 
+-static u64 perout_conf_real_time(s64 sec, u32 nsec)
++static u64 perout_conf_real_time(s64 sec)
+ {
+-	return (u64)nsec | (u64)sec << 32;
++	return (u64)sec << 32;
+ }
+ 
+ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+@@ -474,6 +472,7 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 			container_of(ptp, struct mlx5_clock, ptp_info);
+ 	struct mlx5_core_dev *mdev =
+ 			container_of(clock, struct mlx5_core_dev, clock);
++	bool rt_mode = mlx5_real_time_mode(mdev);
+ 	u32 in[MLX5_ST_SZ_DW(mtpps_reg)] = {0};
+ 	struct timespec64 ts;
+ 	u32 field_select = 0;
+@@ -501,8 +500,10 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 
+ 	if (on) {
+ 		bool rt_mode = mlx5_real_time_mode(mdev);
+-		u32 nsec;
+-		s64 sec;
++		s64 sec = rq->perout.start.sec;
++
++		if (rq->perout.start.nsec)
++			return -EINVAL;
+ 
+ 		pin_mode = MLX5_PIN_MODE_OUT;
+ 		pattern = MLX5_OUT_PATTERN_PERIODIC;
+@@ -513,14 +514,11 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 		if ((ns >> 1) != 500000000LL)
+ 			return -EINVAL;
+ 
+-		nsec = rq->perout.start.nsec;
+-		sec = rq->perout.start.sec;
+-
+ 		if (rt_mode && sec > U32_MAX)
+ 			return -EINVAL;
+ 
+-		time_stamp = rt_mode ? perout_conf_real_time(sec, nsec) :
+-				       perout_conf_internal_timer(mdev, sec, nsec);
++		time_stamp = rt_mode ? perout_conf_real_time(sec) :
++				       perout_conf_internal_timer(mdev, sec);
+ 
+ 		field_select |= MLX5_MTPPS_FS_PIN_MODE |
+ 				MLX5_MTPPS_FS_PATTERN |
+@@ -538,6 +536,9 @@ static int mlx5_perout_configure(struct ptp_clock_info *ptp,
+ 	if (err)
+ 		return err;
+ 
++	if (rt_mode)
++		return 0;
++
+ 	return mlx5_set_mtppse(mdev, pin, 0,
+ 			       MLX5_EVENT_MODE_REPETETIVE & on);
+ }
+@@ -705,20 +706,14 @@ static void ts_next_sec(struct timespec64 *ts)
+ static u64 perout_conf_next_event_timer(struct mlx5_core_dev *mdev,
+ 					struct mlx5_clock *clock)
+ {
+-	bool rt_mode = mlx5_real_time_mode(mdev);
+ 	struct timespec64 ts;
+ 	s64 target_ns;
+ 
+-	if (rt_mode)
+-		ts = mlx5_ptp_gettimex_real_time(mdev, NULL);
+-	else
+-		mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
+-
++	mlx5_ptp_gettimex(&clock->ptp_info, &ts, NULL);
+ 	ts_next_sec(&ts);
+ 	target_ns = timespec64_to_ns(&ts);
+ 
+-	return rt_mode ? perout_conf_real_time(ts.tv_sec, ts.tv_nsec) :
+-			 find_target_cycles(mdev, target_ns);
++	return find_target_cycles(mdev, target_ns);
+ }
+ 
+ static int mlx5_pps_event(struct notifier_block *nb,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index 3465b363fc2fe..d9345c9ebbff1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -13,8 +13,8 @@
+ #endif
+ 
+ #define MLX5_MAX_IRQ_NAME (32)
+-/* max irq_index is 255. three chars */
+-#define MLX5_MAX_IRQ_IDX_CHARS (3)
++/* max irq_index is 2047, so four chars */
++#define MLX5_MAX_IRQ_IDX_CHARS (4)
+ 
+ #define MLX5_SFS_PER_CTRL_IRQ 64
+ #define MLX5_IRQ_CTRL_SF_MAX 8
+@@ -610,8 +610,9 @@ void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
+ int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table)
+ {
+ 	if (table->sf_comp_pool)
+-		return table->sf_comp_pool->xa_num_irqs.max -
+-			table->sf_comp_pool->xa_num_irqs.min + 1;
++		return min_t(int, num_online_cpus(),
++			     table->sf_comp_pool->xa_num_irqs.max -
++			     table->sf_comp_pool->xa_num_irqs.min + 1);
+ 	else
+ 		return mlx5_irq_table_get_num_comp(table);
+ }
+diff --git a/drivers/net/ethernet/mscc/ocelot_vcap.c b/drivers/net/ethernet/mscc/ocelot_vcap.c
+index 7945393a06557..99d7376a70a74 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vcap.c
++++ b/drivers/net/ethernet/mscc/ocelot_vcap.c
+@@ -998,8 +998,8 @@ ocelot_vcap_block_find_filter_by_index(struct ocelot_vcap_block *block,
+ }
+ 
+ struct ocelot_vcap_filter *
+-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int cookie,
+-				    bool tc_offload)
++ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
++				    unsigned long cookie, bool tc_offload)
+ {
+ 	struct ocelot_vcap_filter *filter;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index ed817011a94a0..6924a6aacbd53 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -21,6 +21,7 @@
+ #include <linux/delay.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
++#include <linux/pm_runtime.h>
+ 
+ #include "stmmac_platform.h"
+ 
+@@ -1528,6 +1529,8 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
+ 		return ret;
+ 	}
+ 
++	pm_runtime_get_sync(dev);
++
+ 	if (bsp_priv->integrated_phy)
+ 		rk_gmac_integrated_phy_powerup(bsp_priv);
+ 
+@@ -1539,6 +1542,8 @@ static void rk_gmac_powerdown(struct rk_priv_data *gmac)
+ 	if (gmac->integrated_phy)
+ 		rk_gmac_integrated_phy_powerdown(gmac);
+ 
++	pm_runtime_put_sync(&gmac->pdev->dev);
++
+ 	phy_power_on(gmac, false);
+ 	gmac_clk_enable(gmac, false);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 86151a817b79a..6b2a5e5769e89 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -477,6 +477,10 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
+ 			stmmac_lpi_entry_timer_config(priv, 0);
+ 			del_timer_sync(&priv->eee_ctrl_timer);
+ 			stmmac_set_eee_timer(priv, priv->hw, 0, eee_tw_timer);
++			if (priv->hw->xpcs)
++				xpcs_config_eee(priv->hw->xpcs,
++						priv->plat->mult_fact_100ns,
++						false);
+ 		}
+ 		mutex_unlock(&priv->lock);
+ 		return false;
+@@ -1038,7 +1042,7 @@ static void stmmac_mac_link_down(struct phylink_config *config,
+ 	stmmac_mac_set(priv, priv->ioaddr, false);
+ 	priv->eee_active = false;
+ 	priv->tx_lpi_enabled = false;
+-	stmmac_eee_init(priv);
++	priv->eee_enabled = stmmac_eee_init(priv);
+ 	stmmac_set_eee_pls(priv, priv->hw, false);
+ 
+ 	if (priv->dma_cap.fpesel)
+diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
+index 4bd61339823ce..d4ab03a92fb59 100644
+--- a/drivers/net/pcs/pcs-xpcs.c
++++ b/drivers/net/pcs/pcs-xpcs.c
+@@ -662,6 +662,10 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
+ {
+ 	int ret;
+ 
++	ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
++	if (ret < 0)
++		return ret;
++
+ 	if (enable) {
+ 	/* Enable EEE */
+ 		ret = DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
+@@ -669,9 +673,6 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
+ 		      DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
+ 		      mult_fact_100ns << DW_VR_MII_EEE_MULT_FACT_100NS_SHIFT;
+ 	} else {
+-		ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL0);
+-		if (ret < 0)
+-			return ret;
+ 		ret &= ~(DW_VR_MII_EEE_LTX_EN | DW_VR_MII_EEE_LRX_EN |
+ 		       DW_VR_MII_EEE_TX_QUIET_EN | DW_VR_MII_EEE_RX_QUIET_EN |
+ 		       DW_VR_MII_EEE_TX_EN_CTRL | DW_VR_MII_EEE_RX_EN_CTRL |
+@@ -686,21 +687,28 @@ int xpcs_config_eee(struct dw_xpcs *xpcs, int mult_fact_100ns, int enable)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret |= DW_VR_MII_EEE_TRN_LPI;
++	if (enable)
++		ret |= DW_VR_MII_EEE_TRN_LPI;
++	else
++		ret &= ~DW_VR_MII_EEE_TRN_LPI;
++
+ 	return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_EEE_MCTRL1, ret);
+ }
+ EXPORT_SYMBOL_GPL(xpcs_config_eee);
+ 
+ static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
+ {
+-	int ret;
++	int ret, mdio_ctrl;
+ 
+ 	/* For AN for C37 SGMII mode, the settings are :-
+-	 * 1) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
+-	 * 2) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
++	 * 1) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 0b (Disable SGMII AN in case
++	      it is already enabled)
++	 * 2) VR_MII_AN_CTRL Bit(2:1)[PCS_MODE] = 10b (SGMII AN)
++	 * 3) VR_MII_AN_CTRL Bit(3) [TX_CONFIG] = 0b (MAC side SGMII)
+ 	 *    DW xPCS used with DW EQoS MAC is always MAC side SGMII.
+-	 * 3) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
++	 * 4) VR_MII_DIG_CTRL1 Bit(9) [MAC_AUTO_SW] = 1b (Automatic
+ 	 *    speed/duplex mode change by HW after SGMII AN complete)
++	 * 5) VR_MII_MMD_CTRL Bit(12) [AN_ENABLE] = 1b (Enable SGMII AN)
+ 	 *
+ 	 * Note: Since it is MAC side SGMII, there is no need to set
+ 	 *	 SR_MII_AN_ADV. MAC side SGMII receives AN Tx Config from
+@@ -708,6 +716,17 @@ static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
+ 	 *	 between PHY and Link Partner. There is also no need to
+ 	 *	 trigger AN restart for MAC-side SGMII.
+ 	 */
++	mdio_ctrl = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL);
++	if (mdio_ctrl < 0)
++		return mdio_ctrl;
++
++	if (mdio_ctrl & AN_CL37_EN) {
++		ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
++				 mdio_ctrl & ~AN_CL37_EN);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_AN_CTRL);
+ 	if (ret < 0)
+ 		return ret;
+@@ -732,7 +751,15 @@ static int xpcs_config_aneg_c37_sgmii(struct dw_xpcs *xpcs, unsigned int mode)
+ 	else
+ 		ret &= ~DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW;
+ 
+-	return xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
++	ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_DIG_CTRL1, ret);
++	if (ret < 0)
++		return ret;
++
++	if (phylink_autoneg_inband(mode))
++		ret = xpcs_write(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_CTRL,
++				 mdio_ctrl | AN_CL37_EN);
++
++	return ret;
+ }
+ 
+ static int xpcs_config_2500basex(struct dw_xpcs *xpcs)
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index ee8313a4ac713..6865d9319197f 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -538,6 +538,13 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	bus->dev.groups = NULL;
+ 	dev_set_name(&bus->dev, "%s", bus->id);
+ 
++	/* We need to set state to MDIOBUS_UNREGISTERED to correctly release
++	 * the device in mdiobus_free()
++	 *
++	 * State will be updated later in this function in case of success
++	 */
++	bus->state = MDIOBUS_UNREGISTERED;
++
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 34e90216bd2cb..ab77a9f439ef9 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -134,7 +134,7 @@ static const char * const sm_state_strings[] = {
+ 	[SFP_S_LINK_UP] = "link_up",
+ 	[SFP_S_TX_FAULT] = "tx_fault",
+ 	[SFP_S_REINIT] = "reinit",
+-	[SFP_S_TX_DISABLE] = "rx_disable",
++	[SFP_S_TX_DISABLE] = "tx_disable",
+ };
+ 
+ static const char *sm_state_to_str(unsigned short sm_state)
+diff --git a/drivers/net/wireless/ath/ath5k/Kconfig b/drivers/net/wireless/ath/ath5k/Kconfig
+index f35cd8de228e4..6914b37bb0fbc 100644
+--- a/drivers/net/wireless/ath/ath5k/Kconfig
++++ b/drivers/net/wireless/ath/ath5k/Kconfig
+@@ -3,9 +3,7 @@ config ATH5K
+ 	tristate "Atheros 5xxx wireless cards support"
+ 	depends on (PCI || ATH25) && MAC80211
+ 	select ATH_COMMON
+-	select MAC80211_LEDS
+-	select LEDS_CLASS
+-	select NEW_LEDS
++	select MAC80211_LEDS if LEDS_CLASS=y || LEDS_CLASS=MAC80211
+ 	select ATH5K_AHB if ATH25
+ 	select ATH5K_PCI if !ATH25
+ 	help
+diff --git a/drivers/net/wireless/ath/ath5k/led.c b/drivers/net/wireless/ath/ath5k/led.c
+index 6a2a168567630..33e9928af3635 100644
+--- a/drivers/net/wireless/ath/ath5k/led.c
++++ b/drivers/net/wireless/ath/ath5k/led.c
+@@ -89,7 +89,8 @@ static const struct pci_device_id ath5k_led_devices[] = {
+ 
+ void ath5k_led_enable(struct ath5k_hw *ah)
+ {
+-	if (test_bit(ATH_STAT_LEDSOFT, ah->status)) {
++	if (IS_ENABLED(CONFIG_MAC80211_LEDS) &&
++	    test_bit(ATH_STAT_LEDSOFT, ah->status)) {
+ 		ath5k_hw_set_gpio_output(ah, ah->led_pin);
+ 		ath5k_led_off(ah);
+ 	}
+@@ -104,7 +105,8 @@ static void ath5k_led_on(struct ath5k_hw *ah)
+ 
+ void ath5k_led_off(struct ath5k_hw *ah)
+ {
+-	if (!test_bit(ATH_STAT_LEDSOFT, ah->status))
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) ||
++	    !test_bit(ATH_STAT_LEDSOFT, ah->status))
+ 		return;
+ 	ath5k_hw_set_gpio(ah, ah->led_pin, !ah->led_on);
+ }
+@@ -146,7 +148,7 @@ ath5k_register_led(struct ath5k_hw *ah, struct ath5k_led *led,
+ static void
+ ath5k_unregister_led(struct ath5k_led *led)
+ {
+-	if (!led->ah)
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !led->ah)
+ 		return;
+ 	led_classdev_unregister(&led->led_dev);
+ 	ath5k_led_off(led->ah);
+@@ -169,7 +171,7 @@ int ath5k_init_leds(struct ath5k_hw *ah)
+ 	char name[ATH5K_LED_MAX_NAME_LEN + 1];
+ 	const struct pci_device_id *match;
+ 
+-	if (!ah->pdev)
++	if (!IS_ENABLED(CONFIG_MAC80211_LEDS) || !ah->pdev)
+ 		return 0;
+ 
+ #ifdef CONFIG_ATH5K_AHB
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 24b658a3098aa..3ae727bc4e944 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -652,12 +652,13 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
+ 					u32 *uid)
+ {
+ 	u32 id;
+-	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
++	struct iwl_mvm_vif *mvmvif;
+ 	enum nl80211_iftype iftype;
+ 
+ 	if (!te_data->vif)
+ 		return false;
+ 
++	mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
+ 	iftype = te_data->vif->type;
+ 
+ 	/*
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 6f49950a5f6d1..be3ad42813532 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -547,6 +547,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0x43F0, 0x0074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x0078, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x007C, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650s_name),
++	IWL_DEV_INFO(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, iwl_ax201_killer_1650i_name),
+ 	IWL_DEV_INFO(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr, NULL),
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 48e941f99558e..073ea7cd007bb 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -36,6 +36,7 @@ LIST_HEAD(aliases_lookup);
+ struct device_node *of_root;
+ EXPORT_SYMBOL(of_root);
+ struct device_node *of_chosen;
++EXPORT_SYMBOL(of_chosen);
+ struct device_node *of_aliases;
+ struct device_node *of_stdout;
+ static const char *of_stdout_options;
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index a53bd8728d0d3..fc1a29acadbbf 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3229,9 +3229,17 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 		return 0;
+ 
+ 	if (!keep_devs) {
+-		/* Delete any children which might still exist. */
++		struct list_head removed;
++
++		/* Move all present children to the list on stack */
++		INIT_LIST_HEAD(&removed);
+ 		spin_lock_irqsave(&hbus->device_list_lock, flags);
+-		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
++		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry)
++			list_move_tail(&hpdev->list_entry, &removed);
++		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
++
++		/* Remove all children in the list */
++		list_for_each_entry_safe(hpdev, tmp, &removed, list_entry) {
+ 			list_del(&hpdev->list_entry);
+ 			if (hpdev->pci_slot)
+ 				pci_destroy_slot(hpdev->pci_slot);
+@@ -3239,7 +3247,6 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 			put_pcichild(hpdev);
+ 			put_pcichild(hpdev);
+ 		}
+-		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ 	}
+ 
+ 	ret = hv_send_resources_released(hdev);
+diff --git a/drivers/ptp/ptp_pch.c b/drivers/ptp/ptp_pch.c
+index a17e8cc642c5f..8070f3fd98f01 100644
+--- a/drivers/ptp/ptp_pch.c
++++ b/drivers/ptp/ptp_pch.c
+@@ -644,6 +644,7 @@ static const struct pci_device_id pch_ieee1588_pcidev_id[] = {
+ 	 },
+ 	{0}
+ };
++MODULE_DEVICE_TABLE(pci, pch_ieee1588_pcidev_id);
+ 
+ static SIMPLE_DEV_PM_OPS(pch_pm_ops, pch_suspend, pch_resume);
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 4683c183e9d41..5bc91d34df634 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -2281,11 +2281,6 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		return FAILED;
+ 	}
+ 
+-	conn = session->leadconn;
+-	iscsi_get_conn(conn->cls_conn);
+-	conn->eh_abort_cnt++;
+-	age = session->age;
+-
+ 	spin_lock(&session->back_lock);
+ 	task = (struct iscsi_task *)sc->SCp.ptr;
+ 	if (!task || !task->sc) {
+@@ -2293,8 +2288,16 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		ISCSI_DBG_EH(session, "sc completed while abort in progress\n");
+ 
+ 		spin_unlock(&session->back_lock);
+-		goto success;
++		spin_unlock_bh(&session->frwd_lock);
++		mutex_unlock(&session->eh_mutex);
++		return SUCCESS;
+ 	}
++
++	conn = session->leadconn;
++	iscsi_get_conn(conn->cls_conn);
++	conn->eh_abort_cnt++;
++	age = session->age;
++
+ 	ISCSI_DBG_EH(session, "aborting [sc %p itt 0x%x]\n", sc, task->itt);
+ 	__iscsi_get_task(task);
+ 	spin_unlock(&session->back_lock);
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index a3f5af088122e..b073b514dcc48 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -6365,27 +6365,6 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status)
+ 	return retval;
+ }
+ 
+-struct ctm_info {
+-	struct ufs_hba	*hba;
+-	unsigned long	pending;
+-	unsigned int	ncpl;
+-};
+-
+-static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
+-{
+-	struct ctm_info *const ci = priv;
+-	struct completion *c;
+-
+-	WARN_ON_ONCE(reserved);
+-	if (test_bit(req->tag, &ci->pending))
+-		return true;
+-	ci->ncpl++;
+-	c = req->end_io_data;
+-	if (c)
+-		complete(c);
+-	return true;
+-}
+-
+ /**
+  * ufshcd_tmc_handler - handle task management function completion
+  * @hba: per adapter instance
+@@ -6396,18 +6375,24 @@ static bool ufshcd_compl_tm(struct request *req, void *priv, bool reserved)
+  */
+ static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
+ {
+-	unsigned long flags;
+-	struct request_queue *q = hba->tmf_queue;
+-	struct ctm_info ci = {
+-		.hba	 = hba,
+-	};
++	unsigned long flags, pending, issued;
++	irqreturn_t ret = IRQ_NONE;
++	int tag;
++
++	pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+-	ci.pending = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
+-	blk_mq_tagset_busy_iter(q->tag_set, ufshcd_compl_tm, &ci);
++	issued = hba->outstanding_tasks & ~pending;
++	for_each_set_bit(tag, &issued, hba->nutmrs) {
++		struct request *req = hba->tmf_rqs[tag];
++		struct completion *c = req->end_io_data;
++
++		complete(c);
++		ret = IRQ_HANDLED;
++	}
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 
+-	return ci.ncpl ? IRQ_HANDLED : IRQ_NONE;
++	return ret;
+ }
+ 
+ /**
+@@ -6530,9 +6515,9 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	ufshcd_hold(hba, false);
+ 
+ 	spin_lock_irqsave(host->host_lock, flags);
+-	blk_mq_start_request(req);
+ 
+ 	task_tag = req->tag;
++	hba->tmf_rqs[req->tag] = req;
+ 	treq->upiu_req.req_header.dword_0 |= cpu_to_be32(task_tag);
+ 
+ 	memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq));
+@@ -6576,6 +6561,7 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
+ 	}
+ 
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
++	hba->tmf_rqs[req->tag] = NULL;
+ 	__clear_bit(task_tag, &hba->outstanding_tasks);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 
+@@ -9568,6 +9554,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ 		err = PTR_ERR(hba->tmf_queue);
+ 		goto free_tmf_tag_set;
+ 	}
++	hba->tmf_rqs = devm_kcalloc(hba->dev, hba->nutmrs,
++				    sizeof(*hba->tmf_rqs), GFP_KERNEL);
++	if (!hba->tmf_rqs) {
++		err = -ENOMEM;
++		goto free_tmf_queue;
++	}
+ 
+ 	/* Reset the attached device */
+ 	ufshcd_device_reset(hba);
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index aa95deffb873a..6069e37ec983c 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -780,6 +780,7 @@ struct ufs_hba {
+ 
+ 	struct blk_mq_tag_set tmf_tag_set;
+ 	struct request_queue *tmf_queue;
++	struct request **tmf_rqs;
+ 
+ 	struct uic_command *active_uic_cmd;
+ 	struct mutex uic_cmd_mutex;
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index eba7f76f9d61a..6034cd8992b0e 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -98,7 +98,7 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len)
+ 	if (ehdr->e_phnum < 2)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (phdrs[0].p_type == PT_LOAD || phdrs[1].p_type == PT_LOAD)
++	if (phdrs[0].p_type == PT_LOAD)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if ((phdrs[1].p_flags & QCOM_MDT_TYPE_MASK) != QCOM_MDT_TYPE_HASH)
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index b2f049faa3dfa..a6cffd57d3c7b 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -628,7 +628,7 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ 	/* Feed the soc specific unique data into entropy pool */
+ 	add_device_randomness(info, item_size);
+ 
+-	platform_set_drvdata(pdev, qs->soc_dev);
++	platform_set_drvdata(pdev, qs);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
+index ea64e187854eb..f32e1cbbe8c52 100644
+--- a/drivers/soc/ti/omap_prm.c
++++ b/drivers/soc/ti/omap_prm.c
+@@ -825,25 +825,28 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+ 	writel_relaxed(v, reset->prm->base + reset->prm->data->rstctrl);
+ 	spin_unlock_irqrestore(&reset->lock, flags);
+ 
+-	if (!has_rstst)
+-		goto exit;
++	/* wait for the reset bit to clear */
++	ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
++						reset->prm->data->rstctrl,
++						v, !(v & BIT(id)), 1,
++						OMAP_RESET_MAX_WAIT);
++	if (ret)
++		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
++		       reset->prm->data->name, id);
+ 
+ 	/* wait for the status to be set */
+-	ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
++	if (has_rstst) {
++		ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+ 						 reset->prm->data->rstst,
+ 						 v, v & BIT(st_bit), 1,
+ 						 OMAP_RESET_MAX_WAIT);
+-	if (ret)
+-		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
+-		       reset->prm->data->name, id);
++		if (ret)
++			pr_err("%s: timedout waiting for %s:%lu\n", __func__,
++			       reset->prm->data->name, id);
++	}
+ 
+-exit:
+-	if (reset->clkdm) {
+-		/* At least dra7 iva needs a delay before clkdm idle */
+-		if (has_rstst)
+-			udelay(1);
++	if (reset->clkdm)
+ 		pdata->clkdm_allow_idle(reset->clkdm);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 8b7bc10b6e8b4..f1d100671ee6a 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -420,11 +420,16 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ 	data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
+ 	if (IS_ERR(data->phy)) {
+ 		ret = PTR_ERR(data->phy);
+-		/* Return -EINVAL if no usbphy is available */
+-		if (ret == -ENODEV)
+-			data->phy = NULL;
+-		else
+-			goto err_clk;
++		if (ret == -ENODEV) {
++			data->phy = devm_usb_get_phy_by_phandle(dev, "phys", 0);
++			if (IS_ERR(data->phy)) {
++				ret = PTR_ERR(data->phy);
++				if (ret == -ENODEV)
++					data->phy = NULL;
++				else
++					goto err_clk;
++			}
++		}
+ 	}
+ 
+ 	pdata.usb_phy = data->phy;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 5b90d0979c607..8913c58c306ef 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -340,6 +340,9 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 			acm->iocount.overrun++;
+ 		spin_unlock_irqrestore(&acm->read_lock, flags);
+ 
++		if (newctrl & ACM_CTRL_BRK)
++			tty_flip_buffer_push(&acm->port);
++
+ 		if (difference)
+ 			wake_up_all(&acm->wioctl);
+ 
+@@ -475,11 +478,16 @@ static int acm_submit_read_urbs(struct acm *acm, gfp_t mem_flags)
+ 
+ static void acm_process_read_urb(struct acm *acm, struct urb *urb)
+ {
++	unsigned long flags;
++
+ 	if (!urb->actual_length)
+ 		return;
+ 
++	spin_lock_irqsave(&acm->read_lock, flags);
+ 	tty_insert_flip_string(&acm->port, urb->transfer_buffer,
+ 			urb->actual_length);
++	spin_unlock_irqrestore(&acm->read_lock, flags);
++
+ 	tty_flip_buffer_push(&acm->port);
+ }
+ 
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 35d5908b5478a..fdf79bcf7eb09 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -824,7 +824,7 @@ static struct usb_class_driver wdm_class = {
+ };
+ 
+ /* --- WWAN framework integration --- */
+-#ifdef CONFIG_WWAN_CORE
++#ifdef CONFIG_WWAN
+ static int wdm_wwan_port_start(struct wwan_port *port)
+ {
+ 	struct wdm_device *desc = wwan_port_get_drvdata(port);
+@@ -963,11 +963,11 @@ static void wdm_wwan_rx(struct wdm_device *desc, int length)
+ 	/* inbuf has been copied, it is safe to check for outstanding data */
+ 	schedule_work(&desc->service_outs_intr);
+ }
+-#else /* CONFIG_WWAN_CORE */
++#else /* CONFIG_WWAN */
+ static void wdm_wwan_init(struct wdm_device *desc) {}
+ static void wdm_wwan_deinit(struct wdm_device *desc) {}
+ static void wdm_wwan_rx(struct wdm_device *desc, int length) {}
+-#endif /* CONFIG_WWAN_CORE */
++#endif /* CONFIG_WWAN */
+ 
+ /* --- error handling --- */
+ static void wdm_rxwork(struct work_struct *work)
+diff --git a/drivers/usb/common/Kconfig b/drivers/usb/common/Kconfig
+index 5e8a04e3dd3c8..b856622431a73 100644
+--- a/drivers/usb/common/Kconfig
++++ b/drivers/usb/common/Kconfig
+@@ -6,8 +6,7 @@ config USB_COMMON
+ 
+ config USB_LED_TRIG
+ 	bool "USB LED Triggers"
+-	depends on LEDS_CLASS && LEDS_TRIGGERS
+-	select USB_COMMON
++	depends on LEDS_CLASS && USB_COMMON && LEDS_TRIGGERS
+ 	help
+ 	  This option adds LED triggers for USB host and/or gadget activity.
+ 
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index 37c94031af1ed..53cb6b2637a09 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -593,11 +593,17 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
+ 		ssize = uac2_opts->c_ssize;
+ 	}
+ 
+-	if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC))
++	if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC)) {
++	  // Win10 requires max packet size + 1 frame
+ 		srate = srate * (1000 + uac2_opts->fb_max) / 1000;
+-
+-	max_size_bw = num_channels(chmask) * ssize *
+-		DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
++		// updated srate is always bigger, therefore DIV_ROUND_UP always yields +1
++		max_size_bw = num_channels(chmask) * ssize *
++			(DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))));
++	} else {
++		// adding 1 frame provision for Win10
++		max_size_bw = num_channels(chmask) * ssize *
++			(DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))) + 1);
++	}
+ 	ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
+ 						    max_size_ep));
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 9858716698dfe..c15eec9cc460a 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -696,7 +696,7 @@ irqreturn_t tcpci_irq(struct tcpci *tcpci)
+ 		tcpm_pd_receive(tcpci->port, &msg);
+ 	}
+ 
+-	if (status & TCPC_ALERT_EXTENDED_STATUS) {
++	if (tcpci->data->vbus_vsafe0v && (status & TCPC_ALERT_EXTENDED_STATUS)) {
+ 		ret = regmap_read(tcpci->regmap, TCPC_EXTENDED_STATUS, &raw);
+ 		if (!ret && (raw & TCPC_EXTENDED_STATUS_VSAFE0V))
+ 			tcpm_vbus_change(tcpci->port);
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 5d05de6665974..686b9245d6d67 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4846,6 +4846,7 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 			tcpm_set_state(port, SRC_ATTACH_WAIT, 0);
+ 		break;
+ 	case SRC_ATTACHED:
++	case SRC_STARTUP:
+ 	case SRC_SEND_CAPABILITIES:
+ 	case SRC_READY:
+ 		if (tcpm_port_is_disconnected(port) ||
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 21b3ae25c76d2..ea4cc0a6e40cc 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -625,10 +625,6 @@ static int tps6598x_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	fwnode = device_get_named_child_node(&client->dev, "connector");
+-	if (!fwnode)
+-		return -ENODEV;
+-
+ 	/*
+ 	 * This fwnode has a "compatible" property, but is never populated as a
+ 	 * struct device. Instead we simply parse it to read the properties.
+@@ -636,7 +632,9 @@ static int tps6598x_probe(struct i2c_client *client)
+ 	 * with existing DT files, we work around this by deleting any
+ 	 * fwnode_links to/from this fwnode.
+ 	 */
+-	fw_devlink_purge_absent_suppliers(fwnode);
++	fwnode = device_get_named_child_node(&client->dev, "connector");
++	if (fwnode)
++		fw_devlink_purge_absent_suppliers(fwnode);
+ 
+ 	tps->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 	if (IS_ERR(tps->role_sw)) {
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index d33c5cd684c0b..f71668367caf1 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2191,8 +2191,9 @@ config FB_HYPERV
+ 	  This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
+ 
+ config FB_SIMPLE
+-	bool "Simple framebuffer support"
+-	depends on (FB = y) && !DRM_SIMPLEDRM
++	tristate "Simple framebuffer support"
++	depends on FB
++	depends on !DRM_SIMPLEDRM
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
+index c5b99a4861e87..6b4d5a7f3e152 100644
+--- a/drivers/video/fbdev/gbefb.c
++++ b/drivers/video/fbdev/gbefb.c
+@@ -1267,7 +1267,7 @@ static struct platform_device *gbefb_device;
+ static int __init gbefb_init(void)
+ {
+ 	int ret = platform_driver_register(&gbefb_driver);
+-	if (!ret) {
++	if (IS_ENABLED(CONFIG_SGI_IP32) && !ret) {
+ 		gbefb_device = platform_device_alloc("gbefb", 0);
+ 		if (gbefb_device) {
+ 			ret = platform_device_add(gbefb_device);
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 43ebfe36ac276..3a50f097ed3ed 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -491,12 +491,12 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+ }
+ 
+ /*
+- * Stop waiting if either state is not BP_EAGAIN and ballooning action is
+- * needed, or if the credit has changed while state is BP_EAGAIN.
++ * Stop waiting if either state is BP_DONE and ballooning action is
++ * needed, or if the credit has changed while state is not BP_DONE.
+  */
+ static bool balloon_thread_cond(enum bp_state state, long credit)
+ {
+-	if (state != BP_EAGAIN)
++	if (state == BP_DONE)
+ 		credit = 0;
+ 
+ 	return current_credit() != credit || kthread_should_stop();
+@@ -516,10 +516,19 @@ static int balloon_thread(void *unused)
+ 
+ 	set_freezable();
+ 	for (;;) {
+-		if (state == BP_EAGAIN)
+-			timeout = balloon_stats.schedule_delay * HZ;
+-		else
++		switch (state) {
++		case BP_DONE:
++		case BP_ECANCELED:
+ 			timeout = 3600 * HZ;
++			break;
++		case BP_EAGAIN:
++			timeout = balloon_stats.schedule_delay * HZ;
++			break;
++		case BP_WAIT:
++			timeout = HZ;
++			break;
++		}
++
+ 		credit = current_credit();
+ 
+ 		wait_event_freezable_timeout(balloon_thread_wq,
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 720a7b7abd46d..fe8df32bb612b 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -803,11 +803,12 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 		unsigned int domid =
+ 			(xdata.flags & XENMEM_rsrc_acq_caller_owned) ?
+ 			DOMID_SELF : kdata.dom;
+-		int num;
++		int num, *errs = (int *)pfns;
+ 
++		BUILD_BUG_ON(sizeof(*errs) > sizeof(*pfns));
+ 		num = xen_remap_domain_mfn_array(vma,
+ 						 kdata.addr & PAGE_MASK,
+-						 pfns, kdata.num, (int *)pfns,
++						 pfns, kdata.num, errs,
+ 						 vma->vm_page_prot,
+ 						 domid,
+ 						 vma->vm_private_data);
+@@ -817,7 +818,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 			unsigned int i;
+ 
+ 			for (i = 0; i < num; i++) {
+-				rc = pfns[i];
++				rc = errs[i];
+ 				if (rc < 0)
+ 					break;
+ 			}
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 2dfe3b3a53d69..f24370f5c7744 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -974,8 +974,7 @@ int afs_launder_page(struct page *page)
+ 		iov_iter_bvec(&iter, WRITE, bv, 1, bv[0].bv_len);
+ 
+ 		trace_afs_page_dirty(vnode, tracepoint_string("launder"), page);
+-		ret = afs_store_data(vnode, &iter, (loff_t)page->index * PAGE_SIZE,
+-				     true);
++		ret = afs_store_data(vnode, &iter, page_offset(page) + f, true);
+ 	}
+ 
+ 	trace_afs_page_dirty(vnode, tracepoint_string("laundered"), page);
+diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
+index 0b6cd3b8734c6..994ec22d40402 100644
+--- a/fs/netfs/read_helper.c
++++ b/fs/netfs/read_helper.c
+@@ -150,7 +150,7 @@ static void netfs_clear_unread(struct netfs_read_subrequest *subreq)
+ {
+ 	struct iov_iter iter;
+ 
+-	iov_iter_xarray(&iter, WRITE, &subreq->rreq->mapping->i_pages,
++	iov_iter_xarray(&iter, READ, &subreq->rreq->mapping->i_pages,
+ 			subreq->start + subreq->transferred,
+ 			subreq->len   - subreq->transferred);
+ 	iov_iter_zero(iov_iter_count(&iter), &iter);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 7abeccb975b22..cf030ebe28275 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3544,15 +3544,18 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ 		goto fail;
+ 	cd->rd_maxcount -= entry_bytes;
+ 	/*
+-	 * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so
+-	 * let's always let through the first entry, at least:
++	 * RFC 3530 14.2.24 describes rd_dircount as only a "hint", and
++	 * notes that it could be zero. If it is zero, then the server
++	 * should enforce only the rd_maxcount value.
+ 	 */
+-	if (!cd->rd_dircount)
+-		goto fail;
+-	name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
+-	if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
+-		goto fail;
+-	cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
++	if (cd->rd_dircount) {
++		name_and_cookie = 4 + 4 * XDR_QUADLEN(namlen) + 8;
++		if (name_and_cookie > cd->rd_dircount && cd->cookie_offset)
++			goto fail;
++		cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie);
++		if (!cd->rd_dircount)
++			cd->rd_maxcount = 0;
++	}
+ 
+ 	cd->cookie_offset = cookie_offset;
+ skip_entry:
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index c2c3d9077dc58..09ae1a0873d05 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1545,7 +1545,7 @@ static int __init init_nfsd(void)
+ 		goto out_free_all;
+ 	return 0;
+ out_free_all:
+-	unregister_pernet_subsys(&nfsd_net_ops);
++	unregister_filesystem(&nfsd_fs_type);
+ out_free_exports:
+ 	remove_proc_entry("fs/nfs/exports", NULL);
+ 	remove_proc_entry("fs/nfs", NULL);
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 7c1850adec288..a4ad25f1f797f 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -1215,9 +1215,13 @@ static int ovl_rename(struct user_namespace *mnt_userns, struct inode *olddir,
+ 				goto out_dput;
+ 		}
+ 	} else {
+-		if (!d_is_negative(newdentry) &&
+-		    (!new_opaque || !ovl_is_whiteout(newdentry)))
+-			goto out_dput;
++		if (!d_is_negative(newdentry)) {
++			if (!new_opaque || !ovl_is_whiteout(newdentry))
++				goto out_dput;
++		} else {
++			if (flags & RENAME_EXCHANGE)
++				goto out_dput;
++		}
+ 	}
+ 
+ 	if (olddentry == trap)
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index d081faa55e830..c88ac571593dc 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -296,6 +296,12 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = -EINVAL;
++	if (iocb->ki_flags & IOCB_DIRECT &&
++	    (!real.file->f_mapping->a_ops ||
++	     !real.file->f_mapping->a_ops->direct_IO))
++		goto out_fdput;
++
+ 	old_cred = ovl_override_creds(file_inode(file)->i_sb);
+ 	if (is_sync_kiocb(iocb)) {
+ 		ret = vfs_iter_read(real.file, iter, &iocb->ki_pos,
+@@ -320,7 +326,7 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ out:
+ 	revert_creds(old_cred);
+ 	ovl_file_accessed(file);
+-
++out_fdput:
+ 	fdput(real);
+ 
+ 	return ret;
+@@ -349,6 +355,12 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (ret)
+ 		goto out_unlock;
+ 
++	ret = -EINVAL;
++	if (iocb->ki_flags & IOCB_DIRECT &&
++	    (!real.file->f_mapping->a_ops ||
++	     !real.file->f_mapping->a_ops->direct_IO))
++		goto out_fdput;
++
+ 	if (!ovl_should_sync(OVL_FS(inode->i_sb)))
+ 		ifl &= ~(IOCB_DSYNC | IOCB_SYNC);
+ 
+@@ -384,6 +396,7 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	}
+ out:
+ 	revert_creds(old_cred);
++out_fdput:
+ 	fdput(real);
+ 
+ out_unlock:
+diff --git a/include/net/netfilter/ipv6/nf_defrag_ipv6.h b/include/net/netfilter/ipv6/nf_defrag_ipv6.h
+index 0fd8a4159662d..ceadf8ba25a44 100644
+--- a/include/net/netfilter/ipv6/nf_defrag_ipv6.h
++++ b/include/net/netfilter/ipv6/nf_defrag_ipv6.h
+@@ -17,7 +17,6 @@ struct inet_frags_ctl;
+ struct nft_ct_frag6_pernet {
+ 	struct ctl_table_header *nf_frag_frags_hdr;
+ 	struct fqdir	*fqdir;
+-	unsigned int users;
+ };
+ 
+ #endif /* _NF_DEFRAG_IPV6_H */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 148f5d8ee5ab3..a16171c5fd9eb 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1202,7 +1202,7 @@ struct nft_object *nft_obj_lookup(const struct net *net,
+ 
+ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		    struct nft_object *obj, u32 portid, u32 seq,
+-		    int event, int family, int report, gfp_t gfp);
++		    int event, u16 flags, int family, int report, gfp_t gfp);
+ 
+ /**
+  *	struct nft_object_type - stateful object type
+diff --git a/include/net/netns/netfilter.h b/include/net/netns/netfilter.h
+index 15e2b13fb0c0f..71343b969cd31 100644
+--- a/include/net/netns/netfilter.h
++++ b/include/net/netns/netfilter.h
+@@ -28,5 +28,11 @@ struct netns_nf {
+ #if IS_ENABLED(CONFIG_DECNET)
+ 	struct nf_hook_entries __rcu *hooks_decnet[NF_DN_NUMHOOKS];
+ #endif
++#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
++	unsigned int defrag_ipv4_users;
++#endif
++#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
++	unsigned int defrag_ipv6_users;
++#endif
+ };
+ #endif
+diff --git a/include/soc/mscc/ocelot_vcap.h b/include/soc/mscc/ocelot_vcap.h
+index 25fd525aaf928..4869ebbd438d9 100644
+--- a/include/soc/mscc/ocelot_vcap.h
++++ b/include/soc/mscc/ocelot_vcap.h
+@@ -694,7 +694,7 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
+ int ocelot_vcap_filter_del(struct ocelot *ocelot,
+ 			   struct ocelot_vcap_filter *rule);
+ struct ocelot_vcap_filter *
+-ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id,
+-				    bool tc_offload);
++ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block,
++				    unsigned long cookie, bool tc_offload);
+ 
+ #endif /* _OCELOT_VCAP_H_ */
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 6fbc2abe9c916..2553caf4f74a3 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -63,7 +63,8 @@ static inline int stack_map_data_size(struct bpf_map *map)
+ 
+ static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
+ {
+-	u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size;
++	u64 elem_size = sizeof(struct stack_map_bucket) +
++			(u64)smap->map.value_size;
+ 	int err;
+ 
+ 	smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
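+ 
+ /* A standalone sketch (not upstream code) of why the widening above
+  * matters: with two u32 operands the multiplication happens in 32 bits
+  * and can wrap before the result is ever stored, while promoting one
+  * operand to u64 makes the multiplication itself 64-bit.
+  */
+ #include <stdio.h>
+ #include <stdint.h>
+ 
+ int main(void)
+ {
+ 	uint32_t value_size  = 1U << 20;  /* 1 MiB per element */
+ 	uint32_t max_entries = 1U << 13;  /* 8192 elements     */
+ 
+ 	uint64_t wrapped = (uint64_t)(value_size * max_entries); /* 32-bit multiply wraps to 0 */
+ 	uint64_t correct = (uint64_t)value_size * max_entries;   /* 64-bit multiply: 2^33 */
+ 
+ 	printf("wrapped=%llu correct=%llu\n",
+ 	       (unsigned long long)wrapped, (unsigned long long)correct);
+ 	return 0;
+ }
+ 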
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index 8642e56059fb2..2abfbd4b8a15e 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -1657,7 +1657,8 @@ static size_t br_get_linkxstats_size(const struct net_device *dev, int attr)
+ 	}
+ 
+ 	return numvls * nla_total_size(sizeof(struct bridge_vlan_xstats)) +
+-	       nla_total_size(sizeof(struct br_mcast_stats)) +
++	       nla_total_size_64bit(sizeof(struct br_mcast_stats)) +
++	       (p ? nla_total_size_64bit(sizeof(p->stp_xstats)) : 0) +
+ 	       nla_total_size(0);
+ }
+ 
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 662eb1c37f47b..10e2a0e4804b4 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -5265,7 +5265,7 @@ nla_put_failure:
+ static size_t if_nlmsg_stats_size(const struct net_device *dev,
+ 				  u32 filter_mask)
+ {
+-	size_t size = 0;
++	size_t size = NLMSG_ALIGN(sizeof(struct if_stats_msg));
+ 
+ 	if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, 0))
+ 		size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
+diff --git a/net/dsa/tag_dsa.c b/net/dsa/tag_dsa.c
+index a822355afc903..26c6768b6b0c5 100644
+--- a/net/dsa/tag_dsa.c
++++ b/net/dsa/tag_dsa.c
+@@ -176,7 +176,7 @@ static struct sk_buff *dsa_rcv_ll(struct sk_buff *skb, struct net_device *dev,
+ 	case DSA_CMD_FORWARD:
+ 		skb->offload_fwd_mark = 1;
+ 
+-		trunk = !!(dsa_header[1] & 7);
++		trunk = !!(dsa_header[1] & 4);
+ 		break;
+ 
+ 	case DSA_CMD_TO_CPU:
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 80aeaf9e6e16e..bfb522e513461 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -242,8 +242,10 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 
+ 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+ 			return -1;
++		score =  sk->sk_bound_dev_if ? 2 : 1;
+ 
+-		score = sk->sk_family == PF_INET ? 2 : 1;
++		if (sk->sk_family == PF_INET)
++			score++;
+ 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 			score++;
+ 	}
+diff --git a/net/ipv4/netfilter/nf_defrag_ipv4.c b/net/ipv4/netfilter/nf_defrag_ipv4.c
+index 613432a36f0a7..e61ea428ea187 100644
+--- a/net/ipv4/netfilter/nf_defrag_ipv4.c
++++ b/net/ipv4/netfilter/nf_defrag_ipv4.c
+@@ -20,13 +20,8 @@
+ #endif
+ #include <net/netfilter/nf_conntrack_zones.h>
+ 
+-static unsigned int defrag4_pernet_id __read_mostly;
+ static DEFINE_MUTEX(defrag4_mutex);
+ 
+-struct defrag4_pernet {
+-	unsigned int users;
+-};
+-
+ static int nf_ct_ipv4_gather_frags(struct net *net, struct sk_buff *skb,
+ 				   u_int32_t user)
+ {
+@@ -111,19 +106,15 @@ static const struct nf_hook_ops ipv4_defrag_ops[] = {
+ 
+ static void __net_exit defrag4_net_exit(struct net *net)
+ {
+-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
+-
+-	if (nf_defrag->users) {
++	if (net->nf.defrag_ipv4_users) {
+ 		nf_unregister_net_hooks(net, ipv4_defrag_ops,
+ 					ARRAY_SIZE(ipv4_defrag_ops));
+-		nf_defrag->users = 0;
++		net->nf.defrag_ipv4_users = 0;
+ 	}
+ }
+ 
+ static struct pernet_operations defrag4_net_ops = {
+ 	.exit = defrag4_net_exit,
+-	.id   = &defrag4_pernet_id,
+-	.size = sizeof(struct defrag4_pernet),
+ };
+ 
+ static int __init nf_defrag_init(void)
+@@ -138,24 +129,23 @@ static void __exit nf_defrag_fini(void)
+ 
+ int nf_defrag_ipv4_enable(struct net *net)
+ {
+-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
+ 	int err = 0;
+ 
+ 	mutex_lock(&defrag4_mutex);
+-	if (nf_defrag->users == UINT_MAX) {
++	if (net->nf.defrag_ipv4_users == UINT_MAX) {
+ 		err = -EOVERFLOW;
+ 		goto out_unlock;
+ 	}
+ 
+-	if (nf_defrag->users) {
+-		nf_defrag->users++;
++	if (net->nf.defrag_ipv4_users) {
++		net->nf.defrag_ipv4_users++;
+ 		goto out_unlock;
+ 	}
+ 
+ 	err = nf_register_net_hooks(net, ipv4_defrag_ops,
+ 				    ARRAY_SIZE(ipv4_defrag_ops));
+ 	if (err == 0)
+-		nf_defrag->users = 1;
++		net->nf.defrag_ipv4_users = 1;
+ 
+  out_unlock:
+ 	mutex_unlock(&defrag4_mutex);
+@@ -165,12 +155,10 @@ EXPORT_SYMBOL_GPL(nf_defrag_ipv4_enable);
+ 
+ void nf_defrag_ipv4_disable(struct net *net)
+ {
+-	struct defrag4_pernet *nf_defrag = net_generic(net, defrag4_pernet_id);
+-
+ 	mutex_lock(&defrag4_mutex);
+-	if (nf_defrag->users) {
+-		nf_defrag->users--;
+-		if (nf_defrag->users == 0)
++	if (net->nf.defrag_ipv4_users) {
++		net->nf.defrag_ipv4_users--;
++		if (net->nf.defrag_ipv4_users == 0)
+ 			nf_unregister_net_hooks(net, ipv4_defrag_ops,
+ 						ARRAY_SIZE(ipv4_defrag_ops));
+ 	}
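+ 
+ /* A userspace sketch (not upstream code) of the enable/disable pattern
+  * above: a mutex-guarded use counter that registers a resource on the
+  * 0 -> 1 transition, refuses to wrap at UINT_MAX, and unregisters on
+  * the 1 -> 0 transition. register_hooks()/unregister_hooks() are
+  * placeholders standing in for nf_register_net_hooks() and
+  * nf_unregister_net_hooks().
+  */
+ #include <limits.h>
+ #include <pthread.h>
+ 
+ static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+ static unsigned int users;
+ 
+ static int register_hooks(void) { return 0; }  /* placeholder */
+ static void unregister_hooks(void) { }         /* placeholder */
+ 
+ int defrag_enable(void)
+ {
+ 	int err = 0;
+ 
+ 	pthread_mutex_lock(&lock);
+ 	if (users == UINT_MAX)
+ 		err = -1;                /* would overflow, like -EOVERFLOW */
+ 	else if (users)
+ 		users++;                 /* hooks already registered, just count */
+ 	else if ((err = register_hooks()) == 0)
+ 		users = 1;               /* first user registers the hooks */
+ 	pthread_mutex_unlock(&lock);
+ 	return err;
+ }
+ 
+ void defrag_disable(void)
+ {
+ 	pthread_mutex_lock(&lock);
+ 	if (users && --users == 0)
+ 		unregister_hooks();      /* last user tears down the hooks */
+ 	pthread_mutex_unlock(&lock);
+ }
+ 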
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 915ea635b2d5a..cbc7907f79b84 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -390,7 +390,8 @@ static int compute_score(struct sock *sk, struct net *net,
+ 					dif, sdif);
+ 	if (!dev_match)
+ 		return -1;
+-	score += 4;
++	if (sk->sk_bound_dev_if)
++		score += 4;
+ 
+ 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 		score++;
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 55c290d556059..67c9114835c84 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -106,7 +106,7 @@ static inline int compute_score(struct sock *sk, struct net *net,
+ 		if (!inet_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif))
+ 			return -1;
+ 
+-		score = 1;
++		score =  sk->sk_bound_dev_if ? 2 : 1;
+ 		if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 			score++;
+ 	}
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index a0108415275fe..5c47be29b9ee9 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -33,7 +33,7 @@
+ 
+ static const char nf_frags_cache_name[] = "nf-frags";
+ 
+-unsigned int nf_frag_pernet_id __read_mostly;
++static unsigned int nf_frag_pernet_id __read_mostly;
+ static struct inet_frags nf_frags;
+ 
+ static struct nft_ct_frag6_pernet *nf_frag_pernet(struct net *net)
+diff --git a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
+index e8a59d8bf2adf..cb4eb1d2c620b 100644
+--- a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
++++ b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
+@@ -25,8 +25,6 @@
+ #include <net/netfilter/nf_conntrack_zones.h>
+ #include <net/netfilter/ipv6/nf_defrag_ipv6.h>
+ 
+-extern unsigned int nf_frag_pernet_id;
+-
+ static DEFINE_MUTEX(defrag6_mutex);
+ 
+ static enum ip6_defrag_users nf_ct6_defrag_user(unsigned int hooknum,
+@@ -91,12 +89,10 @@ static const struct nf_hook_ops ipv6_defrag_ops[] = {
+ 
+ static void __net_exit defrag6_net_exit(struct net *net)
+ {
+-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
+-
+-	if (nf_frag->users) {
++	if (net->nf.defrag_ipv6_users) {
+ 		nf_unregister_net_hooks(net, ipv6_defrag_ops,
+ 					ARRAY_SIZE(ipv6_defrag_ops));
+-		nf_frag->users = 0;
++		net->nf.defrag_ipv6_users = 0;
+ 	}
+ }
+ 
+@@ -134,24 +130,23 @@ static void __exit nf_defrag_fini(void)
+ 
+ int nf_defrag_ipv6_enable(struct net *net)
+ {
+-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
+ 	int err = 0;
+ 
+ 	mutex_lock(&defrag6_mutex);
+-	if (nf_frag->users == UINT_MAX) {
++	if (net->nf.defrag_ipv6_users == UINT_MAX) {
+ 		err = -EOVERFLOW;
+ 		goto out_unlock;
+ 	}
+ 
+-	if (nf_frag->users) {
+-		nf_frag->users++;
++	if (net->nf.defrag_ipv6_users) {
++		net->nf.defrag_ipv6_users++;
+ 		goto out_unlock;
+ 	}
+ 
+ 	err = nf_register_net_hooks(net, ipv6_defrag_ops,
+ 				    ARRAY_SIZE(ipv6_defrag_ops));
+ 	if (err == 0)
+-		nf_frag->users = 1;
++		net->nf.defrag_ipv6_users = 1;
+ 
+  out_unlock:
+ 	mutex_unlock(&defrag6_mutex);
+@@ -161,12 +156,10 @@ EXPORT_SYMBOL_GPL(nf_defrag_ipv6_enable);
+ 
+ void nf_defrag_ipv6_disable(struct net *net)
+ {
+-	struct nft_ct_frag6_pernet *nf_frag = net_generic(net, nf_frag_pernet_id);
+-
+ 	mutex_lock(&defrag6_mutex);
+-	if (nf_frag->users) {
+-		nf_frag->users--;
+-		if (nf_frag->users == 0)
++	if (net->nf.defrag_ipv6_users) {
++		net->nf.defrag_ipv6_users--;
++		if (net->nf.defrag_ipv6_users == 0)
+ 			nf_unregister_net_hooks(net, ipv6_defrag_ops,
+ 						ARRAY_SIZE(ipv6_defrag_ops));
+ 	}
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 80ae024d13c8c..ba77955d75fbd 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -133,7 +133,8 @@ static int compute_score(struct sock *sk, struct net *net,
+ 	dev_match = udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif);
+ 	if (!dev_match)
+ 		return -1;
+-	score++;
++	if (sk->sk_bound_dev_if)
++		score++;
+ 
+ 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+ 		score++;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index b9546defdc280..c0851fec11d46 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -780,6 +780,7 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct sk_buff *skb;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report &&
+@@ -790,8 +791,11 @@ static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
+ 	if (skb == NULL)
+ 		goto err;
+ 
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
+ 	err = nf_tables_fill_table_info(skb, ctx->net, ctx->portid, ctx->seq,
+-					event, 0, ctx->family, ctx->table);
++					event, flags, ctx->family, ctx->table);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -1563,6 +1567,7 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct sk_buff *skb;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report &&
+@@ -1573,8 +1578,11 @@ static void nf_tables_chain_notify(const struct nft_ctx *ctx, int event)
+ 	if (skb == NULL)
+ 		goto err;
+ 
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
+ 	err = nf_tables_fill_chain_info(skb, ctx->net, ctx->portid, ctx->seq,
+-					event, 0, ctx->family, ctx->table,
++					event, flags, ctx->family, ctx->table,
+ 					ctx->chain);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+@@ -2866,8 +2874,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 				    u32 flags, int family,
+ 				    const struct nft_table *table,
+ 				    const struct nft_chain *chain,
+-				    const struct nft_rule *rule,
+-				    const struct nft_rule *prule)
++				    const struct nft_rule *rule, u64 handle)
+ {
+ 	struct nlmsghdr *nlh;
+ 	const struct nft_expr *expr, *next;
+@@ -2887,9 +2894,8 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_RULE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event != NFT_MSG_DELRULE && prule) {
+-		if (nla_put_be64(skb, NFTA_RULE_POSITION,
+-				 cpu_to_be64(prule->handle),
++	if (event != NFT_MSG_DELRULE && handle) {
++		if (nla_put_be64(skb, NFTA_RULE_POSITION, cpu_to_be64(handle),
+ 				 NFTA_RULE_PAD))
+ 			goto nla_put_failure;
+ 	}
+@@ -2925,7 +2931,10 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 				  const struct nft_rule *rule, int event)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
++	const struct nft_rule *prule;
+ 	struct sk_buff *skb;
++	u64 handle = 0;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report &&
+@@ -2936,9 +2945,20 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 	if (skb == NULL)
+ 		goto err;
+ 
++	if (event == NFT_MSG_NEWRULE &&
++	    !list_is_first(&rule->list, &ctx->chain->rules) &&
++	    !list_is_last(&rule->list, &ctx->chain->rules)) {
++		prule = list_prev_entry(rule, list);
++		handle = prule->handle;
++	}
++	if (ctx->flags & (NLM_F_APPEND | NLM_F_REPLACE))
++		flags |= NLM_F_APPEND;
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
+ 	err = nf_tables_fill_rule_info(skb, ctx->net, ctx->portid, ctx->seq,
+-				       event, 0, ctx->family, ctx->table,
+-				       ctx->chain, rule, NULL);
++				       event, flags, ctx->family, ctx->table,
++				       ctx->chain, rule, handle);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -2964,6 +2984,7 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 	struct net *net = sock_net(skb->sk);
+ 	const struct nft_rule *rule, *prule;
+ 	unsigned int s_idx = cb->args[0];
++	u64 handle;
+ 
+ 	prule = NULL;
+ 	list_for_each_entry_rcu(rule, &chain->rules, list) {
+@@ -2975,12 +2996,17 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 			memset(&cb->args[1], 0,
+ 					sizeof(cb->args) - sizeof(cb->args[0]));
+ 		}
++		if (prule)
++			handle = prule->handle;
++		else
++			handle = 0;
++
+ 		if (nf_tables_fill_rule_info(skb, net, NETLINK_CB(cb->skb).portid,
+ 					cb->nlh->nlmsg_seq,
+ 					NFT_MSG_NEWRULE,
+ 					NLM_F_MULTI | NLM_F_APPEND,
+ 					table->family,
+-					table, chain, rule, prule) < 0)
++					table, chain, rule, handle) < 0)
+ 			return 1;
+ 
+ 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+@@ -3143,7 +3169,7 @@ static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 
+ 	err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
+ 				       info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
+-				       family, table, chain, rule, NULL);
++				       family, table, chain, rule, 0);
+ 	if (err < 0)
+ 		goto err_fill_rule_info;
+ 
+@@ -3403,17 +3429,15 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	}
+ 
+ 	if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
++		err = nft_delrule(&ctx, old_rule);
++		if (err < 0)
++			goto err_destroy_flow_rule;
++
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (trans == NULL) {
+ 			err = -ENOMEM;
+ 			goto err_destroy_flow_rule;
+ 		}
+-		err = nft_delrule(&ctx, old_rule);
+-		if (err < 0) {
+-			nft_trans_destroy(trans);
+-			goto err_destroy_flow_rule;
+-		}
+-
+ 		list_add_tail_rcu(&rule->list, &old_rule->list);
+ 	} else {
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+@@ -3943,8 +3967,9 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
+ 			         gfp_t gfp_flags)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
+-	struct sk_buff *skb;
+ 	u32 portid = ctx->portid;
++	struct sk_buff *skb;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report &&
+@@ -3955,7 +3980,10 @@ static void nf_tables_set_notify(const struct nft_ctx *ctx,
+ 	if (skb == NULL)
+ 		goto err;
+ 
+-	err = nf_tables_fill_set(skb, ctx, set, event, 0);
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
++	err = nf_tables_fill_set(skb, ctx, set, event, flags);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -5231,12 +5259,13 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
+ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
+ 				     const struct nft_set *set,
+ 				     const struct nft_set_elem *elem,
+-				     int event, u16 flags)
++				     int event)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct net *net = ctx->net;
+ 	u32 portid = ctx->portid;
+ 	struct sk_buff *skb;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report && !nfnetlink_has_listeners(net, NFNLGRP_NFTABLES))
+@@ -5246,6 +5275,9 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
+ 	if (skb == NULL)
+ 		goto err;
+ 
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
+ 	err = nf_tables_fill_setelem_info(skb, ctx, 0, portid, event, flags,
+ 					  set, elem);
+ 	if (err < 0) {
+@@ -6921,7 +6953,7 @@ static int nf_tables_delobj(struct sk_buff *skb, const struct nfnl_info *info,
+ 
+ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		    struct nft_object *obj, u32 portid, u32 seq, int event,
+-		    int family, int report, gfp_t gfp)
++		    u16 flags, int family, int report, gfp_t gfp)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct sk_buff *skb;
+@@ -6946,8 +6978,9 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 	if (skb == NULL)
+ 		goto err;
+ 
+-	err = nf_tables_fill_obj_info(skb, net, portid, seq, event, 0, family,
+-				      table, obj, false);
++	err = nf_tables_fill_obj_info(skb, net, portid, seq, event,
++				      flags & (NLM_F_CREATE | NLM_F_EXCL),
++				      family, table, obj, false);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -6964,7 +6997,7 @@ static void nf_tables_obj_notify(const struct nft_ctx *ctx,
+ 				 struct nft_object *obj, int event)
+ {
+ 	nft_obj_notify(ctx->net, ctx->table, obj, ctx->portid, ctx->seq, event,
+-		       ctx->family, ctx->report, GFP_KERNEL);
++		       ctx->flags, ctx->family, ctx->report, GFP_KERNEL);
+ }
+ 
+ /*
+@@ -7745,6 +7778,7 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(ctx->net);
+ 	struct sk_buff *skb;
++	u16 flags = 0;
+ 	int err;
+ 
+ 	if (!ctx->report &&
+@@ -7755,8 +7789,11 @@ static void nf_tables_flowtable_notify(struct nft_ctx *ctx,
+ 	if (skb == NULL)
+ 		goto err;
+ 
++	if (ctx->flags & (NLM_F_CREATE | NLM_F_EXCL))
++		flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
++
+ 	err = nf_tables_fill_flowtable_info(skb, ctx->net, ctx->portid,
+-					    ctx->seq, event, 0,
++					    ctx->seq, event, flags,
+ 					    ctx->family, flowtable, hook_list);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+@@ -8634,7 +8671,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			nft_setelem_activate(net, te->set, &te->elem);
+ 			nf_tables_setelem_notify(&trans->ctx, te->set,
+ 						 &te->elem,
+-						 NFT_MSG_NEWSETELEM, 0);
++						 NFT_MSG_NEWSETELEM);
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_DELSETELEM:
+@@ -8642,7 +8679,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ 			nf_tables_setelem_notify(&trans->ctx, te->set,
+ 						 &te->elem,
+-						 NFT_MSG_DELSETELEM, 0);
++						 NFT_MSG_DELSETELEM);
+ 			nft_setelem_remove(net, te->set, &te->elem);
+ 			if (!nft_setelem_is_catchall(te->set, &te->elem)) {
+ 				atomic_dec(&te->set->nelems);
+diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c
+index 0363f533a42b8..c4d1389f7185a 100644
+--- a/net/netfilter/nft_quota.c
++++ b/net/netfilter/nft_quota.c
+@@ -60,7 +60,7 @@ static void nft_quota_obj_eval(struct nft_object *obj,
+ 	if (overquota &&
+ 	    !test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
+ 		nft_obj_notify(nft_net(pkt), obj->key.table, obj, 0, 0,
+-			       NFT_MSG_NEWOBJ, nft_pf(pkt), 0, GFP_ATOMIC);
++			       NFT_MSG_NEWOBJ, 0, nft_pf(pkt), 0, GFP_ATOMIC);
+ }
+ 
+ static int nft_quota_do_init(const struct nlattr * const tb[],
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 24b7cf447bc55..ada47e59647a0 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -594,7 +594,10 @@ static int netlink_insert(struct sock *sk, u32 portid)
+ 
+ 	/* We need to ensure that the socket is hashed and visible. */
+ 	smp_wmb();
+-	nlk_sk(sk)->bound = portid;
++	/* Paired with lockless reads from netlink_bind(),
++	 * netlink_connect() and netlink_sendmsg().
++	 */
++	WRITE_ONCE(nlk_sk(sk)->bound, portid);
+ 
+ err:
+ 	release_sock(sk);
+@@ -1012,7 +1015,8 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ 	if (nlk->ngroups < BITS_PER_LONG)
+ 		groups &= (1UL << nlk->ngroups) - 1;
+ 
+-	bound = nlk->bound;
++	/* Paired with WRITE_ONCE() in netlink_insert() */
++	bound = READ_ONCE(nlk->bound);
+ 	if (bound) {
+ 		/* Ensure nlk->portid is up-to-date. */
+ 		smp_rmb();
+@@ -1098,8 +1102,9 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
+ 
+ 	/* No need for barriers here as we return to user-space without
+ 	 * using any of the bound attributes.
++	 * Paired with WRITE_ONCE() in netlink_insert().
+ 	 */
+-	if (!nlk->bound)
++	if (!READ_ONCE(nlk->bound))
+ 		err = netlink_autobind(sock);
+ 
+ 	if (err == 0) {
+@@ -1888,7 +1893,8 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		dst_group = nlk->dst_group;
+ 	}
+ 
+-	if (!nlk->bound) {
++	/* Paired with WRITE_ONCE() in netlink_insert() */
++	if (!READ_ONCE(nlk->bound)) {
+ 		err = netlink_autobind(sock);
+ 		if (err)
+ 			goto out;
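+ 
+ /* A C11 userspace sketch (not upstream code) of the publish/observe
+  * pattern applied in the netlink hunks above: the writer stores the
+  * payload, then publishes the flag with release semantics (the
+  * smp_wmb() + WRITE_ONCE() pair); lockless readers observe the flag
+  * with acquire semantics before touching the payload (the READ_ONCE()
+  * + smp_rmb() pair). C11 atomics stand in for the kernel primitives.
+  */
+ #include <stdatomic.h>
+ #include <stdint.h>
+ 
+ static uint32_t portid;    /* payload published by the writer */
+ static atomic_bool bound;  /* the "nlk->bound" flag            */
+ 
+ void publish(uint32_t id)
+ {
+ 	portid = id;       /* plain store of the payload */
+ 	atomic_store_explicit(&bound, 1, memory_order_release);
+ }
+ 
+ int read_portid(uint32_t *out)
+ {
+ 	if (!atomic_load_explicit(&bound, memory_order_acquire))
+ 		return 0;  /* not bound yet */
+ 	*out = portid;     /* ordered after the flag read */
+ 	return 1;
+ }
+ 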
+diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
+index a579a4131d22d..e1040421b7979 100644
+--- a/net/sched/sch_fifo.c
++++ b/net/sched/sch_fifo.c
+@@ -233,6 +233,9 @@ int fifo_set_limit(struct Qdisc *q, unsigned int limit)
+ 	if (strncmp(q->ops->id + 1, "fifo", 4) != 0)
+ 		return 0;
+ 
++	if (!q->ops->change)
++		return 0;
++
+ 	nla = kmalloc(nla_attr_size(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
+ 	if (nla) {
+ 		nla->nla_type = RTM_NEWQDISC;
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 1ab2fc933a214..b9fd18d986464 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1641,6 +1641,10 @@ static void taprio_destroy(struct Qdisc *sch)
+ 	list_del(&q->taprio_list);
+ 	spin_unlock(&taprio_list_lock);
+ 
++	/* Note that taprio_reset() might not be called if an error
++	 * happens in qdisc_create(), after taprio_init() has been called.
++	 */
++	hrtimer_cancel(&q->advance_timer);
+ 
+ 	taprio_disable_offload(dev, q, NULL);
+ 
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 3d685fe328fad..9fd35a60de6ce 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -643,7 +643,7 @@ static bool gss_check_seq_num(const struct svc_rqst *rqstp, struct rsc *rsci,
+ 		}
+ 		__set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win);
+ 		goto ok;
+-	} else if (seq_num <= sd->sd_max - GSS_SEQ_WIN) {
++	} else if (seq_num + GSS_SEQ_WIN <= sd->sd_max) {
+ 		goto toolow;
+ 	}
+ 	if (__test_and_set_bit(seq_num % GSS_SEQ_WIN, sd->sd_win))
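+ 
+ /* A standalone sketch (not upstream code) of the unsigned underflow the
+  * rewrite above avoids: "max - WIN" wraps to a huge value whenever
+  * max < WIN, so "seq <= max - WIN" can be spuriously true early in a
+  * sequence, while "seq + WIN <= max" has no such wrap for the small
+  * values involved. The window size is defined locally for illustration.
+  */
+ #include <stdio.h>
+ 
+ #define GSS_SEQ_WIN 128u
+ 
+ int main(void)
+ {
+ 	unsigned int sd_max = 5, seq_num = 3;
+ 
+ 	printf("old: %d\n", seq_num <= sd_max - GSS_SEQ_WIN); /* 1: wraps, wrongly "too low" */
+ 	printf("new: %d\n", seq_num + GSS_SEQ_WIN <= sd_max); /* 0: correct */
+ 	return 0;
+ }
+ 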
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index d27e017ebfbea..f1bc09e606cd1 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -8115,7 +8115,8 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
+ 
+ 	if (obj->gen_loader) {
+ 		/* reset FDs */
+-		btf__set_fd(obj->btf, -1);
++		if (obj->btf)
++			btf__set_fd(obj->btf, -1);
+ 		for (i = 0; i < obj->nr_maps; i++)
+ 			obj->maps[i].fd = -1;
+ 		if (!err)
+diff --git a/tools/lib/bpf/strset.c b/tools/lib/bpf/strset.c
+index 1fb8b49de1d62..ea655318153f2 100644
+--- a/tools/lib/bpf/strset.c
++++ b/tools/lib/bpf/strset.c
+@@ -88,6 +88,7 @@ void strset__free(struct strset *set)
+ 
+ 	hashmap__free(set->strs_hash);
+ 	free(set->strs_data);
++	free(set);
+ }
+ 
+ size_t strset__data_size(const struct strset *set)
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index bc821056aba90..0893436cc09f8 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -684,7 +684,7 @@ static int elf_add_alternative(struct elf *elf,
+ 	sec = find_section_by_name(elf, ".altinstructions");
+ 	if (!sec) {
+ 		sec = elf_create_section(elf, ".altinstructions",
+-					 SHF_ALLOC, size, 0);
++					 SHF_ALLOC, 0, 0);
+ 
+ 		if (!sec) {
+ 			WARN_ELF("elf_create_section");
+diff --git a/tools/objtool/special.c b/tools/objtool/special.c
+index f1428e32a5052..83d5f969bcb00 100644
+--- a/tools/objtool/special.c
++++ b/tools/objtool/special.c
+@@ -58,22 +58,11 @@ void __weak arch_handle_alternative(unsigned short feature, struct special_alt *
+ {
+ }
+ 
+-static bool reloc2sec_off(struct reloc *reloc, struct section **sec, unsigned long *off)
++static void reloc_to_sec_off(struct reloc *reloc, struct section **sec,
++			     unsigned long *off)
+ {
+-	switch (reloc->sym->type) {
+-	case STT_FUNC:
+-		*sec = reloc->sym->sec;
+-		*off = reloc->sym->offset + reloc->addend;
+-		return true;
+-
+-	case STT_SECTION:
+-		*sec = reloc->sym->sec;
+-		*off = reloc->addend;
+-		return true;
+-
+-	default:
+-		return false;
+-	}
++	*sec = reloc->sym->sec;
++	*off = reloc->sym->offset + reloc->addend;
+ }
+ 
+ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+@@ -109,13 +98,8 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 		WARN_FUNC("can't find orig reloc", sec, offset + entry->orig);
+ 		return -1;
+ 	}
+-	if (!reloc2sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off)) {
+-		WARN_FUNC("don't know how to handle reloc symbol type %d: %s",
+-			   sec, offset + entry->orig,
+-			   orig_reloc->sym->type,
+-			   orig_reloc->sym->name);
+-		return -1;
+-	}
++
++	reloc_to_sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off);
+ 
+ 	if (!entry->group || alt->new_len) {
+ 		new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new);
+@@ -133,13 +117,7 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry,
+ 		if (arch_is_retpoline(new_reloc->sym))
+ 			return 1;
+ 
+-		if (!reloc2sec_off(new_reloc, &alt->new_sec, &alt->new_off)) {
+-			WARN_FUNC("don't know how to handle reloc symbol type %d: %s",
+-				  sec, offset + entry->new,
+-				  new_reloc->sym->type,
+-				  new_reloc->sym->name);
+-			return -1;
+-		}
++		reloc_to_sec_off(new_reloc, &alt->new_sec, &alt->new_off);
+ 
+ 		/* _ASM_EXTABLE_EX hack */
+ 		if (alt->new_off >= 0x7ffffff0)
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index 9604446f8360b..8b536117e154f 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -1284,6 +1284,7 @@ int main(int argc, char *argv[])
+ 	}
+ 
+ 	free_arch_std_events();
++	free_sys_event_tables();
+ 	free(mapfile);
+ 	return 0;
+ 
+@@ -1305,6 +1306,7 @@ err_close_eventsfp:
+ 		create_empty_mapping(output_file);
+ err_out:
+ 	free_arch_std_events();
++	free_sys_event_tables();
+ 	free(mapfile);
+ 	return ret;
+ }


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-17 13:10 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-17 13:10 UTC (permalink / raw
  To: gentoo-commits

commit:     00a8c482d8731d5743bd512614cd63509309e907
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 17 13:09:21 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Oct 17 13:10:03 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=00a8c482

Linux patch 5.14.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-5.14.13.patch | 1320 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1324 insertions(+)

diff --git a/0000_README b/0000_README
index 4456b48..31ed9a4 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1011_linux-5.14.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.12
 
+Patch:  1012_linux-5.14.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.14.13.patch b/1012_linux-5.14.13.patch
new file mode 100644
index 0000000..c94e95e
--- /dev/null
+++ b/1012_linux-5.14.13.patch
@@ -0,0 +1,1320 @@
+diff --git a/Makefile b/Makefile
+index 02cde08f4978e..7bdca9dc0e61b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
+index 5df6193fc4304..8d741f71377f4 100644
+--- a/arch/arm64/kvm/hyp/nvhe/Makefile
++++ b/arch/arm64/kvm/hyp/nvhe/Makefile
+@@ -54,7 +54,7 @@ $(obj)/kvm_nvhe.tmp.o: $(obj)/hyp.lds $(addprefix $(obj)/,$(hyp-obj)) FORCE
+ #    runtime. Because the hypervisor is part of the kernel binary, relocations
+ #    produce a kernel VA. We enumerate relocations targeting hyp at build time
+ #    and convert the kernel VAs at those positions to hyp VAs.
+-$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel
++$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel FORCE
+ 	$(call if_changed,hyprel)
+ 
+ # 5) Compile hyp-reloc.S and link it into the existing partially linked object.
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index 8f215e79e70e6..cd11eb101eacd 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -447,7 +447,7 @@ static inline void save_fpu_state(struct sigcontext *sc, struct pt_regs *regs)
+ 
+ 	if (CPU_IS_060 ? sc->sc_fpstate[2] : sc->sc_fpstate[0]) {
+ 		fpu_version = sc->sc_fpstate[0];
+-		if (CPU_IS_020_OR_030 &&
++		if (CPU_IS_020_OR_030 && !regs->stkadj &&
+ 		    regs->vector >= (VEC_FPBRUC * 4) &&
+ 		    regs->vector <= (VEC_FPNAN * 4)) {
+ 			/* Clear pending exception in 68882 idle frame */
+@@ -510,7 +510,7 @@ static inline int rt_save_fpu_state(struct ucontext __user *uc, struct pt_regs *
+ 		if (!(CPU_IS_060 || CPU_IS_COLDFIRE))
+ 			context_size = fpstate[1];
+ 		fpu_version = fpstate[0];
+-		if (CPU_IS_020_OR_030 &&
++		if (CPU_IS_020_OR_030 && !regs->stkadj &&
+ 		    regs->vector >= (VEC_FPBRUC * 4) &&
+ 		    regs->vector <= (VEC_FPNAN * 4)) {
+ 			/* Clear pending exception in 68882 idle frame */
+@@ -832,18 +832,24 @@ badframe:
+ 	return 0;
+ }
+ 
++static inline struct pt_regs *rte_regs(struct pt_regs *regs)
++{
++	return (void *)regs + regs->stkadj;
++}
++
+ static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
+ 			     unsigned long mask)
+ {
++	struct pt_regs *tregs = rte_regs(regs);
+ 	sc->sc_mask = mask;
+ 	sc->sc_usp = rdusp();
+ 	sc->sc_d0 = regs->d0;
+ 	sc->sc_d1 = regs->d1;
+ 	sc->sc_a0 = regs->a0;
+ 	sc->sc_a1 = regs->a1;
+-	sc->sc_sr = regs->sr;
+-	sc->sc_pc = regs->pc;
+-	sc->sc_formatvec = regs->format << 12 | regs->vector;
++	sc->sc_sr = tregs->sr;
++	sc->sc_pc = tregs->pc;
++	sc->sc_formatvec = tregs->format << 12 | tregs->vector;
+ 	save_a5_state(sc, regs);
+ 	save_fpu_state(sc, regs);
+ }
+@@ -851,6 +857,7 @@ static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
+ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
+ {
+ 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
++	struct pt_regs *tregs = rte_regs(regs);
+ 	greg_t __user *gregs = uc->uc_mcontext.gregs;
+ 	int err = 0;
+ 
+@@ -871,9 +878,9 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
+ 	err |= __put_user(sw->a5, &gregs[13]);
+ 	err |= __put_user(sw->a6, &gregs[14]);
+ 	err |= __put_user(rdusp(), &gregs[15]);
+-	err |= __put_user(regs->pc, &gregs[16]);
+-	err |= __put_user(regs->sr, &gregs[17]);
+-	err |= __put_user((regs->format << 12) | regs->vector, &uc->uc_formatvec);
++	err |= __put_user(tregs->pc, &gregs[16]);
++	err |= __put_user(tregs->sr, &gregs[17]);
++	err |= __put_user((tregs->format << 12) | tregs->vector, &uc->uc_formatvec);
+ 	err |= rt_save_fpu_state(uc, regs);
+ 	return err;
+ }
+@@ -890,13 +897,14 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 			struct pt_regs *regs)
+ {
+ 	struct sigframe __user *frame;
+-	int fsize = frame_extra_sizes(regs->format);
++	struct pt_regs *tregs = rte_regs(regs);
++	int fsize = frame_extra_sizes(tregs->format);
+ 	struct sigcontext context;
+ 	int err = 0, sig = ksig->sig;
+ 
+ 	if (fsize < 0) {
+ 		pr_debug("setup_frame: Unknown frame format %#x\n",
+-			 regs->format);
++			 tregs->format);
+ 		return -EFAULT;
+ 	}
+ 
+@@ -907,7 +915,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	err |= __put_user(sig, &frame->sig);
+ 
+-	err |= __put_user(regs->vector, &frame->code);
++	err |= __put_user(tregs->vector, &frame->code);
+ 	err |= __put_user(&frame->sc, &frame->psc);
+ 
+ 	if (_NSIG_WORDS > 1)
+@@ -933,34 +941,28 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	push_cache ((unsigned long) &frame->retcode);
+ 
+-	/*
+-	 * Set up registers for signal handler.  All the state we are about
+-	 * to destroy is successfully copied to sigframe.
+-	 */
+-	wrusp ((unsigned long) frame);
+-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+-	adjustformat(regs);
+-
+ 	/*
+ 	 * This is subtle; if we build more than one sigframe, all but the
+ 	 * first one will see frame format 0 and have fsize == 0, so we won't
+ 	 * screw stkadj.
+ 	 */
+-	if (fsize)
++	if (fsize) {
+ 		regs->stkadj = fsize;
+-
+-	/* Prepare to skip over the extra stuff in the exception frame.  */
+-	if (regs->stkadj) {
+-		struct pt_regs *tregs =
+-			(struct pt_regs *)((ulong)regs + regs->stkadj);
++		tregs = rte_regs(regs);
+ 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
+-		/* This must be copied with decreasing addresses to
+-                   handle overlaps.  */
+ 		tregs->vector = 0;
+ 		tregs->format = 0;
+-		tregs->pc = regs->pc;
+ 		tregs->sr = regs->sr;
+ 	}
++
++	/*
++	 * Set up registers for signal handler.  All the state we are about
++	 * to destroy is successfully copied to sigframe.
++	 */
++	wrusp ((unsigned long) frame);
++	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
++	adjustformat(regs);
++
+ 	return 0;
+ }
+ 
+@@ -968,7 +970,8 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 			   struct pt_regs *regs)
+ {
+ 	struct rt_sigframe __user *frame;
+-	int fsize = frame_extra_sizes(regs->format);
++	struct pt_regs *tregs = rte_regs(regs);
++	int fsize = frame_extra_sizes(tregs->format);
+ 	int err = 0, sig = ksig->sig;
+ 
+ 	if (fsize < 0) {
+@@ -1018,34 +1021,27 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ 
+ 	push_cache ((unsigned long) &frame->retcode);
+ 
+-	/*
+-	 * Set up registers for signal handler.  All the state we are about
+-	 * to destroy is successfully copied to sigframe.
+-	 */
+-	wrusp ((unsigned long) frame);
+-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+-	adjustformat(regs);
+-
+ 	/*
+ 	 * This is subtle; if we build more than one sigframe, all but the
+ 	 * first one will see frame format 0 and have fsize == 0, so we won't
+ 	 * screw stkadj.
+ 	 */
+-	if (fsize)
++	if (fsize) {
+ 		regs->stkadj = fsize;
+-
+-	/* Prepare to skip over the extra stuff in the exception frame.  */
+-	if (regs->stkadj) {
+-		struct pt_regs *tregs =
+-			(struct pt_regs *)((ulong)regs + regs->stkadj);
++		tregs = rte_regs(regs);
+ 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
+-		/* This must be copied with decreasing addresses to
+-                   handle overlaps.  */
+ 		tregs->vector = 0;
+ 		tregs->format = 0;
+-		tregs->pc = regs->pc;
+ 		tregs->sr = regs->sr;
+ 	}
++
++	/*
++	 * Set up registers for signal handler.  All the state we are about
++	 * to destroy is successfully copied to sigframe.
++	 */
++	wrusp ((unsigned long) frame);
++	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
++	adjustformat(regs);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 4523df2785d63..5b6317bf97511 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -1094,6 +1094,8 @@ static int gmc_v10_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	gmc_v10_0_gart_disable(adev);
++
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		/* full access mode, so don't touch any GMC register */
+ 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
+@@ -1102,7 +1104,6 @@ static int gmc_v10_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+-	gmc_v10_0_gart_disable(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 7eb70d69f7605..f3cd2b3fb4cc0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1764,6 +1764,8 @@ static int gmc_v9_0_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	gmc_v9_0_gart_disable(adev);
++
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		/* full access mode, so don't touch any GMC register */
+ 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
+@@ -1772,7 +1774,6 @@ static int gmc_v9_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+-	gmc_v9_0_gart_disable(adev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index dc6bd4299c546..87edcd4ce07c2 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -322,12 +322,19 @@ static int apple_event(struct hid_device *hdev, struct hid_field *field,
+ 
+ /*
+  * MacBook JIS keyboard has wrong logical maximum
++ * Magic Keyboard JIS has wrong logical maximum
+  */
+ static __u8 *apple_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		unsigned int *rsize)
+ {
+ 	struct apple_sc *asc = hid_get_drvdata(hdev);
+ 
++	if(*rsize >=71 && rdesc[70] == 0x65 && rdesc[64] == 0x65) {
++		hid_info(hdev,
++			 "fixing up Magic Keyboard JIS report descriptor\n");
++		rdesc[64] = rdesc[70] = 0xe7;
++	}
++
+ 	if ((asc->quirks & APPLE_RDESC_JIS) && *rsize >= 60 &&
+ 			rdesc[53] == 0x65 && rdesc[59] == 0x65) {
+ 		hid_info(hdev,
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 81ba642adcb74..528d94ccd76fe 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4720,6 +4720,12 @@ static const struct wacom_features wacom_features_0x393 =
+ 	{ "Wacom Intuos Pro S", 31920, 19950, 8191, 63,
+ 	  INTUOSP2S_BT, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 7,
+ 	  .touch_max = 10 };
++static const struct wacom_features wacom_features_0x3c6 =
++	{ "Wacom Intuos BT S", 15200, 9500, 4095, 63,
++	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
++static const struct wacom_features wacom_features_0x3c8 =
++	{ "Wacom Intuos BT M", 21600, 13500, 4095, 63,
++	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
+ 
+ static const struct wacom_features wacom_features_HID_ANY_ID =
+ 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
+@@ -4893,6 +4899,8 @@ const struct hid_device_id wacom_ids[] = {
+ 	{ USB_DEVICE_WACOM(0x37A) },
+ 	{ USB_DEVICE_WACOM(0x37B) },
+ 	{ BT_DEVICE_WACOM(0x393) },
++	{ BT_DEVICE_WACOM(0x3c6) },
++	{ BT_DEVICE_WACOM(0x3c8) },
+ 	{ USB_DEVICE_WACOM(0x4001) },
+ 	{ USB_DEVICE_WACOM(0x4004) },
+ 	{ USB_DEVICE_WACOM(0x5000) },
+diff --git a/drivers/hwmon/ltc2947-core.c b/drivers/hwmon/ltc2947-core.c
+index bb3f7749a0b00..5423466de697a 100644
+--- a/drivers/hwmon/ltc2947-core.c
++++ b/drivers/hwmon/ltc2947-core.c
+@@ -989,8 +989,12 @@ static int ltc2947_setup(struct ltc2947_data *st)
+ 		return ret;
+ 
+ 	/* check external clock presence */
+-	extclk = devm_clk_get(st->dev, NULL);
+-	if (!IS_ERR(extclk)) {
++	extclk = devm_clk_get_optional(st->dev, NULL);
++	if (IS_ERR(extclk))
++		return dev_err_probe(st->dev, PTR_ERR(extclk),
++				     "Failed to get external clock\n");
++
++	if (extclk) {
+ 		unsigned long rate_hz;
+ 		u8 pre = 0, div, tbctl;
+ 		u64 aux;
+diff --git a/drivers/hwmon/pmbus/ibm-cffps.c b/drivers/hwmon/pmbus/ibm-cffps.c
+index df712ce4b164d..53f7d1418bc90 100644
+--- a/drivers/hwmon/pmbus/ibm-cffps.c
++++ b/drivers/hwmon/pmbus/ibm-cffps.c
+@@ -171,8 +171,14 @@ static ssize_t ibm_cffps_debugfs_read(struct file *file, char __user *buf,
+ 		cmd = CFFPS_SN_CMD;
+ 		break;
+ 	case CFFPS_DEBUGFS_MAX_POWER_OUT:
+-		rc = i2c_smbus_read_word_swapped(psu->client,
+-						 CFFPS_MAX_POWER_OUT_CMD);
++		if (psu->version == cffps1) {
++			rc = i2c_smbus_read_word_swapped(psu->client,
++					CFFPS_MAX_POWER_OUT_CMD);
++		} else {
++			rc = i2c_smbus_read_word_data(psu->client,
++					CFFPS_MAX_POWER_OUT_CMD);
++		}
++
+ 		if (rc < 0)
+ 			return rc;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
+index 4ab5bf64d353e..df8ff839cc621 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-platform.c
++++ b/drivers/net/ethernet/broadcom/bgmac-platform.c
+@@ -192,6 +192,9 @@ static int bgmac_probe(struct platform_device *pdev)
+ 	bgmac->dma_dev = &pdev->dev;
+ 
+ 	ret = of_get_mac_address(np, bgmac->net_dev->dev_addr);
++	if (ret == -EPROBE_DEFER)
++		return ret;
++
+ 	if (ret)
+ 		dev_warn(&pdev->dev,
+ 			 "MAC address not present in device tree\n");
+diff --git a/drivers/net/ethernet/sun/Kconfig b/drivers/net/ethernet/sun/Kconfig
+index 309de38a75304..b0d3f9a2950c0 100644
+--- a/drivers/net/ethernet/sun/Kconfig
++++ b/drivers/net/ethernet/sun/Kconfig
+@@ -73,6 +73,7 @@ config CASSINI
+ config SUNVNET_COMMON
+ 	tristate "Common routines to support Sun Virtual Networking"
+ 	depends on SUN_LDOMS
++	depends on INET
+ 	default m
+ 
+ config SUNVNET
+diff --git a/drivers/pinctrl/qcom/pinctrl-sc7280.c b/drivers/pinctrl/qcom/pinctrl-sc7280.c
+index afddf6d60dbe6..9017ede409c9c 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sc7280.c
++++ b/drivers/pinctrl/qcom/pinctrl-sc7280.c
+@@ -1496,6 +1496,7 @@ static const struct of_device_id sc7280_pinctrl_of_match[] = {
+ static struct platform_driver sc7280_pinctrl_driver = {
+ 	.driver = {
+ 		.name = "sc7280-pinctrl",
++		.pm = &msm_pinctrl_dev_pm_ops,
+ 		.of_match_table = sc7280_pinctrl_of_match,
+ 	},
+ 	.probe = sc7280_pinctrl_probe,
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 2aa8f519aae62..5f1092195d1f6 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2399,7 +2399,7 @@ static void qla24xx_nvme_iocb_entry(scsi_qla_host_t *vha, struct req_que *req,
+ 	}
+ 
+ 	if (unlikely(logit))
+-		ql_log(ql_log_warn, fcport->vha, 0x5060,
++		ql_log(ql_dbg_io, fcport->vha, 0x5060,
+ 		   "NVME-%s ERR Handling - hdl=%x status(%x) tr_len:%x resid=%x  ox_id=%x\n",
+ 		   sp->name, sp->handle, comp_status,
+ 		   fd->transferred_length, le32_to_cpu(sts->residual_len),
+@@ -3246,7 +3246,7 @@ check_scsi_status:
+ 
+ out:
+ 	if (logit)
+-		ql_log(ql_log_warn, fcport->vha, 0x3022,
++		ql_log(ql_dbg_io, fcport->vha, 0x3022,
+ 		       "FCP command status: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu portid=%02x%02x%02x oxid=0x%x cdb=%10phN len=0x%x rsp_info=0x%x resid=0x%x fw_resid=0x%x sp=%p cp=%p.\n",
+ 		       comp_status, scsi_status, res, vha->host_no,
+ 		       cp->device->id, cp->device->lun, fcport->d_id.b.domain,
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 43e682297fd5f..0a1734f34587d 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -118,7 +118,7 @@ static int ses_recv_diag(struct scsi_device *sdev, int page_code,
+ static int ses_send_diag(struct scsi_device *sdev, int page_code,
+ 			 void *buf, int bufflen)
+ {
+-	u32 result;
++	int result;
+ 
+ 	unsigned char cmd[] = {
+ 		SEND_DIAGNOSTIC,
+diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
+index b0deaf4af5a37..13f55f41a902d 100644
+--- a/drivers/scsi/virtio_scsi.c
++++ b/drivers/scsi/virtio_scsi.c
+@@ -300,7 +300,7 @@ static void virtscsi_handle_transport_reset(struct virtio_scsi *vscsi,
+ 		}
+ 		break;
+ 	default:
+-		pr_info("Unsupport virtio scsi event reason %x\n", event->reason);
++		pr_info("Unsupported virtio scsi event reason %x\n", event->reason);
+ 	}
+ }
+ 
+@@ -392,7 +392,7 @@ static void virtscsi_handle_event(struct work_struct *work)
+ 		virtscsi_handle_param_change(vscsi, event);
+ 		break;
+ 	default:
+-		pr_err("Unsupport virtio scsi event %x\n", event->event);
++		pr_err("Unsupported virtio scsi event %x\n", event->event);
+ 	}
+ 	virtscsi_kick_event(vscsi, event_node);
+ }
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 24e994e75f5ca..8049448476a65 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -733,18 +733,13 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	void *kaddr;
+ 	struct ext4_iloc iloc;
+ 
+-	if (unlikely(copied < len)) {
+-		if (!PageUptodate(page)) {
+-			copied = 0;
+-			goto out;
+-		}
+-	}
++	if (unlikely(copied < len) && !PageUptodate(page))
++		return 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret) {
+ 		ext4_std_error(inode->i_sb, ret);
+-		copied = 0;
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	ext4_write_lock_xattr(inode, &no_expand);
+@@ -757,7 +752,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	(void) ext4_find_inline_data_nolock(inode);
+ 
+ 	kaddr = kmap_atomic(page);
+-	ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
++	ext4_write_inline_data(inode, &iloc, kaddr, pos, copied);
+ 	kunmap_atomic(kaddr);
+ 	SetPageUptodate(page);
+ 	/* clear page dirty so that writepages wouldn't work for us. */
+@@ -766,7 +761,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	ext4_write_unlock_xattr(inode, &no_expand);
+ 	brelse(iloc.bh);
+ 	mark_inode_dirty(inode);
+-out:
++
+ 	return copied;
+ }
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 73daf9443e5e0..fc6ea56de77c2 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1295,6 +1295,7 @@ static int ext4_write_end(struct file *file,
+ 			goto errout;
+ 		}
+ 		copied = ret;
++		ret = 0;
+ 	} else
+ 		copied = block_write_end(file, mapping, pos,
+ 					 len, copied, page, fsdata);
+@@ -1321,13 +1322,14 @@ static int ext4_write_end(struct file *file,
+ 	if (i_size_changed || inline_data)
+ 		ret = ext4_mark_inode_dirty(handle, inode);
+ 
++errout:
+ 	if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ 		/* if we have allocated more blocks and copied
+ 		 * less. We will have blocks allocated outside
+ 		 * inode->i_size. So truncate them
+ 		 */
+ 		ext4_orphan_add(handle, inode);
+-errout:
++
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = ret2;
+@@ -1410,6 +1412,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 			goto errout;
+ 		}
+ 		copied = ret;
++		ret = 0;
+ 	} else if (unlikely(copied < len) && !PageUptodate(page)) {
+ 		copied = 0;
+ 		ext4_journalled_zero_new_buffers(handle, page, from, to);
+@@ -1439,6 +1442,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 			ret = ret2;
+ 	}
+ 
++errout:
+ 	if (pos + len > inode->i_size && !verity && ext4_can_truncate(inode))
+ 		/* if we have allocated more blocks and copied
+ 		 * less. We will have blocks allocated outside
+@@ -1446,7 +1450,6 @@ static int ext4_journalled_write_end(struct file *file,
+ 		 */
+ 		ext4_orphan_add(handle, inode);
+ 
+-errout:
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (!ret)
+ 		ret = ret2;
+@@ -3089,35 +3092,37 @@ static int ext4_da_write_end(struct file *file,
+ 	end = start + copied - 1;
+ 
+ 	/*
+-	 * generic_write_end() will run mark_inode_dirty() if i_size
+-	 * changes.  So let's piggyback the i_disksize mark_inode_dirty
+-	 * into that.
++	 * Since we are holding inode lock, we are sure i_disksize <=
++	 * i_size. We also know that if i_disksize < i_size, there are
++	 * delalloc writes pending in the range upto i_size. If the end of
++	 * the current write is <= i_size, there's no need to touch
++	 * i_disksize since writeback will push i_disksize upto i_size
++	 * eventually. If the end of the current write is > i_size and
++	 * inside an allocated block (ext4_da_should_update_i_disksize()
++	 * check), we need to update i_disksize here as neither
++	 * ext4_writepage() nor certain ext4_writepages() paths not
++	 * allocating blocks update i_disksize.
++	 *
++	 * Note that we defer inode dirtying to generic_write_end() /
++	 * ext4_da_write_inline_data_end().
+ 	 */
+ 	new_i_size = pos + copied;
+-	if (copied && new_i_size > EXT4_I(inode)->i_disksize) {
++	if (copied && new_i_size > inode->i_size) {
+ 		if (ext4_has_inline_data(inode) ||
+-		    ext4_da_should_update_i_disksize(page, end)) {
++		    ext4_da_should_update_i_disksize(page, end))
+ 			ext4_update_i_disksize(inode, new_i_size);
+-			/* We need to mark inode dirty even if
+-			 * new_i_size is less that inode->i_size
+-			 * bu greater than i_disksize.(hint delalloc)
+-			 */
+-			ret = ext4_mark_inode_dirty(handle, inode);
+-		}
+ 	}
+ 
+ 	if (write_mode != CONVERT_INLINE_DATA &&
+ 	    ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
+ 	    ext4_has_inline_data(inode))
+-		ret2 = ext4_da_write_inline_data_end(inode, pos, len, copied,
++		ret = ext4_da_write_inline_data_end(inode, pos, len, copied,
+ 						     page);
+ 	else
+-		ret2 = generic_write_end(file, mapping, pos, len, copied,
++		ret = generic_write_end(file, mapping, pos, len, copied,
+ 							page, fsdata);
+ 
+-	copied = ret2;
+-	if (ret2 < 0)
+-		ret = ret2;
++	copied = ret;
+ 	ret2 = ext4_journal_stop(handle);
+ 	if (unlikely(ret2 && !ret))
+ 		ret = ret2;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 675216f7022da..2f79586c1a7c8 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -419,7 +419,6 @@ struct io_ring_ctx {
+ 		struct wait_queue_head	cq_wait;
+ 		unsigned		cq_extra;
+ 		atomic_t		cq_timeouts;
+-		struct fasync_struct	*cq_fasync;
+ 		unsigned		cq_last_tm_flush;
+ 	} ____cacheline_aligned_in_smp;
+ 
+@@ -1448,10 +1447,8 @@ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
+ 		wake_up(&ctx->sq_data->wait);
+ 	if (io_should_trigger_evfd(ctx))
+ 		eventfd_signal(ctx->cq_ev_fd, 1);
+-	if (waitqueue_active(&ctx->poll_wait)) {
++	if (waitqueue_active(&ctx->poll_wait))
+ 		wake_up_interruptible(&ctx->poll_wait);
+-		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
+-	}
+ }
+ 
+ static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
+@@ -1465,10 +1462,8 @@ static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
+ 	}
+ 	if (io_should_trigger_evfd(ctx))
+ 		eventfd_signal(ctx->cq_ev_fd, 1);
+-	if (waitqueue_active(&ctx->poll_wait)) {
++	if (waitqueue_active(&ctx->poll_wait))
+ 		wake_up_interruptible(&ctx->poll_wait);
+-		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
+-	}
+ }
+ 
+ /* Returns true if there are no backlogged entries after the flush */
+@@ -8779,13 +8774,6 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+ 	return mask;
+ }
+ 
+-static int io_uring_fasync(int fd, struct file *file, int on)
+-{
+-	struct io_ring_ctx *ctx = file->private_data;
+-
+-	return fasync_helper(fd, file, on, &ctx->cq_fasync);
+-}
+-
+ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
+ {
+ 	const struct cred *creds;
+@@ -9571,7 +9559,6 @@ static const struct file_operations io_uring_fops = {
+ 	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
+ #endif
+ 	.poll		= io_uring_poll,
+-	.fasync		= io_uring_fasync,
+ #ifdef CONFIG_PROC_FS
+ 	.show_fdinfo	= io_uring_show_fdinfo,
+ #endif
+diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c
+index 4f5e59f062846..37dd3fe5b1e98 100644
+--- a/fs/vboxsf/super.c
++++ b/fs/vboxsf/super.c
+@@ -21,10 +21,7 @@
+ 
+ #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */
+ 
+-#define VBSF_MOUNT_SIGNATURE_BYTE_0 ('\000')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_1 ('\377')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_2 ('\376')
+-#define VBSF_MOUNT_SIGNATURE_BYTE_3 ('\375')
++static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375";
+ 
+ static int follow_symlinks;
+ module_param(follow_symlinks, int, 0444);
+@@ -386,12 +383,7 @@ fail_nomem:
+ 
+ static int vboxsf_parse_monolithic(struct fs_context *fc, void *data)
+ {
+-	unsigned char *options = data;
+-
+-	if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 &&
+-		       options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
+-		       options[2] == VBSF_MOUNT_SIGNATURE_BYTE_2 &&
+-		       options[3] == VBSF_MOUNT_SIGNATURE_BYTE_3) {
++	if (data && !memcmp(data, VBSF_MOUNT_SIGNATURE, 4)) {
+ 		vbg_err("vboxsf: Old binary mount data not supported, remove obsolete mount.vboxsf and/or update your VBoxService.\n");
+ 		return -EINVAL;
+ 	}
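+ 
+ /* A standalone sketch (not upstream code) of the simplification above:
+  * a fixed magic prefix is checked with a single memcmp() against a
+  * static byte array instead of four separate byte comparisons. The
+  * 4-byte array initialized from a 4-character string literal drops the
+  * trailing NUL, which is valid C.
+  */
+ #include <stdio.h>
+ #include <string.h>
+ 
+ static const unsigned char MAGIC[4] = "\000\377\376\375";
+ 
+ int has_old_signature(const void *data)
+ {
+ 	return data && !memcmp(data, MAGIC, 4);
+ }
+ 
+ int main(void)
+ {
+ 	unsigned char buf[4] = { 0x00, 0xff, 0xfe, 0xfd };
+ 
+ 	printf("%d\n", has_old_signature(buf));   /* 1: signature matches */
+ 	printf("%d\n", has_old_signature("opt")); /* 0: ordinary options  */
+ 	return 0;
+ }
+ 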
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 2d510ad750edc..4aa52f7a48c16 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -683,7 +683,9 @@ struct perf_event {
+ 	/*
+ 	 * timestamp shadows the actual context timing but it can
+ 	 * be safely used in NMI interrupt context. It reflects the
+-	 * context time as it was when the event was last scheduled in.
++	 * context time as it was when the event was last scheduled in,
++	 * or when ctx_sched_in failed to schedule the event because we
++	 * ran out of PMCs.
+ 	 *
+ 	 * ctx_time already accounts for ctx->timestamp. Therefore to
+ 	 * compute ctx_time for a sample, simply add perf_clock().
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f6935787e7e8b..8e10c7accdbcc 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1633,7 +1633,7 @@ extern struct pid *cad_pid;
+ #define tsk_used_math(p)			((p)->flags & PF_USED_MATH)
+ #define used_math()				tsk_used_math(current)
+ 
+-static inline bool is_percpu_thread(void)
++static __always_inline bool is_percpu_thread(void)
+ {
+ #ifdef CONFIG_SMP
+ 	return (current->flags & PF_NO_SETAFFINITY) &&
+diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
+index 6d7b12cba0158..bf79f3a890af2 100644
+--- a/include/net/pkt_sched.h
++++ b/include/net/pkt_sched.h
+@@ -11,6 +11,7 @@
+ #include <uapi/linux/pkt_sched.h>
+ 
+ #define DEFAULT_TX_QUEUE_LEN	1000
++#define STAB_SIZE_LOG_MAX	30
+ 
+ struct qdisc_walker {
+ 	int	stop;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e5c4aca620c58..22c5b1622c226 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3707,6 +3707,29 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
+ 	return 0;
+ }
+ 
++static inline bool event_update_userpage(struct perf_event *event)
++{
++	if (likely(!atomic_read(&event->mmap_count)))
++		return false;
++
++	perf_event_update_time(event);
++	perf_set_shadow_time(event, event->ctx);
++	perf_event_update_userpage(event);
++
++	return true;
++}
++
++static inline void group_update_userpage(struct perf_event *group_event)
++{
++	struct perf_event *event;
++
++	if (!event_update_userpage(group_event))
++		return;
++
++	for_each_sibling_event(event, group_event)
++		event_update_userpage(event);
++}
++
+ static int merge_sched_in(struct perf_event *event, void *data)
+ {
+ 	struct perf_event_context *ctx = event->ctx;
+@@ -3725,14 +3748,15 @@ static int merge_sched_in(struct perf_event *event, void *data)
+ 	}
+ 
+ 	if (event->state == PERF_EVENT_STATE_INACTIVE) {
++		*can_add_hw = 0;
+ 		if (event->attr.pinned) {
+ 			perf_cgroup_event_disable(event, ctx);
+ 			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
++		} else {
++			ctx->rotate_necessary = 1;
++			perf_mux_hrtimer_restart(cpuctx);
++			group_update_userpage(event);
+ 		}
+-
+-		*can_add_hw = 0;
+-		ctx->rotate_necessary = 1;
+-		perf_mux_hrtimer_restart(cpuctx);
+ 	}
+ 
+ 	return 0;
+@@ -6311,6 +6335,8 @@ accounting:
+ 
+ 		ring_buffer_attach(event, rb);
+ 
++		perf_event_update_time(event);
++		perf_set_shadow_time(event, event->ctx);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
+ 	} else {
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index de2cf3943b91e..a579ea14a69b6 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -273,6 +273,7 @@ ip6t_do_table(struct sk_buff *skb,
+ 	 * things we don't know, ie. tcp syn flag or ports).  If the
+ 	 * rule is also a fragment-specific rule, non-fragments won't
+ 	 * match it. */
++	acpar.fragoff = 0;
+ 	acpar.hotdrop = false;
+ 	acpar.state   = state;
+ 
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index efbefcbac3ac6..7cab1cf09bf1a 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -60,7 +60,10 @@ static struct mesh_table *mesh_table_alloc(void)
+ 	atomic_set(&newtbl->entries,  0);
+ 	spin_lock_init(&newtbl->gates_lock);
+ 	spin_lock_init(&newtbl->walk_lock);
+-	rhashtable_init(&newtbl->rhead, &mesh_rht_params);
++	if (rhashtable_init(&newtbl->rhead, &mesh_rht_params)) {
++		kfree(newtbl);
++		return NULL;
++	}
+ 
+ 	return newtbl;
+ }
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 2563473b5cf16..e023e307c0c3d 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4053,7 +4053,8 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ 		if (!bssid)
+ 			return false;
+ 		if (ether_addr_equal(sdata->vif.addr, hdr->addr2) ||
+-		    ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2))
++		    ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2) ||
++		    !is_valid_ether_addr(hdr->addr2))
+ 			return false;
+ 		if (ieee80211_is_beacon(hdr->frame_control))
+ 			return true;
+diff --git a/net/netfilter/nf_nat_masquerade.c b/net/netfilter/nf_nat_masquerade.c
+index 8e8a65d46345b..acd73f717a088 100644
+--- a/net/netfilter/nf_nat_masquerade.c
++++ b/net/netfilter/nf_nat_masquerade.c
+@@ -9,8 +9,19 @@
+ 
+ #include <net/netfilter/nf_nat_masquerade.h>
+ 
++struct masq_dev_work {
++	struct work_struct work;
++	struct net *net;
++	union nf_inet_addr addr;
++	int ifindex;
++	int (*iter)(struct nf_conn *i, void *data);
++};
++
++#define MAX_MASQ_WORKER_COUNT	16
++
+ static DEFINE_MUTEX(masq_mutex);
+ static unsigned int masq_refcnt __read_mostly;
++static atomic_t masq_worker_count __read_mostly;
+ 
+ unsigned int
+ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
+@@ -63,13 +74,71 @@ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4);
+ 
+-static int device_cmp(struct nf_conn *i, void *ifindex)
++static void iterate_cleanup_work(struct work_struct *work)
++{
++	struct masq_dev_work *w;
++
++	w = container_of(work, struct masq_dev_work, work);
++
++	nf_ct_iterate_cleanup_net(w->net, w->iter, (void *)w, 0, 0);
++
++	put_net(w->net);
++	kfree(w);
++	atomic_dec(&masq_worker_count);
++	module_put(THIS_MODULE);
++}
++
++/* Iterate conntrack table in the background and remove conntrack entries
++ * that use the device/address being removed.
++ *
++ * In case too many work items have been queued already or memory allocation
++ * fails iteration is skipped, conntrack entries will time out eventually.
++ * fails, iteration is skipped; conntrack entries will time out eventually.
++static void nf_nat_masq_schedule(struct net *net, union nf_inet_addr *addr,
++				 int ifindex,
++				 int (*iter)(struct nf_conn *i, void *data),
++				 gfp_t gfp_flags)
++{
++	struct masq_dev_work *w;
++
++	if (atomic_read(&masq_worker_count) > MAX_MASQ_WORKER_COUNT)
++		return;
++
++	net = maybe_get_net(net);
++	if (!net)
++		return;
++
++	if (!try_module_get(THIS_MODULE))
++		goto err_module;
++
++	w = kzalloc(sizeof(*w), gfp_flags);
++	if (w) {
++		/* We can overshoot MAX_MASQ_WORKER_COUNT, no big deal */
++		atomic_inc(&masq_worker_count);
++
++		INIT_WORK(&w->work, iterate_cleanup_work);
++		w->ifindex = ifindex;
++		w->net = net;
++		w->iter = iter;
++		if (addr)
++			w->addr = *addr;
++		schedule_work(&w->work);
++		return;
++	}
++
++	module_put(THIS_MODULE);
++ err_module:
++	put_net(net);
++}
++
++static int device_cmp(struct nf_conn *i, void *arg)
+ {
+ 	const struct nf_conn_nat *nat = nfct_nat(i);
++	const struct masq_dev_work *w = arg;
+ 
+ 	if (!nat)
+ 		return 0;
+-	return nat->masq_index == (int)(long)ifindex;
++	return nat->masq_index == w->ifindex;
+ }
+ 
+ static int masq_device_event(struct notifier_block *this,
+@@ -85,8 +154,8 @@ static int masq_device_event(struct notifier_block *this,
+ 		 * and forget them.
+ 		 */
+ 
+-		nf_ct_iterate_cleanup_net(net, device_cmp,
+-					  (void *)(long)dev->ifindex, 0, 0);
++		nf_nat_masq_schedule(net, NULL, dev->ifindex,
++				     device_cmp, GFP_KERNEL);
+ 	}
+ 
+ 	return NOTIFY_DONE;
+@@ -94,35 +163,45 @@ static int masq_device_event(struct notifier_block *this,
+ 
+ static int inet_cmp(struct nf_conn *ct, void *ptr)
+ {
+-	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
+-	struct net_device *dev = ifa->ifa_dev->dev;
+ 	struct nf_conntrack_tuple *tuple;
++	struct masq_dev_work *w = ptr;
+ 
+-	if (!device_cmp(ct, (void *)(long)dev->ifindex))
++	if (!device_cmp(ct, ptr))
+ 		return 0;
+ 
+ 	tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
+ 
+-	return ifa->ifa_address == tuple->dst.u3.ip;
++	return nf_inet_addr_cmp(&w->addr, &tuple->dst.u3);
+ }
+ 
+ static int masq_inet_event(struct notifier_block *this,
+ 			   unsigned long event,
+ 			   void *ptr)
+ {
+-	struct in_device *idev = ((struct in_ifaddr *)ptr)->ifa_dev;
+-	struct net *net = dev_net(idev->dev);
++	const struct in_ifaddr *ifa = ptr;
++	const struct in_device *idev;
++	const struct net_device *dev;
++	union nf_inet_addr addr;
++
++	if (event != NETDEV_DOWN)
++		return NOTIFY_DONE;
+ 
+ 	/* The masq_dev_notifier will catch the case of the device going
+ 	 * down.  So if the inetdev is dead and being destroyed we have
+ 	 * no work to do.  Otherwise this is an individual address removal
+ 	 * and we have to perform the flush.
+ 	 */
++	idev = ifa->ifa_dev;
+ 	if (idev->dead)
+ 		return NOTIFY_DONE;
+ 
+-	if (event == NETDEV_DOWN)
+-		nf_ct_iterate_cleanup_net(net, inet_cmp, ptr, 0, 0);
++	memset(&addr, 0, sizeof(addr));
++
++	addr.ip = ifa->ifa_address;
++
++	dev = idev->dev;
++	nf_nat_masq_schedule(dev_net(idev->dev), &addr, dev->ifindex,
++			     inet_cmp, GFP_KERNEL);
+ 
+ 	return NOTIFY_DONE;
+ }
+@@ -136,8 +215,6 @@ static struct notifier_block masq_inet_notifier = {
+ };
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-static atomic_t v6_worker_count __read_mostly;
+-
+ static int
+ nat_ipv6_dev_get_saddr(struct net *net, const struct net_device *dev,
+ 		       const struct in6_addr *daddr, unsigned int srcprefs,
+@@ -187,40 +264,6 @@ nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range,
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6);
+ 
+-struct masq_dev_work {
+-	struct work_struct work;
+-	struct net *net;
+-	struct in6_addr addr;
+-	int ifindex;
+-};
+-
+-static int inet6_cmp(struct nf_conn *ct, void *work)
+-{
+-	struct masq_dev_work *w = (struct masq_dev_work *)work;
+-	struct nf_conntrack_tuple *tuple;
+-
+-	if (!device_cmp(ct, (void *)(long)w->ifindex))
+-		return 0;
+-
+-	tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
+-
+-	return ipv6_addr_equal(&w->addr, &tuple->dst.u3.in6);
+-}
+-
+-static void iterate_cleanup_work(struct work_struct *work)
+-{
+-	struct masq_dev_work *w;
+-
+-	w = container_of(work, struct masq_dev_work, work);
+-
+-	nf_ct_iterate_cleanup_net(w->net, inet6_cmp, (void *)w, 0, 0);
+-
+-	put_net(w->net);
+-	kfree(w);
+-	atomic_dec(&v6_worker_count);
+-	module_put(THIS_MODULE);
+-}
+-
+ /* atomic notifier; can't call nf_ct_iterate_cleanup_net (it can sleep).
+  *
+  * Defer it to the system workqueue.
+@@ -233,36 +276,19 @@ static int masq_inet6_event(struct notifier_block *this,
+ {
+ 	struct inet6_ifaddr *ifa = ptr;
+ 	const struct net_device *dev;
+-	struct masq_dev_work *w;
+-	struct net *net;
++	union nf_inet_addr addr;
+ 
+-	if (event != NETDEV_DOWN || atomic_read(&v6_worker_count) >= 16)
++	if (event != NETDEV_DOWN)
+ 		return NOTIFY_DONE;
+ 
+ 	dev = ifa->idev->dev;
+-	net = maybe_get_net(dev_net(dev));
+-	if (!net)
+-		return NOTIFY_DONE;
+ 
+-	if (!try_module_get(THIS_MODULE))
+-		goto err_module;
++	memset(&addr, 0, sizeof(addr));
+ 
+-	w = kmalloc(sizeof(*w), GFP_ATOMIC);
+-	if (w) {
+-		atomic_inc(&v6_worker_count);
+-
+-		INIT_WORK(&w->work, iterate_cleanup_work);
+-		w->ifindex = dev->ifindex;
+-		w->net = net;
+-		w->addr = ifa->addr;
+-		schedule_work(&w->work);
++	addr.in6 = ifa->addr;
+ 
+-		return NOTIFY_DONE;
+-	}
+-
+-	module_put(THIS_MODULE);
+- err_module:
+-	put_net(net);
++	nf_nat_masq_schedule(dev_net(dev), &addr, dev->ifindex, inet_cmp,
++			     GFP_ATOMIC);
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index f87d07736a140..148edd0e71e32 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -513,6 +513,12 @@ static struct qdisc_size_table *qdisc_get_stab(struct nlattr *opt,
+ 		return stab;
+ 	}
+ 
++	if (s->size_log > STAB_SIZE_LOG_MAX ||
++	    s->cell_log > STAB_SIZE_LOG_MAX) {
++		NL_SET_ERR_MSG(extack, "Invalid logarithmic size of size table");
++		return ERR_PTR(-EINVAL);
++	}
++
+ 	stab = kmalloc(sizeof(*stab) + tsize * sizeof(u16), GFP_KERNEL);
+ 	if (!stab)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index cb5b5e3a481b9..daf731364695b 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -184,13 +184,16 @@ static int detect_quirks(struct snd_oxfw *oxfw, const struct ieee1394_device_id
+ 			model = val;
+ 	}
+ 
+-	/*
+-	 * Mackie Onyx Satellite with base station has a quirk to report a wrong
+-	 * value in 'dbs' field of CIP header against its format information.
+-	 */
+-	if (vendor == VENDOR_LOUD && model == MODEL_SATELLITE)
++	if (vendor == VENDOR_LOUD) {
++		// Mackie Onyx Satellite with base station has a quirk to report a wrong
++		// value in 'dbs' field of CIP header against its format information.
+ 		oxfw->quirks |= SND_OXFW_QUIRK_WRONG_DBS;
+ 
++		// OXFW971-based models may transfer events by blocking method.
++		if (!(oxfw->quirks & SND_OXFW_QUIRK_JUMBO_PAYLOAD))
++			oxfw->quirks |= SND_OXFW_QUIRK_BLOCKING_TRANSMISSION;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 1a867c73a48e0..cb3afc4519cf6 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -860,6 +860,11 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 			      cpus + *cpu_id, cpu_dai_num,
+ 			      codecs, codec_num,
+ 			      NULL, &sdw_ops);
++		/*
++		 * SoundWire DAILINKs use 'stream' functions and Bank Switch operations
++		 * based on wait_for_completion(), tag them as 'nonatomic'.
++		 */
++		dai_links[*be_index].nonatomic = true;
+ 
+ 		ret = set_codec_init_func(link, dai_links + (*be_index)++,
+ 					  playback, group_id);
+diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
+index 3e4dd4a86363b..59d0d7b2b55c8 100644
+--- a/sound/soc/sof/core.c
++++ b/sound/soc/sof/core.c
+@@ -371,7 +371,6 @@ int snd_sof_device_remove(struct device *dev)
+ 			dev_warn(dev, "error: %d failed to prepare DSP for device removal",
+ 				 ret);
+ 
+-		snd_sof_fw_unload(sdev);
+ 		snd_sof_ipc_free(sdev);
+ 		snd_sof_free_debug(sdev);
+ 		snd_sof_free_trace(sdev);
+@@ -394,8 +393,7 @@ int snd_sof_device_remove(struct device *dev)
+ 		snd_sof_remove(sdev);
+ 
+ 	/* release firmware */
+-	release_firmware(pdata->fw);
+-	pdata->fw = NULL;
++	snd_sof_fw_unload(sdev);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sof/loader.c b/sound/soc/sof/loader.c
+index 2b38a77cd594f..9c3f251a0dd05 100644
+--- a/sound/soc/sof/loader.c
++++ b/sound/soc/sof/loader.c
+@@ -880,5 +880,7 @@ EXPORT_SYMBOL(snd_sof_run_firmware);
+ void snd_sof_fw_unload(struct snd_sof_dev *sdev)
+ {
+ 	/* TODO: support module unloading at runtime */
++	release_firmware(sdev->pdata->fw);
++	sdev->pdata->fw = NULL;
+ }
+ EXPORT_SYMBOL(snd_sof_fw_unload);
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 6abfc9d079e7c..fa75b7e72ad1f 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -1020,7 +1020,7 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ 	return 0;
+ }
+ 
+-static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
++static int usb_audio_resume(struct usb_interface *intf)
+ {
+ 	struct snd_usb_audio *chip = usb_get_intfdata(intf);
+ 	struct snd_usb_stream *as;
+@@ -1046,7 +1046,7 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ 	 * we just notify and restart the mixers
+ 	 */
+ 	list_for_each_entry(mixer, &chip->mixer_list, list) {
+-		err = snd_usb_mixer_resume(mixer, reset_resume);
++		err = snd_usb_mixer_resume(mixer);
+ 		if (err < 0)
+ 			goto err_out;
+ 	}
+@@ -1066,20 +1066,10 @@ err_out:
+ 	atomic_dec(&chip->active); /* allow autopm after this point */
+ 	return err;
+ }
+-
+-static int usb_audio_resume(struct usb_interface *intf)
+-{
+-	return __usb_audio_resume(intf, false);
+-}
+-
+-static int usb_audio_reset_resume(struct usb_interface *intf)
+-{
+-	return __usb_audio_resume(intf, true);
+-}
+ #else
+ #define usb_audio_suspend	NULL
+ #define usb_audio_resume	NULL
+-#define usb_audio_reset_resume	NULL
++#define usb_audio_resume	NULL
+ #endif		/* CONFIG_PM */
+ 
+ static const struct usb_device_id usb_audio_ids [] = {
+@@ -1101,7 +1091,7 @@ static struct usb_driver usb_audio_driver = {
+ 	.disconnect =	usb_audio_disconnect,
+ 	.suspend =	usb_audio_suspend,
+ 	.resume =	usb_audio_resume,
+-	.reset_resume =	usb_audio_reset_resume,
++	.reset_resume =	usb_audio_resume,
+ 	.id_table =	usb_audio_ids,
+ 	.supports_autosuspend = 1,
+ };
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 9b713b4a5ec4c..fa7cf982d39e5 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3655,33 +3655,16 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
+ 	return 0;
+ }
+ 
+-static int default_mixer_reset_resume(struct usb_mixer_elem_list *list)
+-{
+-	int err;
+-
+-	if (list->resume) {
+-		err = list->resume(list);
+-		if (err < 0)
+-			return err;
+-	}
+-	return restore_mixer_value(list);
+-}
+-
+-int snd_usb_mixer_resume(struct usb_mixer_interface *mixer, bool reset_resume)
++int snd_usb_mixer_resume(struct usb_mixer_interface *mixer)
+ {
+ 	struct usb_mixer_elem_list *list;
+-	usb_mixer_elem_resume_func_t f;
+ 	int id, err;
+ 
+ 	/* restore cached mixer values */
+ 	for (id = 0; id < MAX_ID_ELEMS; id++) {
+ 		for_each_mixer_elem(list, mixer, id) {
+-			if (reset_resume)
+-				f = list->reset_resume;
+-			else
+-				f = list->resume;
+-			if (f) {
+-				err = f(list);
++			if (list->resume) {
++				err = list->resume(list);
+ 				if (err < 0)
+ 					return err;
+ 			}
+@@ -3702,7 +3685,6 @@ void snd_usb_mixer_elem_init_std(struct usb_mixer_elem_list *list,
+ 	list->id = unitid;
+ 	list->dump = snd_usb_mixer_dump_cval;
+ #ifdef CONFIG_PM
+-	list->resume = NULL;
+-	list->reset_resume = default_mixer_reset_resume;
++	list->resume = restore_mixer_value;
+ #endif
+ }
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index ea41e7a1f7bf2..16567912b998e 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -70,7 +70,6 @@ struct usb_mixer_elem_list {
+ 	bool is_std_info;
+ 	usb_mixer_elem_dump_func_t dump;
+ 	usb_mixer_elem_resume_func_t resume;
+-	usb_mixer_elem_resume_func_t reset_resume;
+ };
+ 
+ /* iterate over mixer element list of the given unit id */
+@@ -122,7 +121,7 @@ int snd_usb_mixer_vol_tlv(struct snd_kcontrol *kcontrol, int op_flag,
+ 
+ #ifdef CONFIG_PM
+ int snd_usb_mixer_suspend(struct usb_mixer_interface *mixer);
+-int snd_usb_mixer_resume(struct usb_mixer_interface *mixer, bool reset_resume);
++int snd_usb_mixer_resume(struct usb_mixer_interface *mixer);
+ #endif
+ 
+ int snd_usb_set_cur_mix_value(struct usb_mixer_elem_info *cval, int channel,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 0a3cb8fd7d004..4a4d3361ac047 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -151,7 +151,7 @@ static int add_single_ctl_with_resume(struct usb_mixer_interface *mixer,
+ 		*listp = list;
+ 	list->mixer = mixer;
+ 	list->id = id;
+-	list->reset_resume = resume;
++	list->resume = resume;
+ 	kctl = snd_ctl_new1(knew, list);
+ 	if (!kctl) {
+ 		kfree(list);
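
The nf_nat_masquerade change above moves the conntrack cleanup out of
(possibly atomic) notifier context by capturing the needed state and
deferring the table walk to the system workqueue. A bare-bones sketch of
that defer-to-workqueue pattern (illustrative only; the names below are
invented, not the patch's):

#include <linux/workqueue.h>
#include <linux/slab.h>

struct cleanup_work {
	struct work_struct work;
	int ifindex;		/* state captured at schedule time */
};

static void cleanup_fn(struct work_struct *work)
{
	struct cleanup_work *w = container_of(work, struct cleanup_work, work);

	/* process context: sleeping (e.g. a conntrack table walk) is OK */
	kfree(w);
}

/* safe from atomic context when called with GFP_ATOMIC */
static void schedule_cleanup(int ifindex, gfp_t gfp)
{
	struct cleanup_work *w = kzalloc(sizeof(*w), gfp);

	if (!w)
		return;	/* best effort: stale entries simply time out */

	w->ifindex = ifindex;
	INIT_WORK(&w->work, cleanup_fn);
	schedule_work(&w->work);
}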


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-18 21:17 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-18 21:17 UTC (permalink / raw
  To: gentoo-commits

commit:     94ecf1189db01fb00ff9286d4497ed594578abe3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 18 21:14:04 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Oct 18 21:17:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=94ecf118

For systemd, select CONFIG_KCMP as systemd uses the kcmp() call

KCMP was previously selected only indirectly, via CHECKPOINT_RESTORE.

Thanks to Mike Gilbert for reporting.

Bug: https://bugs.gentoo.org/818832

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
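
For reference, kcmp() is a Linux-only syscall with no glibc wrapper;
systemd invokes it via syscall(2) to check whether two file descriptors
refer to the same open file description, and with CONFIG_KCMP=n the call
fails with ENOSYS. A minimal sketch of such a check (illustrative only;
the function name below is made up, not systemd's):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kcmp.h>

/* kcmp() has no libc wrapper; returns 0 when both (pid, fd) pairs
 * name the same open file description, fails with ENOSYS if the
 * kernel was built without CONFIG_KCMP. */
static int same_open_file(pid_t pid1, int fd1, pid_t pid2, int fd2)
{
	return syscall(SYS_kcmp, pid1, pid2, KCMP_FILE,
		       (unsigned long)fd1, (unsigned long)fd2);
}

int main(void)
{
	int dupfd = dup(STDOUT_FILENO);

	/* dup'ed descriptors share one description, so this prints 0 */
	printf("%d\n", same_open_file(getpid(), STDOUT_FILENO,
				      getpid(), dupfd));
	return 0;
}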

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 74e80d3..95a64aa 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -124,7 +124,6 @@
 +	select BPF_SYSCALL
 +	select CGROUP_BPF
 +	select CGROUPS
-+	select CHECKPOINT_RESTORE
 +	select CRYPTO_HMAC 
 +	select CRYPTO_SHA256
 +	select CRYPTO_USER_API_HASH
@@ -136,6 +135,7 @@
 +	select FILE_LOCKING
 +	select INOTIFY_USER
 +	select IPV6
++	select KCMP
 +	select NET
 +	select NET_NS
 +	select PROC_FS


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-20 13:22 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-20 13:22 UTC (permalink / raw
  To: gentoo-commits

commit:     a7ca4775ed7327b81ce2817711f77e28b0bcb732
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 20 13:22:33 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 20 13:22:33 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a7ca4775

Linux patch 5.14.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-5.14.14.patch | 4421 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4425 insertions(+)

diff --git a/0000_README b/0000_README
index 31ed9a4..1bea116 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1012_linux-5.14.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.13
 
+Patch:  1013_linux-5.14.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-5.14.14.patch b/1013_linux-5.14.14.patch
new file mode 100644
index 0000000..f4e20f8
--- /dev/null
+++ b/1013_linux-5.14.14.patch
@@ -0,0 +1,4421 @@
+diff --git a/Makefile b/Makefile
+index 7bdca9dc0e61b..f05668e1ffaba 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+index f24bdd0870a52..72ce80fbf2662 100644
+--- a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
++++ b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+@@ -40,8 +40,8 @@
+ 		regulator-always-on;
+ 		regulator-settling-time-us = <5000>;
+ 		gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		status = "okay";
+ 	};
+ 
+@@ -217,15 +217,16 @@
+ };
+ 
+ &pcie0 {
+-	pci@1,0 {
++	pci@0,0 {
++		device_type = "pci";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+ 		reg = <0 0 0 0 0>;
+ 
+-		usb@1,0 {
+-			reg = <0x10000 0 0 0 0>;
++		usb@0,0 {
++			reg = <0 0 0 0 0>;
+ 			resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index b8a4096192aa9..3b60297af7f60 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -300,6 +300,14 @@
+ 			status = "disabled";
+ 		};
+ 
++		vec: vec@7ec13000 {
++			compatible = "brcm,bcm2711-vec";
++			reg = <0x7ec13000 0x1000>;
++			clocks = <&clocks BCM2835_CLOCK_VEC>;
++			interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
++			status = "disabled";
++		};
++
+ 		dvp: clock@7ef00000 {
+ 			compatible = "brcm,brcm2711-dvp";
+ 			reg = <0x7ef00000 0x10>;
+@@ -532,8 +540,8 @@
+ 				compatible = "brcm,genet-mdio-v5";
+ 				reg = <0xe14 0x8>;
+ 				reg-names = "mdio";
+-				#address-cells = <0x0>;
+-				#size-cells = <0x1>;
++				#address-cells = <0x1>;
++				#size-cells = <0x0>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-common.dtsi b/arch/arm/boot/dts/bcm2835-common.dtsi
+index 4119271c979d6..c25e797b90600 100644
+--- a/arch/arm/boot/dts/bcm2835-common.dtsi
++++ b/arch/arm/boot/dts/bcm2835-common.dtsi
+@@ -106,6 +106,14 @@
+ 			status = "okay";
+ 		};
+ 
++		vec: vec@7e806000 {
++			compatible = "brcm,bcm2835-vec";
++			reg = <0x7e806000 0x1000>;
++			clocks = <&clocks BCM2835_CLOCK_VEC>;
++			interrupts = <2 27>;
++			status = "disabled";
++		};
++
+ 		pixelvalve@7e807000 {
+ 			compatible = "brcm,bcm2835-pixelvalve2";
+ 			reg = <0x7e807000 0x100>;
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index 0f3be55201a5b..a3e06b6809476 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -464,14 +464,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		vec: vec@7e806000 {
+-			compatible = "brcm,bcm2835-vec";
+-			reg = <0x7e806000 0x1000>;
+-			clocks = <&clocks BCM2835_CLOCK_VEC>;
+-			interrupts = <2 27>;
+-			status = "disabled";
+-		};
+-
+ 		usb: usb@7e980000 {
+ 			compatible = "brcm,bcm2835-usb";
+ 			reg = <0x7e980000 0x10000>;
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 23505fc353247..a8158c9489666 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -43,7 +43,7 @@ void __init arm64_hugetlb_cma_reserve(void)
+ #ifdef CONFIG_ARM64_4K_PAGES
+ 	order = PUD_SHIFT - PAGE_SHIFT;
+ #else
+-	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
++	order = CONT_PMD_SHIFT - PAGE_SHIFT;
+ #endif
+ 	/*
+ 	 * HugeTLB CMA reservation is required for gigantic
+diff --git a/arch/csky/kernel/ptrace.c b/arch/csky/kernel/ptrace.c
+index 0105ac81b4328..1a5f54e0d2726 100644
+--- a/arch/csky/kernel/ptrace.c
++++ b/arch/csky/kernel/ptrace.c
+@@ -99,7 +99,8 @@ static int gpr_set(struct task_struct *target,
+ 	if (ret)
+ 		return ret;
+ 
+-	regs.sr = task_pt_regs(target)->sr;
++	/* BIT(0) of regs.sr is Condition Code/Carry bit */
++	regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
+ #ifdef CONFIG_CPU_HAS_HILO
+ 	regs.dcsr = task_pt_regs(target)->dcsr;
+ #endif
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index 312f046d452d8..6ba3969ec175e 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -52,10 +52,14 @@ static long restore_sigcontext(struct pt_regs *regs,
+ 	struct sigcontext __user *sc)
+ {
+ 	int err = 0;
++	unsigned long sr = regs->sr;
+ 
+ 	/* sc_pt_regs is structured the same as the start of pt_regs */
+ 	err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
+ 
++	/* BIT(0) of regs->sr is Condition Code/Carry bit */
++	regs->sr = (sr & ~1) | (regs->sr & 1);
++
+ 	/* Restore the floating-point state. */
+ 	err |= restore_fpu_state(sc);
+ 
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 8183ca343675a..1d2546ac6fbc3 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -961,7 +961,8 @@ static int xive_get_irqchip_state(struct irq_data *data,
+ 		 * interrupt to be inactive in that case.
+ 		 */
+ 		*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
+-			(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
++			(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
++			 !irqd_irq_disabled(data)));
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+diff --git a/arch/s390/lib/string.c b/arch/s390/lib/string.c
+index cfcdf76d6a957..a95ca6df4e5e6 100644
+--- a/arch/s390/lib/string.c
++++ b/arch/s390/lib/string.c
+@@ -259,14 +259,13 @@ EXPORT_SYMBOL(strcmp);
+ #ifdef __HAVE_ARCH_STRRCHR
+ char *strrchr(const char *s, int c)
+ {
+-       size_t len = __strend(s) - s;
+-
+-       if (len)
+-	       do {
+-		       if (s[len] == (char) c)
+-			       return (char *) s + len;
+-	       } while (--len > 0);
+-       return NULL;
++	ssize_t len = __strend(s) - s;
++
++	do {
++		if (s[len] == (char)c)
++			return (char *)s + len;
++	} while (--len >= 0);
++	return NULL;
+ }
+ EXPORT_SYMBOL(strrchr);
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 51341f2e218de..551eaab376f31 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1520,7 +1520,6 @@ config AMD_MEM_ENCRYPT
+ 
+ config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
+ 	bool "Activate AMD Secure Memory Encryption (SME) by default"
+-	default y
+ 	depends on AMD_MEM_ENCRYPT
+ 	help
+ 	  Say yes to have system memory encrypted by default if running on
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index 23001ae03e82b..7afc2d72d8634 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -590,6 +590,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
+ 	}
+ 
+ 	if (r->mon_capable && domain_setup_mon_state(r, d)) {
++		kfree(d->ctrl_val);
++		kfree(d->mbps_val);
+ 		kfree(d);
+ 		return;
+ 	}
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index fa17a27390ab0..831b25c5e7058 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -385,7 +385,7 @@ static int __fpu_restore_sig(void __user *buf, void __user *buf_fx,
+ 				return -EINVAL;
+ 		} else {
+ 			/* Mask invalid bits out for historical reasons (broken hardware). */
+-			fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
++			fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
+ 		}
+ 
+ 		/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index 0a0a982f9c28d..c0e77c1c8e09d 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -36,7 +36,7 @@ struct acpi_gtdt_descriptor {
+ 
+ static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata;
+ 
+-static inline void *next_platform_timer(void *platform_timer)
++static inline __init void *next_platform_timer(void *platform_timer)
+ {
+ 	struct acpi_gtdt_header *gh = platform_timer;
+ 
+diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
+index bd92b549fd5a4..1c48358b43ba3 100644
+--- a/drivers/acpi/x86/s2idle.c
++++ b/drivers/acpi/x86/s2idle.c
+@@ -371,7 +371,7 @@ static int lps0_device_attach(struct acpi_device *adev,
+ 		return 0;
+ 
+ 	if (acpi_s2idle_vendor_amd()) {
+-		/* AMD0004, AMDI0005:
++		/* AMD0004, AMD0005, AMDI0005:
+ 		 * - Should use rev_id 0x0
+ 		 * - function mask > 0x3: Should use AMD method, but has off by one bug
+ 		 * - function mask = 0x3: Should use Microsoft method
+@@ -390,6 +390,7 @@ static int lps0_device_attach(struct acpi_device *adev,
+ 					ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
+ 					&lps0_dsm_guid_microsoft);
+ 		if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
++						 !strcmp(hid, "AMD0005") ||
+ 						 !strcmp(hid, "AMDI0005"))) {
+ 			lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
+ 			acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index b2f5520882918..0910441321f72 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -440,10 +440,7 @@ struct ahci_host_priv *ahci_platform_get_resources(struct platform_device *pdev,
+ 	hpriv->phy_regulator = devm_regulator_get(dev, "phy");
+ 	if (IS_ERR(hpriv->phy_regulator)) {
+ 		rc = PTR_ERR(hpriv->phy_regulator);
+-		if (rc == -EPROBE_DEFER)
+-			goto err_out;
+-		rc = 0;
+-		hpriv->phy_regulator = NULL;
++		goto err_out;
+ 	}
+ 
+ 	if (flags & AHCI_PLATFORM_GET_RESETS) {
+diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c
+index c3e6592712c4b..0a8bf09a5c19e 100644
+--- a/drivers/ata/pata_legacy.c
++++ b/drivers/ata/pata_legacy.c
+@@ -352,7 +352,8 @@ static unsigned int pdc_data_xfer_vlb(struct ata_queued_cmd *qc,
+ 			iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+ 
+ 		if (unlikely(slop)) {
+-			__le32 pad;
++			__le32 pad = 0;
++
+ 			if (rw == READ) {
+ 				pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
+ 				memcpy(buf + buflen - slop, &pad, slop);
+@@ -742,7 +743,8 @@ static unsigned int vlb32_data_xfer(struct ata_queued_cmd *qc,
+ 			ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
+ 
+ 		if (unlikely(slop)) {
+-			__le32 pad;
++			__le32 pad = 0;
++
+ 			if (rw == WRITE) {
+ 				memcpy(&pad, buf + buflen - slop, slop);
+ 				iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 56f54e6eb9874..f150ebebb3068 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -675,7 +675,8 @@ struct device_link *device_link_add(struct device *consumer,
+ {
+ 	struct device_link *link;
+ 
+-	if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
++	if (!consumer || !supplier || consumer == supplier ||
++	    flags & ~DL_ADD_VALID_FLAGS ||
+ 	    (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
+ 	    (flags & DL_FLAG_SYNC_STATE_ONLY &&
+ 	     (flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
+diff --git a/drivers/block/rnbd/rnbd-clt-sysfs.c b/drivers/block/rnbd/rnbd-clt-sysfs.c
+index 324afdd63a967..102c08ad4dd06 100644
+--- a/drivers/block/rnbd/rnbd-clt-sysfs.c
++++ b/drivers/block/rnbd/rnbd-clt-sysfs.c
+@@ -71,8 +71,10 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
+ 	int opt_mask = 0;
+ 	int token;
+ 	int ret = -EINVAL;
+-	int i, dest_port, nr_poll_queues;
++	int nr_poll_queues = 0;
++	int dest_port = 0;
+ 	int p_cnt = 0;
++	int i;
+ 
+ 	options = kstrdup(buf, GFP_KERNEL);
+ 	if (!options)
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index afb37aac09e88..e870248d5c152 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -692,28 +692,6 @@ static const struct blk_mq_ops virtio_mq_ops = {
+ static unsigned int virtblk_queue_depth;
+ module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
+ 
+-static int virtblk_validate(struct virtio_device *vdev)
+-{
+-	u32 blk_size;
+-
+-	if (!vdev->config->get) {
+-		dev_err(&vdev->dev, "%s failure: config access disabled\n",
+-			__func__);
+-		return -EINVAL;
+-	}
+-
+-	if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
+-		return 0;
+-
+-	blk_size = virtio_cread32(vdev,
+-			offsetof(struct virtio_blk_config, blk_size));
+-
+-	if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
+-		__virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
+-
+-	return 0;
+-}
+-
+ static int virtblk_probe(struct virtio_device *vdev)
+ {
+ 	struct virtio_blk *vblk;
+@@ -725,6 +703,12 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	u8 physical_block_exp, alignment_offset;
+ 	unsigned int queue_depth;
+ 
++	if (!vdev->config->get) {
++		dev_err(&vdev->dev, "%s failure: config access disabled\n",
++			__func__);
++		return -EINVAL;
++	}
++
+ 	err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
+ 			     GFP_KERNEL);
+ 	if (err < 0)
+@@ -765,7 +749,7 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 		goto out_free_vblk;
+ 
+ 	/* Default queue sizing is to fill the ring. */
+-	if (likely(!virtblk_queue_depth)) {
++	if (!virtblk_queue_depth) {
+ 		queue_depth = vblk->vqs[0].vq->num_free;
+ 		/* ... but without indirect descs, we use 2 descs per req */
+ 		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
+@@ -839,14 +823,6 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	else
+ 		blk_size = queue_logical_block_size(q);
+ 
+-	if (unlikely(blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)) {
+-		dev_err(&vdev->dev,
+-			"block size is changed unexpectedly, now is %u\n",
+-			blk_size);
+-		err = -EINVAL;
+-		goto err_cleanup_disk;
+-	}
+-
+ 	/* Use topology information if available */
+ 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
+ 				   struct virtio_blk_config, physical_block_exp,
+@@ -905,8 +881,6 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 	device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);
+ 	return 0;
+ 
+-err_cleanup_disk:
+-	blk_cleanup_disk(vblk->disk);
+ out_free_tags:
+ 	blk_mq_free_tag_set(&vblk->tag_set);
+ out_free_vq:
+@@ -1009,7 +983,6 @@ static struct virtio_driver virtio_blk = {
+ 	.driver.name			= KBUILD_MODNAME,
+ 	.driver.owner			= THIS_MODULE,
+ 	.id_table			= id_table,
+-	.validate			= virtblk_validate,
+ 	.probe				= virtblk_probe,
+ 	.remove				= virtblk_remove,
+ 	.config_changed			= virtblk_config_changed,
+diff --git a/drivers/bus/simple-pm-bus.c b/drivers/bus/simple-pm-bus.c
+index 01a3d0cd08edc..6b8d6257ed8a4 100644
+--- a/drivers/bus/simple-pm-bus.c
++++ b/drivers/bus/simple-pm-bus.c
+@@ -13,11 +13,36 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ 
+-
+ static int simple_pm_bus_probe(struct platform_device *pdev)
+ {
+-	const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
+-	struct device_node *np = pdev->dev.of_node;
++	const struct device *dev = &pdev->dev;
++	const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
++	struct device_node *np = dev->of_node;
++	const struct of_device_id *match;
++
++	/*
++	 * Allow user to use driver_override to bind this driver to a
++	 * transparent bus device which has a different compatible string
++	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
++	 * of the simple-pm-bus tasks for these devices, so return early.
++	 */
++	if (pdev->driver_override)
++		return 0;
++
++	match = of_match_device(dev->driver->of_match_table, dev);
++	/*
++	 * These are transparent bus devices (not simple-pm-bus matches) that
++	 * have their child nodes populated automatically.  So, don't need to
++	 * do anything more. We only match with the device if this driver is
++	 * the most specific match because we don't want to incorrectly bind to
++	 * a device that has a more specific driver.
++	 */
++	if (match && match->data) {
++		if (of_property_match_string(np, "compatible", match->compatible) == 0)
++			return 0;
++		else
++			return -ENODEV;
++	}
+ 
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+@@ -31,14 +56,25 @@ static int simple_pm_bus_probe(struct platform_device *pdev)
+ 
+ static int simple_pm_bus_remove(struct platform_device *pdev)
+ {
++	const void *data = of_device_get_match_data(&pdev->dev);
++
++	if (pdev->driver_override || data)
++		return 0;
++
+ 	dev_dbg(&pdev->dev, "%s\n", __func__);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	return 0;
+ }
+ 
++#define ONLY_BUS	((void *) 1) /* Match if the device is only a bus. */
++
+ static const struct of_device_id simple_pm_bus_of_match[] = {
+ 	{ .compatible = "simple-pm-bus", },
++	{ .compatible = "simple-bus",	.data = ONLY_BUS },
++	{ .compatible = "simple-mfd",	.data = ONLY_BUS },
++	{ .compatible = "isa",		.data = ONLY_BUS },
++	{ .compatible = "arm,amba-bus",	.data = ONLY_BUS },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+diff --git a/drivers/clk/renesas/renesas-rzg2l-cpg.c b/drivers/clk/renesas/renesas-rzg2l-cpg.c
+index f894a210de902..ab6149b2048b5 100644
+--- a/drivers/clk/renesas/renesas-rzg2l-cpg.c
++++ b/drivers/clk/renesas/renesas-rzg2l-cpg.c
+@@ -398,7 +398,7 @@ static int rzg2l_mod_clock_is_enabled(struct clk_hw *hw)
+ 
+ 	value = readl(priv->base + CLK_MON_R(clock->off));
+ 
+-	return !(value & bitmask);
++	return value & bitmask;
+ }
+ 
+ static const struct clk_ops rzg2l_mod_clock_ops = {
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 242e94c0cf8a3..bf8cd928c2283 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -165,13 +165,6 @@ static const struct clk_parent_data mpu_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
+-static const struct clk_parent_data s2f_usr0_mux[] = {
+-	{ .fw_name = "f2s-free-clk",
+-	  .name = "f2s-free-clk", },
+-	{ .fw_name = "boot_clk",
+-	  .name = "boot_clk", },
+-};
+-
+ static const struct clk_parent_data emac_mux[] = {
+ 	{ .fw_name = "emaca_free_clk",
+ 	  .name = "emaca_free_clk", },
+@@ -312,8 +305,6 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  4, 0x44, 28, 1, 0, 0, 0},
+ 	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
+ 	  5, 0, 0, 0, 0x30, 1, 0},
+-	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
+-	  6, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+ 	  0, 0, 0, 0, 0x94, 26, 0},
+ 	{ AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+diff --git a/drivers/edac/armada_xp_edac.c b/drivers/edac/armada_xp_edac.c
+index e3e757513d1bc..b1f46a974b9e0 100644
+--- a/drivers/edac/armada_xp_edac.c
++++ b/drivers/edac/armada_xp_edac.c
+@@ -178,7 +178,7 @@ static void axp_mc_check(struct mem_ctl_info *mci)
+ 				     "details unavailable (multiple errors)");
+ 	if (cnt_dbe)
+ 		edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci,
+-				     cnt_sbe, /* error count */
++				     cnt_dbe, /* error count */
+ 				     0, 0, 0, /* pfn, offset, syndrome */
+ 				     -1, -1, -1, /* top, mid, low layer */
+ 				     mci->ctl_name,
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index 00fe595a5bc89..fca1e311ea6c7 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -49,6 +49,15 @@ static int ffa_device_probe(struct device *dev)
+ 	return ffa_drv->probe(ffa_dev);
+ }
+ 
++static int ffa_device_remove(struct device *dev)
++{
++	struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
++
++	ffa_drv->remove(to_ffa_dev(dev));
++
++	return 0;
++}
++
+ static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
+ {
+ 	struct ffa_device *ffa_dev = to_ffa_dev(dev);
+@@ -86,6 +95,7 @@ struct bus_type ffa_bus_type = {
+ 	.name		= "arm_ffa",
+ 	.match		= ffa_device_match,
+ 	.probe		= ffa_device_probe,
++	.remove		= ffa_device_remove,
+ 	.uevent		= ffa_device_uevent,
+ 	.dev_groups	= ffa_device_attributes_groups,
+ };
+@@ -127,7 +137,7 @@ static void ffa_release_device(struct device *dev)
+ 
+ static int __ffa_devices_unregister(struct device *dev, void *data)
+ {
+-	ffa_release_device(dev);
++	device_unregister(dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index ea7ca74fc1730..232c092c4c970 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -25,8 +25,6 @@
+ #include <acpi/ghes.h>
+ #include <ras/ras_event.h>
+ 
+-static char rcd_decode_str[CPER_REC_LEN];
+-
+ /*
+  * CPER record ID need to be unique even after reboot, because record
+  * ID is used as index for ERST storage, while CPER records from
+@@ -313,6 +311,7 @@ const char *cper_mem_err_unpack(struct trace_seq *p,
+ 				struct cper_mem_err_compact *cmem)
+ {
+ 	const char *ret = trace_seq_buffer_ptr(p);
++	char rcd_decode_str[CPER_REC_LEN];
+ 
+ 	if (cper_mem_err_location(cmem, rcd_decode_str))
+ 		trace_seq_printf(p, "%s", rcd_decode_str);
+@@ -327,6 +326,7 @@ static void cper_print_mem(const char *pfx, const struct cper_sec_mem_err *mem,
+ 	int len)
+ {
+ 	struct cper_mem_err_compact cmem;
++	char rcd_decode_str[CPER_REC_LEN];
+ 
+ 	/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
+ 	if (len == sizeof(struct cper_sec_mem_err_old) &&
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index 1410beaef5c30..f3e54f6616f02 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -414,7 +414,7 @@ static void virt_efi_reset_system(int reset_type,
+ 				  unsigned long data_size,
+ 				  efi_char16_t *data)
+ {
+-	if (down_interruptible(&efi_runtime_lock)) {
++	if (down_trylock(&efi_runtime_lock)) {
+ 		pr_warn("failed to invoke the reset_system() runtime service:\n"
+ 			"could not get exclusive access to the firmware\n");
+ 		return;
+diff --git a/drivers/fpga/ice40-spi.c b/drivers/fpga/ice40-spi.c
+index 69dec5af23c36..029d3cdb918d1 100644
+--- a/drivers/fpga/ice40-spi.c
++++ b/drivers/fpga/ice40-spi.c
+@@ -192,12 +192,19 @@ static const struct of_device_id ice40_fpga_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ice40_fpga_of_match);
+ 
++static const struct spi_device_id ice40_fpga_spi_ids[] = {
++	{ .name = "ice40-fpga-mgr", },
++	{},
++};
++MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids);
++
+ static struct spi_driver ice40_fpga_driver = {
+ 	.probe = ice40_fpga_probe,
+ 	.driver = {
+ 		.name = "ice40spi",
+ 		.of_match_table = of_match_ptr(ice40_fpga_of_match),
+ 	},
++	.id_table = ice40_fpga_spi_ids,
+ };
+ 
+ module_spi_driver(ice40_fpga_driver);
+diff --git a/drivers/gpio/gpio-74x164.c b/drivers/gpio/gpio-74x164.c
+index 05637d5851526..4a55cdf089d62 100644
+--- a/drivers/gpio/gpio-74x164.c
++++ b/drivers/gpio/gpio-74x164.c
+@@ -174,6 +174,13 @@ static int gen_74x164_remove(struct spi_device *spi)
+ 	return 0;
+ }
+ 
++static const struct spi_device_id gen_74x164_spi_ids[] = {
++	{ .name = "74hc595" },
++	{ .name = "74lvc594" },
++	{},
++};
++MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids);
++
+ static const struct of_device_id gen_74x164_dt_ids[] = {
+ 	{ .compatible = "fairchild,74hc595" },
+ 	{ .compatible = "nxp,74lvc594" },
+@@ -188,6 +195,7 @@ static struct spi_driver gen_74x164_driver = {
+ 	},
+ 	.probe		= gen_74x164_probe,
+ 	.remove		= gen_74x164_remove,
++	.id_table	= gen_74x164_spi_ids,
+ };
+ module_spi_driver(gen_74x164_driver);
+ 
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 8ebf369b3ba0f..d2fe76f3f34fd 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -559,21 +559,21 @@ static int pca953x_gpio_set_pull_up_down(struct pca953x_chip *chip,
+ 
+ 	mutex_lock(&chip->i2c_lock);
+ 
+-	/* Disable pull-up/pull-down */
+-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
+-	if (ret)
+-		goto exit;
+-
+ 	/* Configure pull-up/pull-down */
+ 	if (config == PIN_CONFIG_BIAS_PULL_UP)
+ 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
+ 	else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
+ 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
++	else
++		ret = 0;
+ 	if (ret)
+ 		goto exit;
+ 
+-	/* Enable pull-up/pull-down */
+-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
++	/* Disable/Enable pull-up/pull-down */
++	if (config == PIN_CONFIG_BIAS_DISABLE)
++		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
++	else
++		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
+ 
+ exit:
+ 	mutex_unlock(&chip->i2c_lock);
+@@ -587,7 +587,9 @@ static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
+ 
+ 	switch (pinconf_to_config_param(config)) {
+ 	case PIN_CONFIG_BIAS_PULL_UP:
++	case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
++	case PIN_CONFIG_BIAS_DISABLE:
+ 		return pca953x_gpio_set_pull_up_down(chip, offset, config);
+ 	default:
+ 		return -ENOTSUPP;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 81d5f25242469..1dfb2efac6c25 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -1834,11 +1834,20 @@ static void connector_bad_edid(struct drm_connector *connector,
+ 			       u8 *edid, int num_blocks)
+ {
+ 	int i;
+-	u8 num_of_ext = edid[0x7e];
++	u8 last_block;
++
++	/*
++	 * 0x7e in the EDID is the number of extension blocks. The EDID
++	 * is 1 (base block) + num_ext_blocks big. That means we can think
++	 * of 0x7e in the EDID of the _index_ of the last block in the
++	 * of 0x7e in the EDID as the _index_ of the last block in the
++	 */
++	last_block = edid[0x7e];
+ 
+ 	/* Calculate real checksum for the last edid extension block data */
+-	connector->real_edid_checksum =
+-		drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
++	if (last_block < num_blocks)
++		connector->real_edid_checksum =
++			drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
+ 
+ 	if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
+ 		return;
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index d77a24507d309..c2257a761e38e 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1506,6 +1506,7 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
+ {
+ 	struct drm_client_dev *client = &fb_helper->client;
+ 	struct drm_device *dev = fb_helper->dev;
++	struct drm_mode_config *config = &dev->mode_config;
+ 	int ret = 0;
+ 	int crtc_count = 0;
+ 	struct drm_connector_list_iter conn_iter;
+@@ -1663,6 +1664,11 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
+ 	/* Handle our overallocation */
+ 	sizes.surface_height *= drm_fbdev_overalloc;
+ 	sizes.surface_height /= 100;
++	if (sizes.surface_height > config->max_height) {
++		drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
++			    config->max_height);
++		sizes.surface_height = config->max_height;
++	}
+ 
+ 	/* push down into drivers */
+ 	ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
+diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+index 4534633fe7cdb..8fb847c174ff8 100644
+--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+@@ -571,13 +571,14 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
+ 	}
+ 
+ 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
+-	ret = IS_ERR(icc_path);
+-	if (ret)
++	if (IS_ERR(icc_path)) {
++		ret = PTR_ERR(icc_path);
+ 		goto fail;
++	}
+ 
+ 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
+-	ret = IS_ERR(ocmem_icc_path);
+-	if (ret) {
++	if (IS_ERR(ocmem_icc_path)) {
++		ret = PTR_ERR(ocmem_icc_path);
+ 		/* allow -ENODATA, ocmem icc is optional */
+ 		if (ret != -ENODATA)
+ 			goto fail;
+diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+index 82bebb40234de..a96ee79cc5e08 100644
+--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+@@ -699,13 +699,14 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
+ 	}
+ 
+ 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
+-	ret = IS_ERR(icc_path);
+-	if (ret)
++	if (IS_ERR(icc_path)) {
++		ret = PTR_ERR(icc_path);
+ 		goto fail;
++	}
+ 
+ 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
+-	ret = IS_ERR(ocmem_icc_path);
+-	if (ret) {
++	if (IS_ERR(ocmem_icc_path)) {
++		ret = PTR_ERR(ocmem_icc_path);
+ 		/* allow -ENODATA, ocmem icc is optional */
+ 		if (ret != -ENODATA)
+ 			goto fail;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 183b9f9c1b315..1b3519b821a3f 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -102,7 +102,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ 	u32 asid;
+ 	u64 memptr = rbmemptr(ring, ttbr0);
+ 
+-	if (ctx == a6xx_gpu->cur_ctx)
++	if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
+ 		return;
+ 
+ 	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+@@ -135,7 +135,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
+ 	OUT_PKT7(ring, CP_EVENT_WRITE, 1);
+ 	OUT_RING(ring, 0x31);
+ 
+-	a6xx_gpu->cur_ctx = ctx;
++	a6xx_gpu->cur_ctx_seqno = ctx->seqno;
+ }
+ 
+ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+@@ -1053,7 +1053,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
+ 	/* Always come up on rb 0 */
+ 	a6xx_gpu->cur_ring = gpu->rb[0];
+ 
+-	a6xx_gpu->cur_ctx = NULL;
++	a6xx_gpu->cur_ctx_seqno = 0;
+ 
+ 	/* Enable the SQE_to start the CP engine */
+ 	gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 0bc2d062f54ab..8e5527c881b1e 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -19,7 +19,16 @@ struct a6xx_gpu {
+ 	uint64_t sqe_iova;
+ 
+ 	struct msm_ringbuffer *cur_ring;
+-	struct msm_file_private *cur_ctx;
++
++	/**
++	 * cur_ctx_seqno:
++	 *
++	 * The ctx->seqno value of the context with current pgtables
++	 * installed.  Tracked by seqno rather than pointer value to
++	 * avoid dangling pointers, and cases where a ctx can be freed
++	 * and a new one created with the same address.
++	 */
++	int cur_ctx_seqno;
+ 
+ 	struct a6xx_gmu gmu;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index f482e0911d039..bb7d066618e64 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1125,6 +1125,20 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
+ 	__drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+ }
+ 
++static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
++	.set_config = drm_atomic_helper_set_config,
++	.destroy = mdp5_crtc_destroy,
++	.page_flip = drm_atomic_helper_page_flip,
++	.reset = mdp5_crtc_reset,
++	.atomic_duplicate_state = mdp5_crtc_duplicate_state,
++	.atomic_destroy_state = mdp5_crtc_destroy_state,
++	.atomic_print_state = mdp5_crtc_atomic_print_state,
++	.get_vblank_counter = mdp5_crtc_get_vblank_counter,
++	.enable_vblank  = msm_crtc_enable_vblank,
++	.disable_vblank = msm_crtc_disable_vblank,
++	.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
++};
++
+ static const struct drm_crtc_funcs mdp5_crtc_funcs = {
+ 	.set_config = drm_atomic_helper_set_config,
+ 	.destroy = mdp5_crtc_destroy,
+@@ -1313,6 +1327,8 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
+ 	mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true;
+ 
+ 	drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane,
++				  cursor_plane ?
++				  &mdp5_crtc_no_lm_cursor_funcs :
+ 				  &mdp5_crtc_funcs, NULL);
+ 
+ 	drm_flip_work_init(&mdp5_crtc->unref_cursor_work,
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 29d11f1cb79b0..4ccf27f66d025 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -208,8 +208,10 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ 		goto fail;
+ 	}
+ 
+-	if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
++	if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
++		ret = -EINVAL;
+ 		goto fail;
++	}
+ 
+ 	msm_dsi->encoder = encoder;
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index ed504fe5074f6..52826ba350af7 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -463,7 +463,7 @@ static int dsi_bus_clk_enable(struct msm_dsi_host *msm_host)
+ 
+ 	return 0;
+ err:
+-	for (; i > 0; i--)
++	while (--i >= 0)
+ 		clk_disable_unprepare(msm_host->bus_clks[i]);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index bb31230721bdd..3e1101451c8ac 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -110,14 +110,13 @@ static struct dsi_pll_14nm *pll_14nm_list[DSI_MAX];
+ static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm,
+ 				    u32 nb_tries, u32 timeout_us)
+ {
+-	bool pll_locked = false;
++	bool pll_locked = false, pll_ready = false;
+ 	void __iomem *base = pll_14nm->phy->pll_base;
+ 	u32 tries, val;
+ 
+ 	tries = nb_tries;
+ 	while (tries--) {
+-		val = dsi_phy_read(base +
+-			       REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
++		val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
+ 		pll_locked = !!(val & BIT(5));
+ 
+ 		if (pll_locked)
+@@ -126,23 +125,24 @@ static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm,
+ 		udelay(timeout_us);
+ 	}
+ 
+-	if (!pll_locked) {
+-		tries = nb_tries;
+-		while (tries--) {
+-			val = dsi_phy_read(base +
+-				REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
+-			pll_locked = !!(val & BIT(0));
++	if (!pll_locked)
++		goto out;
+ 
+-			if (pll_locked)
+-				break;
++	tries = nb_tries;
++	while (tries--) {
++		val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS);
++		pll_ready = !!(val & BIT(0));
+ 
+-			udelay(timeout_us);
+-		}
++		if (pll_ready)
++			break;
++
++		udelay(timeout_us);
+ 	}
+ 
+-	DBG("DSI PLL is %slocked", pll_locked ? "" : "*not* ");
++out:
++	DBG("DSI PLL is %slocked, %sready", pll_locked ? "" : "*not* ", pll_ready ? "" : "*not* ");
+ 
+-	return pll_locked;
++	return pll_locked && pll_ready;
+ }
+ 
+ static void dsi_pll_14nm_config_init(struct dsi_pll_config *pconf)
+diff --git a/drivers/gpu/drm/msm/edp/edp_ctrl.c b/drivers/gpu/drm/msm/edp/edp_ctrl.c
+index 4fb397ee7c842..fe1366b4c49f5 100644
+--- a/drivers/gpu/drm/msm/edp/edp_ctrl.c
++++ b/drivers/gpu/drm/msm/edp/edp_ctrl.c
+@@ -1116,7 +1116,7 @@ void msm_edp_ctrl_power(struct edp_ctrl *ctrl, bool on)
+ int msm_edp_ctrl_init(struct msm_edp *edp)
+ {
+ 	struct edp_ctrl *ctrl = NULL;
+-	struct device *dev = &edp->pdev->dev;
++	struct device *dev;
+ 	int ret;
+ 
+ 	if (!edp) {
+@@ -1124,6 +1124,7 @@ int msm_edp_ctrl_init(struct msm_edp *edp)
+ 		return -EINVAL;
+ 	}
+ 
++	dev = &edp->pdev->dev;
+ 	ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
+ 	if (!ctrl)
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 729ab68d02034..bcb24810905fd 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -566,10 +566,11 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 	if (ret)
+ 		goto err_msm_uninit;
+ 
+-	ret = msm_disp_snapshot_init(ddev);
+-	if (ret)
+-		DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
+-
++	if (kms) {
++		ret = msm_disp_snapshot_init(ddev);
++		if (ret)
++			DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
++	}
+ 	drm_mode_config_reset(ddev);
+ 
+ #ifdef CONFIG_DRM_FBDEV_EMULATION
+@@ -618,6 +619,7 @@ static void load_gpu(struct drm_device *dev)
+ 
+ static int context_init(struct drm_device *dev, struct drm_file *file)
+ {
++	static atomic_t ident = ATOMIC_INIT(0);
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct msm_file_private *ctx;
+ 
+@@ -631,6 +633,8 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
+ 	ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
+ 	file->driver_priv = ctx;
+ 
++	ctx->seqno = atomic_inc_return(&ident);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 1a48a709ffb36..183d837dcea32 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -59,6 +59,7 @@ struct msm_file_private {
+ 	int queueid;
+ 	struct msm_gem_address_space *aspace;
+ 	struct kref ref;
++	int seqno;
+ };
+ 
+ enum msm_mdp_plane_property {
+@@ -535,7 +536,7 @@ static inline int align_pitch(int width, int bpp)
+ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ {
+ 	ktime_t now = ktime_get();
+-	unsigned long remaining_jiffies;
++	s64 remaining_jiffies;
+ 
+ 	if (ktime_compare(*timeout, now) < 0) {
+ 		remaining_jiffies = 0;
+@@ -544,7 +545,7 @@ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ 		remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+ 	}
+ 
+-	return remaining_jiffies;
++	return clamp(remaining_jiffies, 0LL, (s64)INT_MAX);
+ }
+ 
+ #endif /* __MSM_DRV_H__ */
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index 44f84bfd0c0e7..a10b79f9729ef 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -161,7 +161,8 @@ out:
+ static int submit_lookup_cmds(struct msm_gem_submit *submit,
+ 		struct drm_msm_gem_submit *args, struct drm_file *file)
+ {
+-	unsigned i, sz;
++	unsigned i;
++	size_t sz;
+ 	int ret = 0;
+ 
+ 	for (i = 0; i < args->nr_cmds; i++) {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c
+index 353b77d9b3dcf..3492c561f2cfc 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c
+@@ -82,7 +82,7 @@ g84_fifo_chan_engine_fini(struct nvkm_fifo_chan *base,
+ 	if (offset < 0)
+ 		return 0;
+ 
+-	engn = fifo->base.func->engine_id(&fifo->base, engine);
++	engn = fifo->base.func->engine_id(&fifo->base, engine) - 1;
+ 	save = nvkm_mask(device, 0x002520, 0x0000003f, 1 << engn);
+ 	nvkm_wr32(device, 0x0032fc, chan->base.inst->addr >> 12);
+ 	done = nvkm_msec(device, 2000,
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index ef87d92cdf496..a9c8e05355aa6 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -273,6 +273,7 @@ config DRM_PANEL_OLIMEX_LCD_OLINUXINO
+ 	depends on OF
+ 	depends on I2C
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	select CRC32
+ 	help
+ 	  The panel is used with different sizes LCDs, from 480x272 to
+ 	  1280x800, and 24 bit per pixel.
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index 0019f1ea7df27..f41db9e0249a7 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -738,7 +738,7 @@ static irqreturn_t fxls8962af_interrupt(int irq, void *p)
+ 
+ 	if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) {
+ 		ret = fxls8962af_fifo_flush(indio_dev);
+-		if (ret)
++		if (ret < 0)
+ 			return IRQ_NONE;
+ 
+ 		return IRQ_HANDLED;
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index ee8ed9481025d..2121a812b0c31 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -293,6 +293,7 @@ static const struct ad_sigma_delta_info ad7192_sigma_delta_info = {
+ 	.has_registers = true,
+ 	.addr_shift = 3,
+ 	.read_mask = BIT(6),
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ static const struct ad_sd_calib_data ad7192_calib_arr[8] = {
+diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
+index 42bb952f47388..b6e8c8abf6f4c 100644
+--- a/drivers/iio/adc/ad7780.c
++++ b/drivers/iio/adc/ad7780.c
+@@ -203,7 +203,7 @@ static const struct ad_sigma_delta_info ad7780_sigma_delta_info = {
+ 	.set_mode = ad7780_set_mode,
+ 	.postprocess_sample = ad7780_postprocess_sample,
+ 	.has_registers = false,
+-	.irq_flags = IRQF_TRIGGER_LOW,
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ #define _AD7780_CHANNEL(_bits, _wordsize, _mask_all)		\
+diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c
+index ef3e2d3ecb0c6..0e7ab3fb072a9 100644
+--- a/drivers/iio/adc/ad7793.c
++++ b/drivers/iio/adc/ad7793.c
+@@ -206,7 +206,7 @@ static const struct ad_sigma_delta_info ad7793_sigma_delta_info = {
+ 	.has_registers = true,
+ 	.addr_shift = 3,
+ 	.read_mask = BIT(6),
+-	.irq_flags = IRQF_TRIGGER_LOW,
++	.irq_flags = IRQF_TRIGGER_FALLING,
+ };
+ 
+ static const struct ad_sd_calib_data ad7793_calib_arr[6] = {
+diff --git a/drivers/iio/adc/aspeed_adc.c b/drivers/iio/adc/aspeed_adc.c
+index 19efaa41bc344..34ec0c28b2dff 100644
+--- a/drivers/iio/adc/aspeed_adc.c
++++ b/drivers/iio/adc/aspeed_adc.c
+@@ -183,6 +183,7 @@ static int aspeed_adc_probe(struct platform_device *pdev)
+ 
+ 	data = iio_priv(indio_dev);
+ 	data->dev = &pdev->dev;
++	platform_set_drvdata(pdev, indio_dev);
+ 
+ 	data->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(data->base))
+diff --git a/drivers/iio/adc/max1027.c b/drivers/iio/adc/max1027.c
+index 655ab02d03d84..b753658bb41ec 100644
+--- a/drivers/iio/adc/max1027.c
++++ b/drivers/iio/adc/max1027.c
+@@ -103,7 +103,7 @@ MODULE_DEVICE_TABLE(of, max1027_adc_dt_ids);
+ 			.sign = 'u',					\
+ 			.realbits = depth,				\
+ 			.storagebits = 16,				\
+-			.shift = 2,					\
++			.shift = (depth == 10) ? 2 : 0,			\
+ 			.endianness = IIO_BE,				\
+ 		},							\
+ 	}
+@@ -142,7 +142,6 @@ MODULE_DEVICE_TABLE(of, max1027_adc_dt_ids);
+ 	MAX1027_V_CHAN(11, depth)
+ 
+ #define MAX1X31_CHANNELS(depth)			\
+-	MAX1X27_CHANNELS(depth),		\
+ 	MAX1X29_CHANNELS(depth),		\
+ 	MAX1027_V_CHAN(12, depth),		\
+ 	MAX1027_V_CHAN(13, depth),		\
+diff --git a/drivers/iio/adc/mt6577_auxadc.c b/drivers/iio/adc/mt6577_auxadc.c
+index 79c1dd68b9092..d4fccd52ef08b 100644
+--- a/drivers/iio/adc/mt6577_auxadc.c
++++ b/drivers/iio/adc/mt6577_auxadc.c
+@@ -82,6 +82,10 @@ static const struct iio_chan_spec mt6577_auxadc_iio_channels[] = {
+ 	MT6577_AUXADC_CHANNEL(15),
+ };
+ 
++/* For Voltage calculation */
++#define VOLTAGE_FULL_RANGE  1500	/* VA voltage */
++#define AUXADC_PRECISE      4096	/* 12 bits */
++
+ static int mt_auxadc_get_cali_data(int rawdata, bool enable_cali)
+ {
+ 	return rawdata;
+@@ -191,6 +195,10 @@ static int mt6577_auxadc_read_raw(struct iio_dev *indio_dev,
+ 		}
+ 		if (adc_dev->dev_comp->sample_data_cali)
+ 			*val = mt_auxadc_get_cali_data(*val, true);
++
++		/* Convert adc raw data to voltage: 0 - 1500 mV */
++		*val = *val * VOLTAGE_FULL_RANGE / AUXADC_PRECISE;
++
+ 		return IIO_VAL_INT;
+ 
+ 	default:
+diff --git a/drivers/iio/adc/ti-adc128s052.c b/drivers/iio/adc/ti-adc128s052.c
+index 3143f35a6509a..83c1ae07b3e9a 100644
+--- a/drivers/iio/adc/ti-adc128s052.c
++++ b/drivers/iio/adc/ti-adc128s052.c
+@@ -171,7 +171,13 @@ static int adc128_probe(struct spi_device *spi)
+ 	mutex_init(&adc->lock);
+ 
+ 	ret = iio_device_register(indio_dev);
++	if (ret)
++		goto err_disable_regulator;
+ 
++	return 0;
++
++err_disable_regulator:
++	regulator_disable(adc->reg);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/common/ssp_sensors/ssp_spi.c b/drivers/iio/common/ssp_sensors/ssp_spi.c
+index 4864c38b8d1c2..769bd9280524a 100644
+--- a/drivers/iio/common/ssp_sensors/ssp_spi.c
++++ b/drivers/iio/common/ssp_sensors/ssp_spi.c
+@@ -137,7 +137,7 @@ static int ssp_print_mcu_debug(char *data_frame, int *data_index,
+ 	if (length > received_len - *data_index || length <= 0) {
+ 		ssp_dbg("[SSP]: MSG From MCU-invalid debug length(%d/%d)\n",
+ 			length, received_len);
+-		return length ? length : -EPROTO;
++		return -EPROTO;
+ 	}
+ 
+ 	ssp_dbg("[SSP]: MSG From MCU - %s\n", &data_frame[*data_index]);
+@@ -273,6 +273,8 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 	for (idx = 0; idx < len;) {
+ 		switch (dataframe[idx++]) {
+ 		case SSP_MSG2AP_INST_BYPASS_DATA:
++			if (idx >= len)
++				return -EPROTO;
+ 			sd = dataframe[idx++];
+ 			if (sd < 0 || sd >= SSP_SENSOR_MAX) {
+ 				dev_err(SSP_DEV,
+@@ -282,10 +284,13 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 
+ 			if (indio_devs[sd]) {
+ 				spd = iio_priv(indio_devs[sd]);
+-				if (spd->process_data)
++				if (spd->process_data) {
++					if (idx >= len)
++						return -EPROTO;
+ 					spd->process_data(indio_devs[sd],
+ 							  &dataframe[idx],
+ 							  data->timestamp);
++				}
+ 			} else {
+ 				dev_err(SSP_DEV, "no client for frame\n");
+ 			}
+@@ -293,6 +298,8 @@ static int ssp_parse_dataframe(struct ssp_data *data, char *dataframe, int len)
+ 			idx += ssp_offset_map[sd];
+ 			break;
+ 		case SSP_MSG2AP_INST_DEBUG_DATA:
++			if (idx >= len)
++				return -EPROTO;
+ 			sd = ssp_print_mcu_debug(dataframe, &idx, len);
+ 			if (sd) {
+ 				dev_err(SSP_DEV,
+diff --git a/drivers/iio/dac/ti-dac5571.c b/drivers/iio/dac/ti-dac5571.c
+index 2a5ba1b08a1d0..546a4cf6c5ef8 100644
+--- a/drivers/iio/dac/ti-dac5571.c
++++ b/drivers/iio/dac/ti-dac5571.c
+@@ -350,6 +350,7 @@ static int dac5571_probe(struct i2c_client *client,
+ 		data->dac5571_pwrdwn = dac5571_pwrdwn_quad;
+ 		break;
+ 	default:
++		ret = -EINVAL;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
+index eb48102f94243..287fff39a927a 100644
+--- a/drivers/iio/imu/adis16475.c
++++ b/drivers/iio/imu/adis16475.c
+@@ -353,10 +353,11 @@ static int adis16475_set_freq(struct adis16475 *st, const u32 freq)
+ 	if (dec > st->info->max_dec)
+ 		dec = st->info->max_dec;
+ 
+-	ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
++	ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
+ 	if (ret)
+ 		goto error;
+ 
++	adis_dev_unlock(&st->adis);
+ 	/*
+ 	 * If decimation is used, then gyro and accel data will have meaningful
+ 	 * bits on the LSB registers. This info is used on the trigger handler.
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index a869a6e52a16b..ed129321a14da 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -144,6 +144,7 @@ struct adis16480_chip_info {
+ 	unsigned int max_dec_rate;
+ 	const unsigned int *filter_freqs;
+ 	bool has_pps_clk_mode;
++	bool has_sleep_cnt;
+ 	const struct adis_data adis_data;
+ };
+ 
+@@ -939,6 +940,7 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ 		.temp_scale = 5650, /* 5.65 milli degree Celsius */
+ 		.int_clk = 2460000,
+ 		.max_dec_rate = 2048,
++		.has_sleep_cnt = true,
+ 		.filter_freqs = adis16480_def_filter_freqs,
+ 		.adis_data = ADIS16480_DATA(16375, &adis16485_timeouts, 0),
+ 	},
+@@ -952,6 +954,7 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ 		.temp_scale = 5650, /* 5.65 milli degree Celsius */
+ 		.int_clk = 2460000,
+ 		.max_dec_rate = 2048,
++		.has_sleep_cnt = true,
+ 		.filter_freqs = adis16480_def_filter_freqs,
+ 		.adis_data = ADIS16480_DATA(16480, &adis16480_timeouts, 0),
+ 	},
+@@ -965,6 +968,7 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ 		.temp_scale = 5650, /* 5.65 milli degree Celsius */
+ 		.int_clk = 2460000,
+ 		.max_dec_rate = 2048,
++		.has_sleep_cnt = true,
+ 		.filter_freqs = adis16480_def_filter_freqs,
+ 		.adis_data = ADIS16480_DATA(16485, &adis16485_timeouts, 0),
+ 	},
+@@ -978,6 +982,7 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
+ 		.temp_scale = 5650, /* 5.65 milli degree Celsius */
+ 		.int_clk = 2460000,
+ 		.max_dec_rate = 2048,
++		.has_sleep_cnt = true,
+ 		.filter_freqs = adis16480_def_filter_freqs,
+ 		.adis_data = ADIS16480_DATA(16488, &adis16485_timeouts, 0),
+ 	},
+@@ -1425,9 +1430,12 @@ static int adis16480_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, indio_dev);
+-	if (ret)
+-		return ret;
++	if (st->chip_info->has_sleep_cnt) {
++		ret = devm_add_action_or_reset(&spi->dev, adis16480_stop,
++					       indio_dev);
++		if (ret)
++			return ret;
++	}
+ 
+ 	ret = adis16480_config_irq_pin(spi->dev.of_node, st);
+ 	if (ret)
+diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c
+index 52963da401a78..1880bd5bb2586 100644
+--- a/drivers/iio/light/opt3001.c
++++ b/drivers/iio/light/opt3001.c
+@@ -276,6 +276,8 @@ static int opt3001_get_lux(struct opt3001 *opt, int *val, int *val2)
+ 		ret = wait_event_timeout(opt->result_ready_queue,
+ 				opt->result_ready,
+ 				msecs_to_jiffies(OPT3001_RESULT_READY_LONG));
++		if (ret == 0)
++			return -ETIMEDOUT;
+ 	} else {
+ 		/* Sleep for result ready time */
+ 		timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ?
+@@ -312,9 +314,7 @@ err:
+ 		/* Disallow IRQ to access the device while lock is active */
+ 		opt->ok_to_ignore_lock = false;
+ 
+-	if (ret == 0)
+-		return -ETIMEDOUT;
+-	else if (ret < 0)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	if (opt->use_irq) {
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 29de8412e4165..4c914f75a9027 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -334,6 +334,7 @@ static const struct xpad_device {
+ 	{ 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ 	{ 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
++	{ 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
+ 	{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ 	{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ 	{ 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+@@ -451,6 +452,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOXONE_VENDOR(0x24c6),		/* PowerA Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x2e24),		/* Hyperkin Duke X-Box One pad */
+ 	XPAD_XBOX360_VENDOR(0x2f24),		/* GameSir Controllers */
++	XPAD_XBOX360_VENDOR(0x3285),		/* Nacon GC-100 */
+ 	{ }
+ };
+ 
+diff --git a/drivers/input/touchscreen/resistive-adc-touch.c b/drivers/input/touchscreen/resistive-adc-touch.c
+index 744544a723b77..6f754a8d30b11 100644
+--- a/drivers/input/touchscreen/resistive-adc-touch.c
++++ b/drivers/input/touchscreen/resistive-adc-touch.c
+@@ -71,19 +71,22 @@ static int grts_cb(const void *data, void *private)
+ 		unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]];
+ 		unsigned int Rt;
+ 
+-		Rt = z2;
+-		Rt -= z1;
+-		Rt *= st->x_plate_ohms;
+-		Rt = DIV_ROUND_CLOSEST(Rt, 16);
+-		Rt *= x;
+-		Rt /= z1;
+-		Rt = DIV_ROUND_CLOSEST(Rt, 256);
+-		/*
+-		 * On increased pressure the resistance (Rt) is decreasing
+-		 * so, convert values to make it looks as real pressure.
+-		 */
+-		if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
+-			press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
++		if (likely(x && z1)) {
++			Rt = z2;
++			Rt -= z1;
++			Rt *= st->x_plate_ohms;
++			Rt = DIV_ROUND_CLOSEST(Rt, 16);
++			Rt *= x;
++			Rt /= z1;
++			Rt = DIV_ROUND_CLOSEST(Rt, 256);
++			/*
++			 * On increased pressure the resistance (Rt) is
++			 * decreasing so, convert values to make it looks as
++			 * real pressure.
++			 */
++			if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
++				press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
++		}
+ 	}
+ 
+ 	if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 0dbd48cbdff95..231efbe38c214 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -490,6 +490,14 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	struct mapped_device *md = tio->md;
+ 	struct dm_target *ti = md->immutable_target;
+ 
++	/*
++	 * blk-mq's unquiesce may come from outside events, such as
++	 * elevator switch, updating nr_requests or others, and request may
++	 * come during suspend, so simply ask for blk-mq to requeue it.
++	 */
++	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
++		return BLK_STS_RESOURCE;
++
+ 	if (unlikely(!ti)) {
+ 		int srcu_idx;
+ 		struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 2c5f9e5852117..bb895430981f2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -492,18 +492,17 @@ static void start_io_acct(struct dm_io *io)
+ 				    false, 0, &io->stats_aux);
+ }
+ 
+-static void end_io_acct(struct dm_io *io)
++static void end_io_acct(struct mapped_device *md, struct bio *bio,
++			unsigned long start_time, struct dm_stats_aux *stats_aux)
+ {
+-	struct mapped_device *md = io->md;
+-	struct bio *bio = io->orig_bio;
+-	unsigned long duration = jiffies - io->start_time;
++	unsigned long duration = jiffies - start_time;
+ 
+-	bio_end_io_acct(bio, io->start_time);
++	bio_end_io_acct(bio, start_time);
+ 
+ 	if (unlikely(dm_stats_used(&md->stats)))
+ 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ 				    bio->bi_iter.bi_sector, bio_sectors(bio),
+-				    true, duration, &io->stats_aux);
++				    true, duration, stats_aux);
+ 
+ 	/* nudge anyone waiting on suspend queue */
+ 	if (unlikely(wq_has_sleeper(&md->wait)))
+@@ -786,6 +785,8 @@ void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+ 	blk_status_t io_error;
+ 	struct bio *bio;
+ 	struct mapped_device *md = io->md;
++	unsigned long start_time = 0;
++	struct dm_stats_aux stats_aux;
+ 
+ 	/* Push-back supersedes any I/O errors */
+ 	if (unlikely(error)) {
+@@ -817,8 +818,10 @@ void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+ 		}
+ 
+ 		io_error = io->status;
+-		end_io_acct(io);
++		start_time = io->start_time;
++		stats_aux = io->stats_aux;
+ 		free_io(md, io);
++		end_io_acct(md, bio, start_time, &stats_aux);
+ 
+ 		if (io_error == BLK_STS_DM_REQUEUE)
+ 			return;
+diff --git a/drivers/misc/cb710/sgbuf2.c b/drivers/misc/cb710/sgbuf2.c
+index e5a4ed3701eb8..a798fad5f03c2 100644
+--- a/drivers/misc/cb710/sgbuf2.c
++++ b/drivers/misc/cb710/sgbuf2.c
+@@ -47,7 +47,7 @@ static inline bool needs_unaligned_copy(const void *ptr)
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ 	return false;
+ #else
+-	return ((ptr - NULL) & 3) != 0;
++	return ((uintptr_t)ptr & 3) != 0;
+ #endif
+ }
+ 
+diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
+index 4d09b672ac3c8..632325474233a 100644
+--- a/drivers/misc/eeprom/at25.c
++++ b/drivers/misc/eeprom/at25.c
+@@ -366,6 +366,13 @@ static const struct of_device_id at25_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, at25_of_match);
+ 
++static const struct spi_device_id at25_spi_ids[] = {
++	{ .name = "at25",},
++	{ .name = "fm25",},
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, at25_spi_ids);
++
+ static int at25_probe(struct spi_device *spi)
+ {
+ 	struct at25_data	*at25 = NULL;
+@@ -491,6 +498,7 @@ static struct spi_driver at25_driver = {
+ 		.dev_groups	= sernum_groups,
+ 	},
+ 	.probe		= at25_probe,
++	.id_table	= at25_spi_ids,
+ };
+ 
+ module_spi_driver(at25_driver);
+diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
+index 29d8971ec558b..1f15399e5cb49 100644
+--- a/drivers/misc/eeprom/eeprom_93xx46.c
++++ b/drivers/misc/eeprom/eeprom_93xx46.c
+@@ -406,6 +406,23 @@ static const struct of_device_id eeprom_93xx46_of_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table);
+ 
++static const struct spi_device_id eeprom_93xx46_spi_ids[] = {
++	{ .name = "eeprom-93xx46",
++	  .driver_data = (kernel_ulong_t)&at93c46_data, },
++	{ .name = "at93c46",
++	  .driver_data = (kernel_ulong_t)&at93c46_data, },
++	{ .name = "at93c46d",
++	  .driver_data = (kernel_ulong_t)&atmel_at93c46d_data, },
++	{ .name = "at93c56",
++	  .driver_data = (kernel_ulong_t)&at93c56_data, },
++	{ .name = "at93c66",
++	  .driver_data = (kernel_ulong_t)&at93c66_data, },
++	{ .name = "93lc46b",
++	  .driver_data = (kernel_ulong_t)&microchip_93lc46b_data, },
++	{}
++};
++MODULE_DEVICE_TABLE(spi, eeprom_93xx46_spi_ids);
++
+ static int eeprom_93xx46_probe_dt(struct spi_device *spi)
+ {
+ 	const struct of_device_id *of_id =
+@@ -555,6 +572,7 @@ static struct spi_driver eeprom_93xx46_driver = {
+ 	},
+ 	.probe		= eeprom_93xx46_probe,
+ 	.remove		= eeprom_93xx46_remove,
++	.id_table	= eeprom_93xx46_spi_ids,
+ };
+ 
+ module_spi_driver(eeprom_93xx46_driver);
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index beda610e6b30d..ad6ced4546556 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -814,10 +814,12 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ 			rpra[i].pv = (u64) ctx->args[i].ptr;
+ 			pages[i].addr = ctx->maps[i]->phys;
+ 
++			mmap_read_lock(current->mm);
+ 			vma = find_vma(current->mm, ctx->args[i].ptr);
+ 			if (vma)
+ 				pages[i].addr += ctx->args[i].ptr -
+ 						 vma->vm_start;
++			mmap_read_unlock(current->mm);
+ 
+ 			pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT;
+ 			pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >>
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index 99b5c1ecc4441..be41843df75bc 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1298,7 +1298,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
+ 		    dev->hbm_state != MEI_HBM_STARTING) {
+-			if (dev->dev_state == MEI_DEV_POWER_DOWN) {
++			if (dev->dev_state == MEI_DEV_POWER_DOWN ||
++			    dev->dev_state == MEI_DEV_POWERING_DOWN) {
+ 				dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n");
+ 				return 0;
+ 			}
+@@ -1381,7 +1382,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
+ 		    dev->hbm_state != MEI_HBM_DR_SETUP) {
+-			if (dev->dev_state == MEI_DEV_POWER_DOWN) {
++			if (dev->dev_state == MEI_DEV_POWER_DOWN ||
++			    dev->dev_state == MEI_DEV_POWERING_DOWN) {
+ 				dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n");
+ 				return 0;
+ 			}
+@@ -1448,7 +1450,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
+ 		    dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) {
+-			if (dev->dev_state == MEI_DEV_POWER_DOWN) {
++			if (dev->dev_state == MEI_DEV_POWER_DOWN ||
++			    dev->dev_state == MEI_DEV_POWERING_DOWN) {
+ 				dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n");
+ 				return 0;
+ 			}
+@@ -1490,7 +1493,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
+ 		    dev->hbm_state != MEI_HBM_ENUM_CLIENTS) {
+-			if (dev->dev_state == MEI_DEV_POWER_DOWN) {
++			if (dev->dev_state == MEI_DEV_POWER_DOWN ||
++			    dev->dev_state == MEI_DEV_POWERING_DOWN) {
+ 				dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n");
+ 				return 0;
+ 			}
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index cb34925e10f15..67bb6a25fd0a0 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -92,6 +92,7 @@
+ #define MEI_DEV_ID_CDF        0x18D3  /* Cedar Fork */
+ 
+ #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
++#define MEI_DEV_ID_ICP_N      0x38E0  /* Ice Lake Point N */
+ 
+ #define MEI_DEV_ID_JSP_N      0x4DE0  /* Jasper Lake Point N */
+ 
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index c3393b383e598..3a45aaf002ac8 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -96,6 +96,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index ef0badea4f415..04e6f7b267064 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -1676,13 +1676,17 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
+ 	struct nand_ecc_ctrl *ecc = &chip->ecc;
+ 	int data_size1, data_size2, oob_size1, oob_size2;
+ 	int ret, reg_off = FLASH_BUF_ACC, read_loc = 0;
++	int raw_cw = cw;
+ 
+ 	nand_read_page_op(chip, page, 0, NULL, 0);
+ 	host->use_ecc = false;
+ 
++	if (nandc->props->qpic_v2)
++		raw_cw = ecc->steps - 1;
++
+ 	clear_bam_transaction(nandc);
+ 	set_address(host, host->cw_size * cw, page);
+-	update_rw_regs(host, 1, true, cw);
++	update_rw_regs(host, 1, true, raw_cw);
+ 	config_nand_page_read(chip);
+ 
+ 	data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1);
+@@ -1711,7 +1715,7 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
+ 		nandc_set_read_loc(chip, cw, 3, read_loc, oob_size2, 1);
+ 	}
+ 
+-	config_nand_cw_read(chip, false, cw);
++	config_nand_cw_read(chip, false, raw_cw);
+ 
+ 	read_data_dma(nandc, reg_off, data_buf, data_size1, 0);
+ 	reg_off += data_size1;
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 1542bfb8b5e54..7c2968a639eba 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -449,8 +449,10 @@ EXPORT_SYMBOL(ksz_switch_register);
+ void ksz_switch_remove(struct ksz_device *dev)
+ {
+ 	/* timer started */
+-	if (dev->mib_read_interval)
++	if (dev->mib_read_interval) {
++		dev->mib_read_interval = 0;
+ 		cancel_delayed_work_sync(&dev->mib_read);
++	}
+ 
+ 	dev->dev_ops->exit(dev);
+ 	dsa_unregister_switch(dev->ds);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 66b4f4a9832a4..f5b2e5e87da43 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -749,7 +749,11 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
+ 	ops = chip->info->ops;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+-	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
++	/* Internal PHYs propagate their configuration directly to the MAC.
++	 * External PHYs depend on whether the PPU is enabled for this port.
++	 */
++	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
++	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
+ 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
+ 		err = ops->port_sync_link(chip, port, mode, false);
+ 	mv88e6xxx_reg_unlock(chip);
+@@ -772,7 +776,12 @@ static void mv88e6xxx_mac_link_up(struct dsa_switch *ds, int port,
+ 	ops = chip->info->ops;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+-	if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
++	/* Internal PHYs propagate their configuration directly to the MAC.
++	 * External PHYs depend on whether the PPU is enabled for this port.
++	 */
++	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
++	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
++	    mode == MLO_AN_FIXED) {
+ 		/* FIXME: for an automedia port, should we force the link
+ 		 * down here - what if the link comes up due to "other" media
+ 		 * while we're bringing the port up, how is the exclusivity
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index a2a15919b9606..0ba3762d5c219 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -271,12 +271,12 @@ static void felix_8021q_cpu_port_deinit(struct ocelot *ocelot, int port)
+  */
+ static int felix_setup_mmio_filtering(struct felix *felix)
+ {
+-	unsigned long user_ports = 0, cpu_ports = 0;
++	unsigned long user_ports = dsa_user_ports(felix->ds);
+ 	struct ocelot_vcap_filter *redirect_rule;
+ 	struct ocelot_vcap_filter *tagging_rule;
+ 	struct ocelot *ocelot = &felix->ocelot;
+ 	struct dsa_switch *ds = felix->ds;
+-	int port, ret;
++	int cpu = -1, port, ret;
+ 
+ 	tagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL);
+ 	if (!tagging_rule)
+@@ -289,12 +289,15 @@ static int felix_setup_mmio_filtering(struct felix *felix)
+ 	}
+ 
+ 	for (port = 0; port < ocelot->num_phys_ports; port++) {
+-		if (dsa_is_user_port(ds, port))
+-			user_ports |= BIT(port);
+-		if (dsa_is_cpu_port(ds, port))
+-			cpu_ports |= BIT(port);
++		if (dsa_is_cpu_port(ds, port)) {
++			cpu = port;
++			break;
++		}
+ 	}
+ 
++	if (cpu < 0)
++		return -EINVAL;
++
+ 	tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE;
+ 	*(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);
+ 	*(__be16 *)tagging_rule->key.etype.etype.mask = htons(0xffff);
+@@ -330,7 +333,7 @@ static int felix_setup_mmio_filtering(struct felix *felix)
+ 		 * the CPU port module
+ 		 */
+ 		redirect_rule->action.mask_mode = OCELOT_MASK_MODE_REDIRECT;
+-		redirect_rule->action.port_mask = cpu_ports;
++		redirect_rule->action.port_mask = BIT(cpu);
+ 	} else {
+ 		/* Trap PTP packets only to the CPU port module (which is
+ 		 * redirected to the NPI port)
+@@ -1241,6 +1244,7 @@ static int felix_setup(struct dsa_switch *ds)
+ 		 * there's no real point in checking for errors.
+ 		 */
+ 		felix_set_tag_protocol(ds, port, felix->tag_proto);
++		break;
+ 	}
+ 
+ 	ds->mtu_enforcement_ingress = true;
+@@ -1277,6 +1281,7 @@ static void felix_teardown(struct dsa_switch *ds)
+ 			continue;
+ 
+ 		felix_del_tag_protocol(ds, port, felix->tag_proto);
++		break;
+ 	}
+ 
+ 	ocelot_devlink_sb_unregister(ocelot);
+@@ -1406,8 +1411,12 @@ static void felix_txtstamp(struct dsa_switch *ds, int port,
+ 	if (!ocelot->ptp)
+ 		return;
+ 
+-	if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone))
++	if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) {
++		dev_err_ratelimited(ds->dev,
++				    "port %d delivering skb without TX timestamp\n",
++				    port);
+ 		return;
++	}
+ 
+ 	if (clone)
+ 		OCELOT_SKB_CB(skb)->clone = clone;
+diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
+index 1cdff1dca790c..17d3da4605ec0 100644
+--- a/drivers/net/ethernet/Kconfig
++++ b/drivers/net/ethernet/Kconfig
+@@ -100,6 +100,7 @@ config JME
+ config KORINA
+ 	tristate "Korina (IDT RC32434) Ethernet support"
+ 	depends on MIKROTIK_RB532 || COMPILE_TEST
++	select CRC32
+ 	select MII
+ 	help
+ 	  If you have a Mikrotik RouterBoard 500 or IDT RC32434
+diff --git a/drivers/net/ethernet/arc/Kconfig b/drivers/net/ethernet/arc/Kconfig
+index 37a41773dd435..92a79c4ffa2c7 100644
+--- a/drivers/net/ethernet/arc/Kconfig
++++ b/drivers/net/ethernet/arc/Kconfig
+@@ -21,6 +21,7 @@ config ARC_EMAC_CORE
+ 	depends on ARC || ARCH_ROCKCHIP || COMPILE_TEST
+ 	select MII
+ 	select PHYLIB
++	select CRC32
+ 
+ config ARC_EMAC
+ 	tristate "ARC EMAC support"
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 234bc68e79f96..c2465b9d80567 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1324,22 +1324,21 @@ ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
+ {
+ 	u8 idx;
+ 
+-	spin_lock(&tx->lock);
+-
+ 	for (idx = 0; idx < tx->len; idx++) {
+ 		u8 phy_idx = idx + tx->quad_offset;
+ 
+-		/* Clear any potential residual timestamp in the PHY block */
+-		if (!pf->hw.reset_ongoing)
+-			ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
+-
++		spin_lock(&tx->lock);
+ 		if (tx->tstamps[idx].skb) {
+ 			dev_kfree_skb_any(tx->tstamps[idx].skb);
+ 			tx->tstamps[idx].skb = NULL;
+ 		}
+-	}
++		clear_bit(idx, tx->in_use);
++		spin_unlock(&tx->lock);
+ 
+-	spin_unlock(&tx->lock);
++		/* Clear any potential residual timestamp in the PHY block */
++		if (!pf->hw.reset_ongoing)
++			ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+index 360e093874d4f..c74600be570ed 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+@@ -154,6 +154,8 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {};
+ 	int err;
+ 
++	mlx5_debug_cq_remove(dev, cq);
++
+ 	mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
+ 	mlx5_eq_del_cq(&cq->eq->core, cq);
+ 
+@@ -161,16 +163,13 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
+ 	MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
+ 	MLX5_SET(destroy_cq_in, in, uid, cq->uid);
+ 	err = mlx5_cmd_exec_in(dev, destroy_cq, in);
+-	if (err)
+-		return err;
+ 
+ 	synchronize_irq(cq->irqn);
+ 
+-	mlx5_debug_cq_remove(dev, cq);
+ 	mlx5_cq_put(cq);
+ 	wait_for_completion(&cq->free);
+ 
+-	return 0;
++	return err;
+ }
+ EXPORT_SYMBOL(mlx5_core_destroy_cq);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 548e8e7fc956e..56fdcd487b9d7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3724,20 +3724,67 @@ static int set_feature_rx_all(struct net_device *netdev, bool enable)
+ 	return mlx5_set_port_fcs(mdev, !enable);
+ }
+ 
++static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
++{
++	u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {};
++	bool supported, curr_state;
++	int err;
++
++	if (!MLX5_CAP_GEN(mdev, ports_check))
++		return 0;
++
++	err = mlx5_query_ports_check(mdev, in, sizeof(in));
++	if (err)
++		return err;
++
++	supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap);
++	curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc);
++
++	if (!supported || enable == curr_state)
++		return 0;
++
++	MLX5_SET(pcmr_reg, in, local_port, 1);
++	MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable);
++
++	return mlx5_set_ports_check(mdev, in, sizeof(in));
++}
++
+ static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
+ {
+ 	struct mlx5e_priv *priv = netdev_priv(netdev);
++	struct mlx5e_channels *chs = &priv->channels;
++	struct mlx5_core_dev *mdev = priv->mdev;
+ 	int err;
+ 
+ 	mutex_lock(&priv->state_lock);
+ 
+-	priv->channels.params.scatter_fcs_en = enable;
+-	err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable);
+-	if (err)
+-		priv->channels.params.scatter_fcs_en = !enable;
++	if (enable) {
++		err = mlx5e_set_rx_port_ts(mdev, false);
++		if (err)
++			goto out;
+ 
+-	mutex_unlock(&priv->state_lock);
++		chs->params.scatter_fcs_en = true;
++		err = mlx5e_modify_channels_scatter_fcs(chs, true);
++		if (err) {
++			chs->params.scatter_fcs_en = false;
++			mlx5e_set_rx_port_ts(mdev, true);
++		}
++	} else {
++		chs->params.scatter_fcs_en = false;
++		err = mlx5e_modify_channels_scatter_fcs(chs, false);
++		if (err) {
++			chs->params.scatter_fcs_en = true;
++			goto out;
++		}
++		err = mlx5e_set_rx_port_ts(mdev, true);
++		if (err) {
++			mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
++			err = 0;
++		}
++	}
+ 
++out:
++	mutex_unlock(&priv->state_lock);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index bec1d344481cd..8b757d790f560 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -611,7 +611,6 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev,
+ 	netdev->hw_features    |= NETIF_F_RXCSUM;
+ 
+ 	netdev->features |= netdev->hw_features;
+-	netdev->features |= NETIF_F_VLAN_CHALLENGED;
+ 	netdev->features |= NETIF_F_NETNS_LOCAL;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index 0998dcc9cac04..b29824448aa85 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -24,16 +24,8 @@
+ #define MLXSW_THERMAL_ZONE_MAX_NAME	16
+ #define MLXSW_THERMAL_TEMP_SCORE_MAX	GENMASK(31, 0)
+ #define MLXSW_THERMAL_MAX_STATE	10
++#define MLXSW_THERMAL_MIN_STATE	2
+ #define MLXSW_THERMAL_MAX_DUTY	255
+-/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
+- * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for
+- * setting fan speed dynamic minimum. For example, if value is set to 14 (40%)
+- * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to
+- * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100.
+- */
+-#define MLXSW_THERMAL_SPEED_MIN		(MLXSW_THERMAL_MAX_STATE + 2)
+-#define MLXSW_THERMAL_SPEED_MAX		(MLXSW_THERMAL_MAX_STATE * 2)
+-#define MLXSW_THERMAL_SPEED_MIN_LEVEL	2		/* 20% */
+ 
+ /* External cooling devices, allowed for binding to mlxsw thermal zones. */
+ static char * const mlxsw_thermal_external_allowed_cdev[] = {
+@@ -646,49 +638,16 @@ static int mlxsw_thermal_set_cur_state(struct thermal_cooling_device *cdev,
+ 	struct mlxsw_thermal *thermal = cdev->devdata;
+ 	struct device *dev = thermal->bus_info->dev;
+ 	char mfsc_pl[MLXSW_REG_MFSC_LEN];
+-	unsigned long cur_state, i;
+ 	int idx;
+-	u8 duty;
+ 	int err;
+ 
++	if (state > MLXSW_THERMAL_MAX_STATE)
++		return -EINVAL;
++
+ 	idx = mlxsw_get_cooling_device_idx(thermal, cdev);
+ 	if (idx < 0)
+ 		return idx;
+ 
+-	/* Verify if this request is for changing allowed fan dynamical
+-	 * minimum. If it is - update cooling levels accordingly and update
+-	 * state, if current state is below the newly requested minimum state.
+-	 * For example, if current state is 5, and minimal state is to be
+-	 * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed
+-	 * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be
+-	 * overwritten.
+-	 */
+-	if (state >= MLXSW_THERMAL_SPEED_MIN &&
+-	    state <= MLXSW_THERMAL_SPEED_MAX) {
+-		state -= MLXSW_THERMAL_MAX_STATE;
+-		for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
+-			thermal->cooling_levels[i] = max(state, i);
+-
+-		mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0);
+-		err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
+-		if (err)
+-			return err;
+-
+-		duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl);
+-		cur_state = mlxsw_duty_to_state(duty);
+-
+-		/* If current fan state is lower than requested dynamical
+-		 * minimum, increase fan speed up to dynamical minimum.
+-		 */
+-		if (state < cur_state)
+-			return 0;
+-
+-		state = cur_state;
+-	}
+-
+-	if (state > MLXSW_THERMAL_MAX_STATE)
+-		return -EINVAL;
+-
+ 	/* Normalize the state to the valid speed range. */
+ 	state = thermal->cooling_levels[state];
+ 	mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state));
+@@ -998,8 +957,7 @@ int mlxsw_thermal_init(struct mlxsw_core *core,
+ 
+ 	/* Initialize cooling levels per PWM state. */
+ 	for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++)
+-		thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL,
+-						 i);
++		thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i);
+ 
+ 	thermal->polling_delay = bus_info->low_frequency ?
+ 				 MLXSW_THERMAL_SLOW_POLL_INT :
+diff --git a/drivers/net/ethernet/microchip/encx24j600-regmap.c b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+index 796e46a539269..81a8ccca7e5e0 100644
+--- a/drivers/net/ethernet/microchip/encx24j600-regmap.c
++++ b/drivers/net/ethernet/microchip/encx24j600-regmap.c
+@@ -497,13 +497,19 @@ static struct regmap_bus phymap_encx24j600 = {
+ 	.reg_read = regmap_encx24j600_phy_reg_read,
+ };
+ 
+-void devm_regmap_init_encx24j600(struct device *dev,
+-				 struct encx24j600_context *ctx)
++int devm_regmap_init_encx24j600(struct device *dev,
++				struct encx24j600_context *ctx)
+ {
+ 	mutex_init(&ctx->mutex);
+ 	regcfg.lock_arg = ctx;
+ 	ctx->regmap = devm_regmap_init(dev, &regmap_encx24j600, ctx, &regcfg);
++	if (IS_ERR(ctx->regmap))
++		return PTR_ERR(ctx->regmap);
+ 	ctx->phymap = devm_regmap_init(dev, &phymap_encx24j600, ctx, &phycfg);
++	if (IS_ERR(ctx->phymap))
++		return PTR_ERR(ctx->phymap);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(devm_regmap_init_encx24j600);
+ 
+diff --git a/drivers/net/ethernet/microchip/encx24j600.c b/drivers/net/ethernet/microchip/encx24j600.c
+index ee921a99e439a..0bc6b3176fbf0 100644
+--- a/drivers/net/ethernet/microchip/encx24j600.c
++++ b/drivers/net/ethernet/microchip/encx24j600.c
+@@ -1023,10 +1023,13 @@ static int encx24j600_spi_probe(struct spi_device *spi)
+ 	priv->speed = SPEED_100;
+ 
+ 	priv->ctx.spi = spi;
+-	devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
+ 	ndev->irq = spi->irq;
+ 	ndev->netdev_ops = &encx24j600_netdev_ops;
+ 
++	ret = devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
++	if (ret)
++		goto out_free;
++
+ 	mutex_init(&priv->lock);
+ 
+ 	/* Reset device and check if it is connected */
+diff --git a/drivers/net/ethernet/microchip/encx24j600_hw.h b/drivers/net/ethernet/microchip/encx24j600_hw.h
+index fac61a8fbd020..34c5a289898c9 100644
+--- a/drivers/net/ethernet/microchip/encx24j600_hw.h
++++ b/drivers/net/ethernet/microchip/encx24j600_hw.h
+@@ -15,8 +15,8 @@ struct encx24j600_context {
+ 	int bank;
+ };
+ 
+-void devm_regmap_init_encx24j600(struct device *dev,
+-				 struct encx24j600_context *ctx);
++int devm_regmap_init_encx24j600(struct device *dev,
++				struct encx24j600_context *ctx);
+ 
+ /* Single-byte instructions */
+ #define BANK_SELECT(bank) (0xC0 | ((bank & (BANK_MASK >> BANK_SHIFT)) << 1))
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 512dff9551669..acfbe94b52918 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -536,20 +536,36 @@ void ocelot_port_disable(struct ocelot *ocelot, int port)
+ }
+ EXPORT_SYMBOL(ocelot_port_disable);
+ 
+-static void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
+-					 struct sk_buff *clone)
++static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
++					struct sk_buff *clone)
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	unsigned long flags;
+ 
+-	spin_lock(&ocelot_port->ts_id_lock);
++	spin_lock_irqsave(&ocelot->ts_id_lock, flags);
++
++	if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID ||
++	    ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
++		spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
++		return -EBUSY;
++	}
+ 
+ 	skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+ 	/* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
+ 	OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id;
+-	ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
++
++	ocelot_port->ts_id++;
++	if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID)
++		ocelot_port->ts_id = 0;
++
++	ocelot_port->ptp_skbs_in_flight++;
++	ocelot->ptp_skbs_in_flight++;
++
+ 	skb_queue_tail(&ocelot_port->tx_skbs, clone);
+ 
+-	spin_unlock(&ocelot_port->ts_id_lock);
++	spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
++
++	return 0;
+ }
+ 
+ u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+@@ -569,16 +585,12 @@ u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+ }
+ EXPORT_SYMBOL(ocelot_ptp_rew_op);
+ 
+-static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb)
++static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb,
++				       unsigned int ptp_class)
+ {
+ 	struct ptp_header *hdr;
+-	unsigned int ptp_class;
+ 	u8 msgtype, twostep;
+ 
+-	ptp_class = ptp_classify_raw(skb);
+-	if (ptp_class == PTP_CLASS_NONE)
+-		return false;
+-
+ 	hdr = ptp_parse_header(skb, ptp_class);
+ 	if (!hdr)
+ 		return false;
+@@ -598,10 +610,20 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
+ 	u8 ptp_cmd = ocelot_port->ptp_cmd;
++	unsigned int ptp_class;
++	int err;
++
++	/* Don't do anything if PTP timestamping not enabled */
++	if (!ptp_cmd)
++		return 0;
++
++	ptp_class = ptp_classify_raw(skb);
++	if (ptp_class == PTP_CLASS_NONE)
++		return -EINVAL;
+ 
+ 	/* Store ptp_cmd in OCELOT_SKB_CB(skb)->ptp_cmd */
+ 	if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
+-		if (ocelot_ptp_is_onestep_sync(skb)) {
++		if (ocelot_ptp_is_onestep_sync(skb, ptp_class)) {
+ 			OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
+ 			return 0;
+ 		}
+@@ -615,8 +637,12 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
+ 		if (!(*clone))
+ 			return -ENOMEM;
+ 
+-		ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
++		err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
++		if (err)
++			return err;
++
+ 		OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
++		OCELOT_SKB_CB(*clone)->ptp_class = ptp_class;
+ 	}
+ 
+ 	return 0;
+@@ -650,6 +676,17 @@ static void ocelot_get_hwtimestamp(struct ocelot *ocelot,
+ 	spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
+ }
+ 
++static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid)
++{
++	struct ptp_header *hdr;
++
++	hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class);
++	if (WARN_ON(!hdr))
++		return false;
++
++	return seqid == ntohs(hdr->sequence_id);
++}
++
+ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ {
+ 	int budget = OCELOT_PTP_QUEUE_SZ;
+@@ -657,10 +694,10 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ 	while (budget--) {
+ 		struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
+ 		struct skb_shared_hwtstamps shhwtstamps;
++		u32 val, id, seqid, txport;
+ 		struct ocelot_port *port;
+ 		struct timespec64 ts;
+ 		unsigned long flags;
+-		u32 val, id, txport;
+ 
+ 		val = ocelot_read(ocelot, SYS_PTP_STATUS);
+ 
+@@ -673,10 +710,17 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ 		/* Retrieve the ts ID and Tx port */
+ 		id = SYS_PTP_STATUS_PTP_MESS_ID_X(val);
+ 		txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val);
++		seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val);
+ 
+-		/* Retrieve its associated skb */
+ 		port = ocelot->ports[txport];
+ 
++		spin_lock(&ocelot->ts_id_lock);
++		port->ptp_skbs_in_flight--;
++		ocelot->ptp_skbs_in_flight--;
++		spin_unlock(&ocelot->ts_id_lock);
++
++		/* Retrieve its associated skb */
++try_again:
+ 		spin_lock_irqsave(&port->tx_skbs.lock, flags);
+ 
+ 		skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) {
+@@ -689,12 +733,20 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ 
+ 		spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+ 
++		if (WARN_ON(!skb_match))
++			continue;
++
++		if (!ocelot_validate_ptp_skb(skb_match, seqid)) {
++			dev_err_ratelimited(ocelot->dev,
++					    "port %d received stale TX timestamp for seqid %d, discarding\n",
++					    txport, seqid);
++			dev_kfree_skb_any(skb);
++			goto try_again;
++		}
++
+ 		/* Get the h/w timestamp */
+ 		ocelot_get_hwtimestamp(ocelot, &ts);
+ 
+-		if (unlikely(!skb_match))
+-			continue;
+-
+ 		/* Set the timestamp into the skb */
+ 		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ 		shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+@@ -1915,7 +1967,6 @@ void ocelot_init_port(struct ocelot *ocelot, int port)
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
+ 
+ 	skb_queue_head_init(&ocelot_port->tx_skbs);
+-	spin_lock_init(&ocelot_port->ts_id_lock);
+ 
+ 	/* Basic L2 initialization */
+ 
+@@ -2039,6 +2090,7 @@ int ocelot_init(struct ocelot *ocelot)
+ 	mutex_init(&ocelot->stats_lock);
+ 	mutex_init(&ocelot->ptp_lock);
+ 	spin_lock_init(&ocelot->ptp_clock_lock);
++	spin_lock_init(&ocelot->ts_id_lock);
+ 	snprintf(queue_name, sizeof(queue_name), "%s-stats",
+ 		 dev_name(ocelot->dev));
+ 	ocelot->stats_queue = create_singlethread_workqueue(queue_name);
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index 0b017d4f5c085..a988ed360185b 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -8566,7 +8566,7 @@ static void s2io_io_resume(struct pci_dev *pdev)
+ 			return;
+ 		}
+ 
+-		if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
++		if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
+ 			s2io_card_down(sp);
+ 			pr_err("Can't restore mac addr after reset.\n");
+ 			return;
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
+index c029950a81e20..ac1dcfa1d1790 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
+@@ -830,10 +830,6 @@ static int nfp_flower_init(struct nfp_app *app)
+ 	if (err)
+ 		goto err_cleanup;
+ 
+-	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
+-	if (err)
+-		goto err_cleanup;
+-
+ 	if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
+ 		nfp_flower_qos_init(app);
+ 
+@@ -942,7 +938,20 @@ static int nfp_flower_start(struct nfp_app *app)
+ 			return err;
+ 	}
+ 
+-	return nfp_tunnel_config_start(app);
++	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
++	if (err)
++		return err;
++
++	err = nfp_tunnel_config_start(app);
++	if (err)
++		goto err_tunnel_config;
++
++	return 0;
++
++err_tunnel_config:
++	flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
++				 nfp_flower_setup_indr_tc_release);
++	return err;
+ }
+ 
+ static void nfp_flower_stop(struct nfp_app *app)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index e795fa63ca12e..14429d2900f2f 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1357,6 +1357,10 @@ static int ionic_addr_add(struct net_device *netdev, const u8 *addr)
+ 
+ static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
+ {
++	/* Don't delete our own address from the uc list */
++	if (ether_addr_equal(addr, netdev->dev_addr))
++		return 0;
++
+ 	return ionic_lif_addr(netdev_priv(netdev), addr, DEL_ADDR);
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 6bb9ec98a12b5..41bc31e3f9356 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -1295,6 +1295,7 @@ static int qed_slowpath_start(struct qed_dev *cdev,
+ 			} else {
+ 				DP_NOTICE(cdev,
+ 					  "Failed to acquire PTT for aRFS\n");
++				rc = -EINVAL;
+ 				goto err;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+index 90383abafa66a..f5581db0ba9ba 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+@@ -218,11 +218,18 @@ static void dwmac1000_dump_dma_regs(void __iomem *ioaddr, u32 *reg_space)
+ 				readl(ioaddr + DMA_BUS_MODE + i * 4);
+ }
+ 
+-static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
+-				     struct dma_features *dma_cap)
++static int dwmac1000_get_hw_feature(void __iomem *ioaddr,
++				    struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
+ 
++	if (!hw_cap) {
++		/* 0x00000000 is the value read on old hardware that does not
++		 * implement this register
++		 */
++		return -EOPNOTSUPP;
++	}
++
+ 	dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
+ 	dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
+ 	dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
+@@ -252,6 +259,8 @@ static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
+ 	/* Alternate (enhanced) DESC mode */
+ 	dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
++
++	return 0;
+ }
+ 
+ static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index 5be8e6a631d9b..d99fa028c6468 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -347,8 +347,8 @@ static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
+ 	writel(mtl_tx_op, ioaddr +  MTL_CHAN_TX_OP_MODE(channel));
+ }
+ 
+-static void dwmac4_get_hw_feature(void __iomem *ioaddr,
+-				  struct dma_features *dma_cap)
++static int dwmac4_get_hw_feature(void __iomem *ioaddr,
++				 struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
+ 
+@@ -437,6 +437,8 @@ static void dwmac4_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11;
+ 	dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10;
+ 	dma_cap->dvlan = (hw_cap & GMAC_HW_FEAT_DVLAN) >> 5;
++
++	return 0;
+ }
+ 
+ /* Enable/disable TSO feature and set MSS */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index 906e985441a93..5e98355f422b3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -371,8 +371,8 @@ static int dwxgmac2_dma_interrupt(void __iomem *ioaddr,
+ 	return ret;
+ }
+ 
+-static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+-				    struct dma_features *dma_cap)
++static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
++				   struct dma_features *dma_cap)
+ {
+ 	u32 hw_cap;
+ 
+@@ -445,6 +445,8 @@ static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->frpes = (hw_cap & XGMAC_HWFEAT_FRPES) >> 11;
+ 	dma_cap->frpbs = (hw_cap & XGMAC_HWFEAT_FRPPB) >> 9;
+ 	dma_cap->frpsel = (hw_cap & XGMAC_HWFEAT_FRPSEL) >> 3;
++
++	return 0;
+ }
+ 
+ static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 queue)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index 6dc1c98ebec82..fe2660d5694d7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -203,8 +203,8 @@ struct stmmac_dma_ops {
+ 	int (*dma_interrupt) (void __iomem *ioaddr,
+ 			      struct stmmac_extra_stats *x, u32 chan, u32 dir);
+ 	/* If supported then get the optional core features */
+-	void (*get_hw_feature)(void __iomem *ioaddr,
+-			       struct dma_features *dma_cap);
++	int (*get_hw_feature)(void __iomem *ioaddr,
++			      struct dma_features *dma_cap);
+ 	/* Program the HW RX Watchdog */
+ 	void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt, u32 queue);
+ 	void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len, u32 chan);
+@@ -255,7 +255,7 @@ struct stmmac_dma_ops {
+ #define stmmac_dma_interrupt_status(__priv, __args...) \
+ 	stmmac_do_callback(__priv, dma, dma_interrupt, __args)
+ #define stmmac_get_hw_feature(__priv, __args...) \
+-	stmmac_do_void_callback(__priv, dma, get_hw_feature, __args)
++	stmmac_do_callback(__priv, dma, get_hw_feature, __args)
+ #define stmmac_rx_watchdog(__priv, __args...) \
+ 	stmmac_do_void_callback(__priv, dma, rx_watchdog, __args)
+ #define stmmac_set_tx_ring_len(__priv, __args...) \
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 5d5f9a9ee768a..787462310aae3 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3112,6 +3112,9 @@ static void phy_shutdown(struct device *dev)
+ {
+ 	struct phy_device *phydev = to_phy_device(dev);
+ 
++	if (phydev->state == PHY_READY || !phydev->attached_dev)
++		return;
++
+ 	phy_disable_interrupts(phydev);
+ }
+ 
+diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
+index 4c5d69732a7e1..f87f175033731 100644
+--- a/drivers/net/usb/Kconfig
++++ b/drivers/net/usb/Kconfig
+@@ -99,6 +99,10 @@ config USB_RTL8150
+ config USB_RTL8152
+ 	tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
+ 	select MII
++	select CRC32
++	select CRYPTO
++	select CRYPTO_HASH
++	select CRYPTO_SHA256
+ 	help
+ 	  This option adds support for Realtek RTL8152 based USB 2.0
+ 	  10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4f22fbafe964f..1f41619018c16 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1331,7 +1331,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
+ 	iod->aborted = 1;
+ 
+ 	cmd.abort.opcode = nvme_admin_abort_cmd;
+-	cmd.abort.cid = req->tag;
++	cmd.abort.cid = nvme_cid(req);
+ 	cmd.abort.sqid = cpu_to_le16(nvmeq->qid);
+ 
+ 	dev_warn(nvmeq->dev->ctrl.device,
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 3d87fadaa160d..8976da38b375a 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1383,7 +1383,8 @@ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf)
+ 		*p-- = 0;
+ 
+ 	/* clear msb bits if any leftover in the last byte */
+-	*p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0);
++	if (cell->nbits % BITS_PER_BYTE)
++		*p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0);
+ }
+ 
+ static int __nvmem_cell_read(struct nvmem_device *nvmem,
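
Side note on the nvmem fix above: the kernel's GENMASK(h, l) builds a mask covering bits l..h, so when cell->nbits is an exact multiple of BITS_PER_BYTE the old code evaluated GENMASK(-1, 0), a shift by a negative amount. A minimal userspace sketch of the guarded masking (GENMASK8 is a local stand-in for the kernel macro, not the real thing):

#include <stdint.h>
#include <stdio.h>

#define BITS_PER_BYTE 8
/* local stand-in for the kernel's GENMASK(h, l), restricted to 8-bit values */
#define GENMASK8(h, l) \
	(((0xffu) >> (7 - (h))) & ((0xffu) << (l)))

static void clear_msb_leftover(uint8_t *last_byte, unsigned int nbits)
{
	/* Only mask when the value does not end on a byte boundary;
	 * otherwise nbits % 8 == 0 and GENMASK8(-1, 0) would be undefined. */
	if (nbits % BITS_PER_BYTE)
		*last_byte &= GENMASK8((nbits % BITS_PER_BYTE) - 1, 0);
}

int main(void)
{
	uint8_t b = 0xff;

	clear_msb_leftover(&b, 12);	/* 12 bits: keep low 4 bits of last byte */
	printf("0x%02x\n", b);		/* prints 0x0f */
	return 0;
}
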
+diff --git a/drivers/platform/mellanox/mlxreg-io.c b/drivers/platform/mellanox/mlxreg-io.c
+index 7646708d57e42..a916cd89cbbed 100644
+--- a/drivers/platform/mellanox/mlxreg-io.c
++++ b/drivers/platform/mellanox/mlxreg-io.c
+@@ -98,7 +98,7 @@ mlxreg_io_get_reg(void *regmap, struct mlxreg_core_data *data, u32 in_val,
+ 			if (ret)
+ 				goto access_error;
+ 
+-			*regval |= rol32(val, regsize * i);
++			*regval |= rol32(val, regsize * i * 8);
+ 		}
+ 	}
+ 
+@@ -141,7 +141,7 @@ mlxreg_io_attr_store(struct device *dev, struct device_attribute *attr,
+ 		return -EINVAL;
+ 
+ 	/* Convert buffer to input value. */
+-	ret = kstrtou32(buf, len, &input_val);
++	ret = kstrtou32(buf, 0, &input_val);
+ 	if (ret)
+ 		return ret;
+ 
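
The mlxreg-io change above multiplies the rotate count by 8 because regsize counts bytes while rol32() rotates by bits. A self-contained illustration (rol32 reimplemented locally; the 1-byte component register width is an assumption for the example):

#include <stdint.h>
#include <stdio.h>

static uint32_t rol32(uint32_t word, unsigned int shift)
{
	/* masking with 31 avoids the undefined 32-bit shift when shift == 0 */
	return (word << (shift & 31)) | (word >> ((32 - shift) & 31));
}

int main(void)
{
	uint32_t regval = 0;
	uint32_t bytes[4] = { 0x11, 0x22, 0x33, 0x44 };
	unsigned int regsize = 1;	/* width of one component register, in bytes */

	/* Rotate by bits (regsize * i * 8), not bytes (regsize * i): the old
	 * code only rotated by 0..3 bits and mangled the assembled value. */
	for (unsigned int i = 0; i < 4; i++)
		regval |= rol32(bytes[i], regsize * i * 8);

	printf("0x%08x\n", (unsigned int)regval);	/* 0x44332211 */
	return 0;
}
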
+diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
+index d6a7c896ac866..fc95620101e85 100644
+--- a/drivers/platform/x86/amd-pmc.c
++++ b/drivers/platform/x86/amd-pmc.c
+@@ -476,6 +476,7 @@ static const struct acpi_device_id amd_pmc_acpi_ids[] = {
+ 	{"AMDI0006", 0},
+ 	{"AMDI0007", 0},
+ 	{"AMD0004", 0},
++	{"AMD0005", 0},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(acpi, amd_pmc_acpi_ids);
+diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c
+index d53634c8a6e09..658bab4b79648 100644
+--- a/drivers/platform/x86/gigabyte-wmi.c
++++ b/drivers/platform/x86/gigabyte-wmi.c
+@@ -141,6 +141,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev)
+ 
+ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
++	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index 9171a46a9e3fe..25b98b12439f1 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -232,7 +232,7 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
+ /* Wait till scu status is busy */
+ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ {
+-	unsigned long end = jiffies + msecs_to_jiffies(IPC_TIMEOUT);
++	unsigned long end = jiffies + IPC_TIMEOUT;
+ 
+ 	do {
+ 		u32 status;
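
The intel_scu_ipc fix above drops a double unit conversion: IPC_TIMEOUT is already a jiffies value in the driver (on the order of 10 * HZ, if memory serves), so feeding it through msecs_to_jiffies() treated jiffies as milliseconds and, on any HZ other than 1000, shrank the intended budget. A sketch of the arithmetic with a local msecs_to_jiffies() stand-in:

#include <stdio.h>

#define HZ 250
#define IPC_TIMEOUT (10 * HZ)	/* illustrative: a timeout already in jiffies */

/* userspace stand-in for the kernel's msecs_to_jiffies() */
static unsigned long msecs_to_jiffies(unsigned long msecs)
{
	return msecs * HZ / 1000;
}

int main(void)
{
	/* The double conversion treats a jiffies count as milliseconds. */
	printf("intended: %d jiffies (10.0 s)\n", IPC_TIMEOUT);
	printf("buggy:    %lu jiffies (%.1f s)\n",
	       msecs_to_jiffies(IPC_TIMEOUT),
	       (double)msecs_to_jiffies(IPC_TIMEOUT) / HZ);
	return 0;
}
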
+diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
+index 788dcdf25f003..f872cf196c2f3 100644
+--- a/drivers/spi/spi-atmel.c
++++ b/drivers/spi/spi-atmel.c
+@@ -1301,7 +1301,7 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 	 * DMA map early, for performance (empties dcache ASAP) and
+ 	 * better fault reporting.
+ 	 */
+-	if ((!master->cur_msg_mapped)
++	if ((!master->cur_msg->is_dma_mapped)
+ 		&& as->use_pdc) {
+ 		if (atmel_spi_dma_map_xfer(as, xfer) < 0)
+ 			return -ENOMEM;
+@@ -1381,7 +1381,7 @@ static int atmel_spi_one_transfer(struct spi_master *master,
+ 		}
+ 	}
+ 
+-	if (!master->cur_msg_mapped
++	if (!master->cur_msg->is_dma_mapped
+ 		&& as->use_pdc)
+ 		atmel_spi_dma_unmap_xfer(master, xfer);
+ 
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index a78e56f566dd8..3043677ba2226 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1250,10 +1250,14 @@ static void bcm_qspi_hw_init(struct bcm_qspi *qspi)
+ 
+ static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi)
+ {
++	u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS);
++
+ 	bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0);
+ 	if (has_bspi(qspi))
+ 		bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0);
+ 
++	/* clear interrupt */
++	bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1);
+ }
+ 
+ static const struct spi_controller_mem_ops bcm_qspi_mem_ops = {
+@@ -1397,6 +1401,47 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 	if (!qspi->dev_ids)
+ 		return -ENOMEM;
+ 
++	/*
++	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
++	 * in specific ways
++	 */
++	if (soc_intc) {
++		qspi->soc_intc = soc_intc;
++		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
++	} else {
++		qspi->soc_intc = NULL;
++	}
++
++	if (qspi->clk) {
++		ret = clk_prepare_enable(qspi->clk);
++		if (ret) {
++			dev_err(dev, "failed to prepare clock\n");
++			goto qspi_probe_err;
++		}
++		qspi->base_clk = clk_get_rate(qspi->clk);
++	} else {
++		qspi->base_clk = MSPI_BASE_FREQ;
++	}
++
++	if (data->has_mspi_rev) {
++		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
++		/* some older revs do not have a MSPI_REV register */
++		if ((rev & 0xff) == 0xff)
++			rev = 0;
++	}
++
++	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
++	qspi->mspi_min_rev = rev & 0xf;
++	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
++
++	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
++
++	/*
++	 * On SW resets it is possible to have the mask still enabled
++	 * Need to disable the mask and clear the status while we init
++	 */
++	bcm_qspi_hw_uninit(qspi);
++
+ 	for (val = 0; val < num_irqs; val++) {
+ 		irq = -1;
+ 		name = qspi_irq_tab[val].irq_name;
+@@ -1433,38 +1478,6 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 		goto qspi_probe_err;
+ 	}
+ 
+-	/*
+-	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
+-	 * in specific ways
+-	 */
+-	if (soc_intc) {
+-		qspi->soc_intc = soc_intc;
+-		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
+-	} else {
+-		qspi->soc_intc = NULL;
+-	}
+-
+-	ret = clk_prepare_enable(qspi->clk);
+-	if (ret) {
+-		dev_err(dev, "failed to prepare clock\n");
+-		goto qspi_probe_err;
+-	}
+-
+-	qspi->base_clk = clk_get_rate(qspi->clk);
+-
+-	if (data->has_mspi_rev) {
+-		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
+-		/* some older revs do not have a MSPI_REV register */
+-		if ((rev & 0xff) == 0xff)
+-			rev = 0;
+-	}
+-
+-	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
+-	qspi->mspi_min_rev = rev & 0xf;
+-	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
+-
+-	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
+-
+ 	bcm_qspi_hw_init(qspi);
+ 	init_completion(&qspi->mspi_done);
+ 	init_completion(&qspi->bspi_done);
+diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
+index 24e9469ea35bb..515466c60f77f 100644
+--- a/drivers/spi/spidev.c
++++ b/drivers/spi/spidev.c
+@@ -673,6 +673,19 @@ static const struct file_operations spidev_fops = {
+ 
+ static struct class *spidev_class;
+ 
++static const struct spi_device_id spidev_spi_ids[] = {
++	{ .name = "dh2228fv" },
++	{ .name = "ltc2488" },
++	{ .name = "sx1301" },
++	{ .name = "bk4" },
++	{ .name = "dhcom-board" },
++	{ .name = "m53cpld" },
++	{ .name = "spi-petra" },
++	{ .name = "spi-authenta" },
++	{},
++};
++MODULE_DEVICE_TABLE(spi, spidev_spi_ids);
++
+ #ifdef CONFIG_OF
+ static const struct of_device_id spidev_dt_ids[] = {
+ 	{ .compatible = "rohm,dh2228fv" },
+@@ -819,6 +832,7 @@ static struct spi_driver spidev_spi_driver = {
+ 	},
+ 	.probe =	spidev_probe,
+ 	.remove =	spidev_remove,
++	.id_table =	spidev_spi_ids,
+ 
+ 	/* NOTE:  suspend/resume methods are not necessary here.
+ 	 * We don't do anything except pass the requests to/from
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index 5ce13b099d7dc..5363ebebfc357 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -585,6 +585,9 @@ static int optee_remove(struct platform_device *pdev)
+ {
+ 	struct optee *optee = platform_get_drvdata(pdev);
+ 
++	/* Unregister OP-TEE specific client devices on TEE bus */
++	optee_unregister_devices();
++
+ 	/*
+ 	 * Ask OP-TEE to free all cached shared memory objects to decrease
+ 	 * reference counters and also avoid wild pointers in secure world
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index ec1d24693ebaa..128a2d2a50a16 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -53,6 +53,13 @@ static int get_devices(struct tee_context *ctx, u32 session,
+ 	return 0;
+ }
+ 
++static void optee_release_device(struct device *dev)
++{
++	struct tee_client_device *optee_device = to_tee_client_device(dev);
++
++	kfree(optee_device);
++}
++
+ static int optee_register_device(const uuid_t *device_uuid)
+ {
+ 	struct tee_client_device *optee_device = NULL;
+@@ -63,6 +70,7 @@ static int optee_register_device(const uuid_t *device_uuid)
+ 		return -ENOMEM;
+ 
+ 	optee_device->dev.bus = &tee_bus_type;
++	optee_device->dev.release = optee_release_device;
+ 	if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) {
+ 		kfree(optee_device);
+ 		return -ENOMEM;
+@@ -154,3 +162,17 @@ int optee_enumerate_devices(u32 func)
+ {
+ 	return  __optee_enumerate_devices(func);
+ }
++
++static int __optee_unregister_device(struct device *dev, void *data)
++{
++	if (!strncmp(dev_name(dev), "optee-ta", strlen("optee-ta")))
++		device_unregister(dev);
++
++	return 0;
++}
++
++void optee_unregister_devices(void)
++{
++	bus_for_each_dev(&tee_bus_type, NULL, NULL,
++			 __optee_unregister_device);
++}
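
For readers unfamiliar with the pattern in the optee change above: a struct device is refcounted, and whoever embeds it must supply a release() callback that frees the container once the last reference drops; unregistering a device without one leaks the allocation. A toy userspace model of the mechanism (container_of and put_device are simplified stand-ins, not the kernel API):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* minimal model: the bus core drops the last reference, then calls release() */
struct device {
	void (*release)(struct device *dev);
	int refcnt;
};

struct tee_client_device {
	int id;
	struct device dev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void optee_release_device(struct device *dev)
{
	struct tee_client_device *tcd =
		container_of(dev, struct tee_client_device, dev);

	printf("releasing device %d\n", tcd->id);
	free(tcd);	/* free the container, not just the embedded device */
}

static void put_device(struct device *dev)
{
	if (--dev->refcnt == 0 && dev->release)
		dev->release(dev);
}

int main(void)
{
	struct tee_client_device *tcd = malloc(sizeof(*tcd));

	if (!tcd)
		return 1;
	tcd->id = 42;
	tcd->dev.release = optee_release_device;
	tcd->dev.refcnt = 1;
	put_device(&tcd->dev);	/* last put triggers release, which frees */
	return 0;
}
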
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index dbdd367be1568..f6bb4a763ba94 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -184,6 +184,7 @@ void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages,
+ #define PTA_CMD_GET_DEVICES		0x0
+ #define PTA_CMD_GET_DEVICES_SUPP	0x1
+ int optee_enumerate_devices(u32 func);
++void optee_unregister_devices(void);
+ 
+ /*
+  * Small helpers
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index bef104511352c..509b597e30edb 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -408,40 +408,38 @@ static int xhci_dbc_tty_register_device(struct xhci_dbc *dbc)
+ 		return -EBUSY;
+ 
+ 	xhci_dbc_tty_init_port(dbc, port);
+-	tty_dev = tty_port_register_device(&port->port,
+-					   dbc_tty_driver, 0, NULL);
+-	if (IS_ERR(tty_dev)) {
+-		ret = PTR_ERR(tty_dev);
+-		goto register_fail;
+-	}
+ 
+ 	ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
+ 	if (ret)
+-		goto buf_alloc_fail;
++		goto err_exit_port;
+ 
+ 	ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool,
+ 				      dbc_read_complete);
+ 	if (ret)
+-		goto request_fail;
++		goto err_free_fifo;
+ 
+ 	ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool,
+ 				      dbc_write_complete);
+ 	if (ret)
+-		goto request_fail;
++		goto err_free_requests;
++
++	tty_dev = tty_port_register_device(&port->port,
++					   dbc_tty_driver, 0, NULL);
++	if (IS_ERR(tty_dev)) {
++		ret = PTR_ERR(tty_dev);
++		goto err_free_requests;
++	}
+ 
+ 	port->registered = true;
+ 
+ 	return 0;
+ 
+-request_fail:
++err_free_requests:
+ 	xhci_dbc_free_requests(&port->read_pool);
+ 	xhci_dbc_free_requests(&port->write_pool);
++err_free_fifo:
+ 	kfifo_free(&port->write_fifo);
+-
+-buf_alloc_fail:
+-	tty_unregister_device(dbc_tty_driver, 0);
+-
+-register_fail:
++err_exit_port:
+ 	xhci_dbc_tty_exit_port(port);
+ 
+ 	dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
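
The xhci-dbgtty rework above is the classic "publish last" cleanup-ordering fix: the tty device used to be registered first, so every later failure had to tear down a device userspace may already have opened. Allocating all resources first and registering last lets each error label undo exactly what succeeded, in reverse order. A schematic of that goto-unwind shape (resource names here are placeholders):

#include <stdio.h>
#include <stdlib.h>

static int setup(void)
{
	void *fifo, *read_pool, *write_pool;
	int ret = -1;

	fifo = malloc(16);
	if (!fifo)
		goto err_exit;
	read_pool = malloc(16);
	if (!read_pool)
		goto err_free_fifo;
	write_pool = malloc(16);
	if (!write_pool)
		goto err_free_read;

	/* Publishing (tty_port_register_device in the driver) happens last,
	 * so no error path ever has to unregister a visible device. */
	printf("registered\n");
	return 0;

err_free_read:
	free(read_pool);
err_free_fifo:
	free(fifo);
err_exit:
	return ret;
}

int main(void)
{
	return setup() ? 1 : 0;
}
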
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 1c9a7957c45c5..003c5f0a8760f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -30,6 +30,7 @@
+ #define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009	0x1009
++#define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100	0x1100
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400	0x1400
+ 
+ #define PCI_VENDOR_ID_ETRON		0x1b6f
+@@ -113,6 +114,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	/* Look for vendor-specific quirks */
+ 	if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
+ 			(pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
++			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 ||
+ 			 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
+ 		if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+ 				pdev->revision == 0x0) {
+@@ -279,8 +281,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 			pdev->device == 0x3432)
+ 		xhci->quirks |= XHCI_BROKEN_STREAMS;
+ 
+-	if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483)
++	if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) {
+ 		xhci->quirks |= XHCI_LPM_SUPPORT;
++		xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS;
++	}
+ 
+ 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ 		pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 9017986241f51..8fedf1bf292ba 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -366,16 +366,22 @@ static void xhci_handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ /* Must be called with xhci->lock held, releases and aquires lock back */
+ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
+ {
+-	u64 temp_64;
++	u32 temp_32;
+ 	int ret;
+ 
+ 	xhci_dbg(xhci, "Abort command ring\n");
+ 
+ 	reinit_completion(&xhci->cmd_ring_stop_completion);
+ 
+-	temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+-	xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+-			&xhci->op_regs->cmd_ring);
++	/*
++	 * The control bits like command stop, abort are located in lower
++	 * dword of the command ring control register. Limit the write
++	 * to the lower dword to avoid corrupting the command ring pointer
++	 * in case if the command ring is stopped by the time upper dword
++	 * is written.
++	 */
++	temp_32 = readl(&xhci->op_regs->cmd_ring);
++	writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);
+ 
+ 	/* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
+ 	 * completion of the Command Abort operation. If CRR is not negated in 5
+@@ -559,8 +565,11 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci,
+ 	struct xhci_ring *ep_ring;
+ 	struct xhci_command *cmd;
+ 	struct xhci_segment *new_seg;
++	struct xhci_segment *halted_seg = NULL;
+ 	union xhci_trb *new_deq;
+ 	int new_cycle;
++	union xhci_trb *halted_trb;
++	int index = 0;
+ 	dma_addr_t addr;
+ 	u64 hw_dequeue;
+ 	bool cycle_found = false;
+@@ -598,7 +607,27 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci,
+ 	hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
+ 	new_seg = ep_ring->deq_seg;
+ 	new_deq = ep_ring->dequeue;
+-	new_cycle = hw_dequeue & 0x1;
++
++	/*
++	 * Quirk: xHC write-back of the DCS field in the hardware dequeue
++	 * pointer is wrong - use the cycle state of the TRB pointed to by
++	 * the dequeue pointer.
++	 */
++	if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&
++	    !(ep->ep_state & EP_HAS_STREAMS))
++		halted_seg = trb_in_td(xhci, td->start_seg,
++				       td->first_trb, td->last_trb,
++				       hw_dequeue & ~0xf, false);
++	if (halted_seg) {
++		index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /
++			 sizeof(*halted_trb);
++		halted_trb = &halted_seg->trbs[index];
++		new_cycle = halted_trb->generic.field[3] & 0x1;
++		xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",
++			 (u8)(hw_dequeue & 0x1), index, new_cycle);
++	} else {
++		new_cycle = hw_dequeue & 0x1;
++	}
+ 
+ 	/*
+ 	 * We want to find the pointer, segment and cycle state of the new trb
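
The comment in the abort hunk above is worth unpacking: the command ring control register packs control bits in the low dword and ring-pointer bits higher up, so a 64-bit read-modify-write can write back a stale pointer if the controller updates it between the read and the write. Restricting the RMW to the low 32 bits sidesteps that window. A rough userspace model (assumes a little-endian host so dword[0] aliases the low half; real MMIO would use readl/writel):

#include <stdint.h>
#include <stdio.h>

#define CMD_RING_ABORT (1u << 2)	/* illustrative bit position */

union cmd_ring_reg {
	uint64_t full;
	uint32_t dword[2];	/* dword[0] is the low half on little-endian */
};

int main(void)
{
	union cmd_ring_reg reg = { .full = 0x00000001234ab000ull };

	/* Touch only the low dword: the pointer bits in the high half are
	 * never written back, so a concurrent hardware update of the ring
	 * pointer cannot be overwritten with a stale snapshot. */
	reg.dword[0] |= CMD_RING_ABORT;

	printf("cmd_ring = 0x%016llx\n", (unsigned long long)reg.full);
	return 0;
}
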
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 4a1346e3de1b2..cb730683f898f 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3212,10 +3212,13 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ 		return;
+ 
+ 	/* Bail out if toggle is already being cleared by a endpoint reset */
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {
+ 		ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;
++		spin_unlock_irqrestore(&xhci->lock, flags);
+ 		return;
+ 	}
++	spin_unlock_irqrestore(&xhci->lock, flags);
+ 	/* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */
+ 	if (usb_endpoint_xfer_control(&host_ep->desc) ||
+ 	    usb_endpoint_xfer_isoc(&host_ep->desc))
+@@ -3301,8 +3304,10 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
+ 	xhci_free_command(xhci, cfg_cmd);
+ cleanup:
+ 	xhci_free_command(xhci, stop_cmd);
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
+ 		ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
++	spin_unlock_irqrestore(&xhci->lock, flags);
+ }
+ 
+ static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index dca6181c33fdb..5a75fe5631238 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1899,6 +1899,7 @@ struct xhci_hcd {
+ #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+ #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
+ #define XHCI_BROKEN_D3COLD	BIT_ULL(41)
++#define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
+ 
+ 	unsigned int		num_active_eps;
+ 	unsigned int		limit_active_eps;
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index ce9fc46c92661..b5935834f9d24 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -899,11 +899,13 @@ static int dsps_probe(struct platform_device *pdev)
+ 	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
+ 		ret = dsps_setup_optional_vbus_irq(pdev, glue);
+ 		if (ret)
+-			goto err;
++			goto unregister_pdev;
+ 	}
+ 
+ 	return 0;
+ 
++unregister_pdev:
++	platform_device_unregister(glue->musb);
+ err:
+ 	pm_runtime_disable(&pdev->dev);
+ 	iounmap(glue->usbss_base);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 6cfb5d33609fb..a484ff5e4ebf8 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -246,11 +246,13 @@ static void option_instat_callback(struct urb *urb);
+ /* These Quectel products use Quectel's vendor ID */
+ #define QUECTEL_PRODUCT_EC21			0x0121
+ #define QUECTEL_PRODUCT_EC25			0x0125
++#define QUECTEL_PRODUCT_EG91			0x0191
+ #define QUECTEL_PRODUCT_EG95			0x0195
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
++#define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ 
+ #define CMOTECH_VENDOR_ID			0x16d8
+@@ -1111,6 +1113,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0xff, 0xff),
++	  .driver_info = NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
+@@ -1128,6 +1133,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+ 	  .driver_info = ZLP },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ 
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+@@ -1227,6 +1233,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff),	/* Telit LE910Cx (RNDIS) */
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1204, 0xff),	/* Telit LE910Cx (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 83da8236e3c8b..c18bf8164bc2e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -165,6 +165,7 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x1199, 0x907b)},	/* Sierra Wireless EM74xx */
+ 	{DEVICE_SWI(0x1199, 0x9090)},	/* Sierra Wireless EM7565 QDL */
+ 	{DEVICE_SWI(0x1199, 0x9091)},	/* Sierra Wireless EM7565 */
++	{DEVICE_SWI(0x1199, 0x90d2)},	/* Sierra Wireless EM9191 QDL */
+ 	{DEVICE_SWI(0x413c, 0x81a2)},	/* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a3)},	/* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ 	{DEVICE_SWI(0x413c, 0x81a4)},	/* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 9479f7f792173..55ec22f4ef901 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -316,7 +316,7 @@ static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
+ 	struct eventfd_ctx *ctx;
+ 
+ 	cb.callback = vhost_vdpa_config_cb;
+-	cb.private = v->vdpa;
++	cb.private = v;
+ 	if (copy_from_user(&fd, argp, sizeof(fd)))
+ 		return  -EFAULT;
+ 
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 49984d2cba246..581871e707eb2 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -238,6 +238,17 @@ static int virtio_dev_probe(struct device *_d)
+ 		driver_features_legacy = driver_features;
+ 	}
+ 
++	/*
++	 * Some devices detect legacy solely via F_VERSION_1. Write
++	 * F_VERSION_1 to force LE config space accesses before FEATURES_OK for
++	 * these when needed.
++	 */
++	if (drv->validate && !virtio_legacy_is_little_endian()
++			  && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) {
++		dev->features = BIT_ULL(VIRTIO_F_VERSION_1);
++		dev->config->finalize_features(dev);
++	}
++
+ 	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
+ 		dev->features = driver_features & device_features;
+ 	else
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 268ce58d45697..5fec2706490ca 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4859,6 +4859,7 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
+ out_free_delayed:
+ 	btrfs_free_delayed_extent_op(extent_op);
+ out_free_buf:
++	btrfs_tree_unlock(buf);
+ 	free_extent_buffer(buf);
+ out_free_reserved:
+ 	btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 0);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index ee34497500e16..ba44039071e5b 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -733,8 +733,7 @@ int btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ 	if (args->start >= inode->disk_i_size && !args->replace_extent)
+ 		modify_tree = 0;
+ 
+-	update_refs = (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
+-		       root == fs_info->tree_root);
++	update_refs = (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID);
+ 	while (1) {
+ 		recow = 0;
+ 		ret = btrfs_lookup_file_extent(trans, root, path, ino,
+@@ -2692,14 +2691,16 @@ int btrfs_replace_file_extents(struct btrfs_inode *inode,
+ 						 drop_args.bytes_found);
+ 		if (ret != -ENOSPC) {
+ 			/*
+-			 * When cloning we want to avoid transaction aborts when
+-			 * nothing was done and we are attempting to clone parts
+-			 * of inline extents, in such cases -EOPNOTSUPP is
+-			 * returned by __btrfs_drop_extents() without having
+-			 * changed anything in the file.
++			 * The only time we don't want to abort is if we are
++			 * attempting to clone a partial inline extent, in which
++			 * case we'll get EOPNOTSUPP.  However if we aren't
++			 * clone we need to abort no matter what, because if we
++			 * got EOPNOTSUPP via prealloc then we messed up and
++			 * need to abort.
+ 			 */
+-			if (extent_info && !extent_info->is_new_extent &&
+-			    ret && ret != -EOPNOTSUPP)
++			if (ret &&
++			    (ret != -EOPNOTSUPP ||
++			     (extent_info && extent_info->is_new_extent)))
+ 				btrfs_abort_transaction(trans, ret);
+ 			break;
+ 		}
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7037e5855d2a8..17f0de5bb8733 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1182,7 +1182,10 @@ next:
+ 	/* look for a conflicting sequence number */
+ 	di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
+ 					 ref_index, name, namelen, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		if (PTR_ERR(di) != -ENOENT)
++			return PTR_ERR(di);
++	} else if (di) {
+ 		ret = drop_one_dir_item(trans, root, path, dir, di);
+ 		if (ret)
+ 			return ret;
+@@ -1192,7 +1195,9 @@ next:
+ 	/* look for a conflicting name */
+ 	di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir),
+ 				   name, namelen, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		return PTR_ERR(di);
++	} else if (di) {
+ 		ret = drop_one_dir_item(trans, root, path, dir, di);
+ 		if (ret)
+ 			return ret;
+@@ -1936,8 +1941,8 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 	struct btrfs_key log_key;
+ 	struct inode *dir;
+ 	u8 log_type;
+-	int exists;
+-	int ret = 0;
++	bool exists;
++	int ret;
+ 	bool update_size = (key->type == BTRFS_DIR_INDEX_KEY);
+ 	bool name_added = false;
+ 
+@@ -1957,12 +1962,12 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 		   name_len);
+ 
+ 	btrfs_dir_item_key_to_cpu(eb, di, &log_key);
+-	exists = btrfs_lookup_inode(trans, root, path, &log_key, 0);
+-	if (exists == 0)
+-		exists = 1;
+-	else
+-		exists = 0;
++	ret = btrfs_lookup_inode(trans, root, path, &log_key, 0);
+ 	btrfs_release_path(path);
++	if (ret < 0)
++		goto out;
++	exists = (ret == 0);
++	ret = 0;
+ 
+ 	if (key->type == BTRFS_DIR_ITEM_KEY) {
+ 		dst_di = btrfs_lookup_dir_item(trans, root, path, key->objectid,
+@@ -1977,7 +1982,14 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+-	if (IS_ERR_OR_NULL(dst_di)) {
++
++	if (dst_di == ERR_PTR(-ENOENT))
++		dst_di = NULL;
++
++	if (IS_ERR(dst_di)) {
++		ret = PTR_ERR(dst_di);
++		goto out;
++	} else if (!dst_di) {
+ 		/* we need a sequence number to insert, so we only
+ 		 * do inserts for the BTRFS_DIR_INDEX_KEY types
+ 		 */
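
The tree-log changes above all harden one recurring idiom: btrfs_lookup_dir_item() and friends can return a valid pointer, NULL, or an ERR_PTR-encoded errno, and the old `if (di && !IS_ERR(di))` quietly discarded the error case. A userspace model of the three-way handling (the ERR_PTR helpers are reimplemented here for the example):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline bool IS_ERR(const void *p)
{
	/* errno values occupy the top page of the address space */
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

static void *lookup_dir_item(int scenario)
{
	switch (scenario) {
	case 0:  return "item";		/* found */
	case 1:  return NULL;		/* not found: NOT an error */
	default: return ERR_PTR(-EIO);	/* real failure */
	}
}

int main(void)
{
	for (int s = 0; s < 3; s++) {
		void *di = lookup_dir_item(s);

		/* Handle all three outcomes explicitly, as the patch does. */
		if (IS_ERR(di))
			printf("scenario %d: error %ld\n", s, PTR_ERR(di));
		else if (di)
			printf("scenario %d: found, drop conflicting item\n", s);
		else
			printf("scenario %d: no conflict\n", s);
	}
	return 0;
}
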
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 6bbae0c3bc0b9..681bea6716dde 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -9467,16 +9467,22 @@ struct mlx5_ifc_pcmr_reg_bits {
+ 	u8         reserved_at_0[0x8];
+ 	u8         local_port[0x8];
+ 	u8         reserved_at_10[0x10];
++
+ 	u8         entropy_force_cap[0x1];
+ 	u8         entropy_calc_cap[0x1];
+ 	u8         entropy_gre_calc_cap[0x1];
+-	u8         reserved_at_23[0x1b];
++	u8         reserved_at_23[0xf];
++	u8         rx_ts_over_crc_cap[0x1];
++	u8         reserved_at_33[0xb];
+ 	u8         fcs_cap[0x1];
+ 	u8         reserved_at_3f[0x1];
++
+ 	u8         entropy_force[0x1];
+ 	u8         entropy_calc[0x1];
+ 	u8         entropy_gre_calc[0x1];
+-	u8         reserved_at_43[0x1b];
++	u8         reserved_at_43[0xf];
++	u8         rx_ts_over_crc[0x1];
++	u8         reserved_at_53[0xb];
+ 	u8         fcs_chk[0x1];
+ 	u8         reserved_at_5f[0x1];
+ };
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 2f5ce4d4fdbff..4984093882372 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -600,10 +600,10 @@ struct ocelot_port {
+ 	/* The VLAN ID that will be transmitted as untagged, on egress */
+ 	struct ocelot_vlan		native_vlan;
+ 
++	unsigned int			ptp_skbs_in_flight;
+ 	u8				ptp_cmd;
+ 	struct sk_buff_head		tx_skbs;
+ 	u8				ts_id;
+-	spinlock_t			ts_id_lock;
+ 
+ 	phy_interface_t			phy_mode;
+ 
+@@ -677,6 +677,9 @@ struct ocelot {
+ 	struct ptp_clock		*ptp_clock;
+ 	struct ptp_clock_info		ptp_info;
+ 	struct hwtstamp_config		hwtstamp_config;
++	unsigned int			ptp_skbs_in_flight;
++	/* Protects the 2-step TX timestamp ID logic */
++	spinlock_t			ts_id_lock;
+ 	/* Protects the PTP interface state */
+ 	struct mutex			ptp_lock;
+ 	/* Protects the PTP clock */
+@@ -691,6 +694,7 @@ struct ocelot_policer {
+ 
+ struct ocelot_skb_cb {
+ 	struct sk_buff *clone;
++	unsigned int ptp_class; /* valid only for clones */
+ 	u8 ptp_cmd;
+ 	u8 ts_id;
+ };
+diff --git a/include/soc/mscc/ocelot_ptp.h b/include/soc/mscc/ocelot_ptp.h
+index ded497d72bdbb..f085884b1fa27 100644
+--- a/include/soc/mscc/ocelot_ptp.h
++++ b/include/soc/mscc/ocelot_ptp.h
+@@ -13,6 +13,9 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <soc/mscc/ocelot.h>
+ 
++#define OCELOT_MAX_PTP_ID		63
++#define OCELOT_PTP_FIFO_SIZE		128
++
+ #define PTP_PIN_CFG_RSZ			0x20
+ #define PTP_PIN_TOD_SEC_MSB_RSZ		PTP_PIN_CFG_RSZ
+ #define PTP_PIN_TOD_SEC_LSB_RSZ		PTP_PIN_CFG_RSZ
+diff --git a/kernel/module.c b/kernel/module.c
+index ed13917ea5f39..d184be24b9eac 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4484,8 +4484,10 @@ static void cfi_init(struct module *mod)
+ 	/* Fix init/exit functions to point to the CFI jump table */
+ 	if (init)
+ 		mod->init = *init;
++#ifdef CONFIG_MODULE_UNLOAD
+ 	if (exit)
+ 		mod->exit = *exit;
++#endif
+ 
+ 	cfi_module_add(mod, module_addr_min);
+ #endif
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index a1adb29ef5c18..f4aa00a58f3c6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1744,16 +1744,15 @@ void latency_fsnotify(struct trace_array *tr)
+ 	irq_work_queue(&tr->fsnotify_irqwork);
+ }
+ 
+-/*
+- * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
+- *  defined(CONFIG_FSNOTIFY)
+- */
+-#else
++#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)	\
++	|| defined(CONFIG_OSNOISE_TRACER)
+ 
+ #define trace_create_maxlat_file(tr, d_tracer)				\
+ 	trace_create_file("tracing_max_latency", 0644, d_tracer,	\
+ 			  &tr->max_latency, &tracing_max_lat_fops)
+ 
++#else
++#define trace_create_maxlat_file(tr, d_tracer)	 do { } while (0)
+ #endif
+ 
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -9457,9 +9456,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ 
+ 	create_trace_options_dir(tr);
+ 
+-#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
+ 	trace_create_maxlat_file(tr, d_tracer);
+-#endif
+ 
+ 	if (ftrace_create_function_files(tr, d_tracer))
+ 		MEM_FAIL(1, "Could not allocate function filter files");
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index 5ece05dfd8f2c..9ef9125713321 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -148,7 +148,7 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
+ 		if (extack._msg)
+ 			dev_err(ds->dev, "port %d: %s\n", info->port,
+ 				extack._msg);
+-		if (err && err != EOPNOTSUPP)
++		if (err && err != -EOPNOTSUPP)
+ 			return err;
+ 	}
+ 	return 0;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 4d2abdd3cd3b1..e7ea062876f7c 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -515,7 +515,6 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 
+ 		sk->sk_shutdown |= RCV_SHUTDOWN;
+ 		smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+-		set_bit(MPTCP_DATA_READY, &msk->flags);
+ 
+ 		switch (sk->sk_state) {
+ 		case TCP_ESTABLISHED:
+@@ -730,10 +729,9 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
+ 
+ 	/* Wake-up the reader only for in-sequence data */
+ 	mptcp_data_lock(sk);
+-	if (move_skbs_to_msk(msk, ssk)) {
+-		set_bit(MPTCP_DATA_READY, &msk->flags);
++	if (move_skbs_to_msk(msk, ssk))
+ 		sk->sk_data_ready(sk);
+-	}
++
+ 	mptcp_data_unlock(sk);
+ }
+ 
+@@ -838,7 +836,6 @@ static void mptcp_check_for_eof(struct mptcp_sock *msk)
+ 		sk->sk_shutdown |= RCV_SHUTDOWN;
+ 
+ 		smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+-		set_bit(MPTCP_DATA_READY, &msk->flags);
+ 		sk->sk_data_ready(sk);
+ 	}
+ 
+@@ -1701,21 +1698,6 @@ out:
+ 	return copied ? : ret;
+ }
+ 
+-static void mptcp_wait_data(struct sock *sk, long *timeo)
+-{
+-	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+-	struct mptcp_sock *msk = mptcp_sk(sk);
+-
+-	add_wait_queue(sk_sleep(sk), &wait);
+-	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-
+-	sk_wait_event(sk, timeo,
+-		      test_bit(MPTCP_DATA_READY, &msk->flags), &wait);
+-
+-	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+-	remove_wait_queue(sk_sleep(sk), &wait);
+-}
+-
+ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ 				struct msghdr *msg,
+ 				size_t len, int flags,
+@@ -2019,19 +2001,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		}
+ 
+ 		pr_debug("block timeout %ld", timeo);
+-		mptcp_wait_data(sk, &timeo);
+-	}
+-
+-	if (skb_queue_empty_lockless(&sk->sk_receive_queue) &&
+-	    skb_queue_empty(&msk->receive_queue)) {
+-		/* entire backlog drained, clear DATA_READY. */
+-		clear_bit(MPTCP_DATA_READY, &msk->flags);
+-
+-		/* .. race-breaker: ssk might have gotten new data
+-		 * after last __mptcp_move_skbs() returned false.
+-		 */
+-		if (unlikely(__mptcp_move_skbs(msk)))
+-			set_bit(MPTCP_DATA_READY, &msk->flags);
++		sk_wait_data(sk, &timeo, NULL);
+ 	}
+ 
+ out_err:
+@@ -2040,9 +2010,9 @@ out_err:
+ 			tcp_recv_timestamp(msg, sk, &tss);
+ 	}
+ 
+-	pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d",
+-		 msk, test_bit(MPTCP_DATA_READY, &msk->flags),
+-		 skb_queue_empty_lockless(&sk->sk_receive_queue), copied);
++	pr_debug("msk=%p rx queue empty=%d:%d copied=%d",
++		 msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
++		 skb_queue_empty(&msk->receive_queue), copied);
+ 	if (!(flags & MSG_PEEK))
+ 		mptcp_rcv_space_adjust(msk, copied);
+ 
+@@ -2255,7 +2225,6 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
+ 	inet_sk_state_store(sk, TCP_CLOSE);
+ 	sk->sk_shutdown = SHUTDOWN_MASK;
+ 	smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+-	set_bit(MPTCP_DATA_READY, &msk->flags);
+ 	set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);
+ 
+ 	mptcp_close_wake_up(sk);
+@@ -3272,8 +3241,14 @@ unlock_fail:
+ 
+ static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
+ {
+-	return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM :
+-	       0;
++	/* Concurrent splices from sk_receive_queue into receive_queue will
++	 * always show at least one non-empty queue when checked in this order.
++	 */
++	if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
++	    skb_queue_empty_lockless(&msk->receive_queue))
++		return 0;
++
++	return EPOLLIN | EPOLLRDNORM;
+ }
+ 
+ static __poll_t mptcp_check_writeable(struct mptcp_sock *msk)
+@@ -3308,7 +3283,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
+ 	state = inet_sk_state_load(sk);
+ 	pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
+ 	if (state == TCP_LISTEN)
+-		return mptcp_check_readable(msk);
++		return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM : 0;
+ 
+ 	if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {
+ 		mask |= mptcp_check_readable(msk);
+diff --git a/net/nfc/af_nfc.c b/net/nfc/af_nfc.c
+index 4a9e72073564a..581358dcbdf8d 100644
+--- a/net/nfc/af_nfc.c
++++ b/net/nfc/af_nfc.c
+@@ -60,6 +60,9 @@ int nfc_proto_register(const struct nfc_protocol *nfc_proto)
+ 		proto_tab[nfc_proto->id] = nfc_proto;
+ 	write_unlock(&proto_tab_lock);
+ 
++	if (rc)
++		proto_unregister(nfc_proto->proto);
++
+ 	return rc;
+ }
+ EXPORT_SYMBOL(nfc_proto_register);
+diff --git a/net/nfc/digital_core.c b/net/nfc/digital_core.c
+index 5044c7db577eb..c8fc9fcbb0a72 100644
+--- a/net/nfc/digital_core.c
++++ b/net/nfc/digital_core.c
+@@ -277,6 +277,7 @@ int digital_tg_configure_hw(struct nfc_digital_dev *ddev, int type, int param)
+ static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
+ {
+ 	struct digital_tg_mdaa_params *params;
++	int rc;
+ 
+ 	params = kzalloc(sizeof(*params), GFP_KERNEL);
+ 	if (!params)
+@@ -291,8 +292,12 @@ static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
+ 	get_random_bytes(params->nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);
+ 	params->sc = DIGITAL_SENSF_FELICA_SC;
+ 
+-	return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
+-				500, digital_tg_recv_atr_req, NULL);
++	rc = digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
++			      500, digital_tg_recv_atr_req, NULL);
++	if (rc)
++		kfree(params);
++
++	return rc;
+ }
+ 
+ static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
+diff --git a/net/nfc/digital_technology.c b/net/nfc/digital_technology.c
+index 84d2345c75a3f..3adf4589852af 100644
+--- a/net/nfc/digital_technology.c
++++ b/net/nfc/digital_technology.c
+@@ -465,8 +465,12 @@ static int digital_in_send_sdd_req(struct nfc_digital_dev *ddev,
+ 	skb_put_u8(skb, sel_cmd);
+ 	skb_put_u8(skb, DIGITAL_SDD_REQ_SEL_PAR);
+ 
+-	return digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
+-				   target);
++	rc = digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
++				 target);
++	if (rc)
++		kfree_skb(skb);
++
++	return rc;
+ }
+ 
+ static void digital_in_recv_sens_res(struct nfc_digital_dev *ddev, void *arg,
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 8766ab5b87880..5eb3b1b7ae5e7 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -529,22 +529,28 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ 		for (i = tc.offset; i < tc.offset + tc.count; i++) {
+ 			struct netdev_queue *q = netdev_get_tx_queue(dev, i);
+ 			struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
+-			struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
+-			struct gnet_stats_queue __percpu *cpu_qstats = NULL;
+ 
+ 			spin_lock_bh(qdisc_lock(qdisc));
++
+ 			if (qdisc_is_percpu_stats(qdisc)) {
+-				cpu_bstats = qdisc->cpu_bstats;
+-				cpu_qstats = qdisc->cpu_qstats;
++				qlen = qdisc_qlen_sum(qdisc);
++
++				__gnet_stats_copy_basic(NULL, &bstats,
++							qdisc->cpu_bstats,
++							&qdisc->bstats);
++				__gnet_stats_copy_queue(&qstats,
++							qdisc->cpu_qstats,
++							&qdisc->qstats,
++							qlen);
++			} else {
++				qlen		+= qdisc->q.qlen;
++				bstats.bytes	+= qdisc->bstats.bytes;
++				bstats.packets	+= qdisc->bstats.packets;
++				qstats.backlog	+= qdisc->qstats.backlog;
++				qstats.drops	+= qdisc->qstats.drops;
++				qstats.requeues	+= qdisc->qstats.requeues;
++				qstats.overlimits += qdisc->qstats.overlimits;
+ 			}
+-
+-			qlen = qdisc_qlen_sum(qdisc);
+-			__gnet_stats_copy_basic(NULL, &sch->bstats,
+-						cpu_bstats, &qdisc->bstats);
+-			__gnet_stats_copy_queue(&sch->qstats,
+-						cpu_qstats,
+-						&qdisc->qstats,
+-						qlen);
+ 			spin_unlock_bh(qdisc_lock(qdisc));
+ 		}
+ 
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index b8fa8f1a72770..c7503fd649159 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -3697,7 +3697,7 @@ struct sctp_chunk *sctp_make_strreset_req(
+ 	outlen = (sizeof(outreq) + stream_len) * out;
+ 	inlen = (sizeof(inreq) + stream_len) * in;
+ 
+-	retval = sctp_make_reconf(asoc, outlen + inlen);
++	retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen));
+ 	if (!retval)
+ 		return NULL;
+ 
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index f23f558054a7c..99acd337ba90d 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -150,9 +150,11 @@ static int smcr_cdc_get_slot_and_msg_send(struct smc_connection *conn)
+ 
+ again:
+ 	link = conn->lnk;
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_cdc_get_free_slot(conn, link, &wr_buf, NULL, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 
+ 	spin_lock_bh(&conn->send_lock);
+ 	if (link != conn->lnk) {
+@@ -160,6 +162,7 @@ again:
+ 		spin_unlock_bh(&conn->send_lock);
+ 		smc_wr_tx_put_slot(link,
+ 				   (struct smc_wr_tx_pend_priv *)pend);
++		smc_wr_tx_link_put(link);
+ 		if (again)
+ 			return -ENOLINK;
+ 		again = true;
+@@ -167,6 +170,8 @@ again:
+ 	}
+ 	rc = smc_cdc_msg_send(conn, wr_buf, pend);
+ 	spin_unlock_bh(&conn->send_lock);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 116cfd6fac1ff..fa53b2146d8c0 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -949,7 +949,7 @@ struct smc_link *smc_switch_conns(struct smc_link_group *lgr,
+ 		to_lnk = &lgr->lnk[i];
+ 		break;
+ 	}
+-	if (!to_lnk) {
++	if (!to_lnk || !smc_wr_tx_link_hold(to_lnk)) {
+ 		smc_lgr_terminate_sched(lgr);
+ 		return NULL;
+ 	}
+@@ -981,24 +981,26 @@ again:
+ 		read_unlock_bh(&lgr->conns_lock);
+ 		/* pre-fetch buffer outside of send_lock, might sleep */
+ 		rc = smc_cdc_get_free_slot(conn, to_lnk, &wr_buf, NULL, &pend);
+-		if (rc) {
+-			smcr_link_down_cond_sched(to_lnk);
+-			return NULL;
+-		}
++		if (rc)
++			goto err_out;
+ 		/* avoid race with smcr_tx_sndbuf_nonempty() */
+ 		spin_lock_bh(&conn->send_lock);
+ 		smc_switch_link_and_count(conn, to_lnk);
+ 		rc = smc_switch_cursor(smc, pend, wr_buf);
+ 		spin_unlock_bh(&conn->send_lock);
+ 		sock_put(&smc->sk);
+-		if (rc) {
+-			smcr_link_down_cond_sched(to_lnk);
+-			return NULL;
+-		}
++		if (rc)
++			goto err_out;
+ 		goto again;
+ 	}
+ 	read_unlock_bh(&lgr->conns_lock);
++	smc_wr_tx_link_put(to_lnk);
+ 	return to_lnk;
++
++err_out:
++	smcr_link_down_cond_sched(to_lnk);
++	smc_wr_tx_link_put(to_lnk);
++	return NULL;
+ }
+ 
+ static void smcr_buf_unuse(struct smc_buf_desc *rmb_desc,
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 2e7560eba9812..72f4b72eb1753 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -383,9 +383,11 @@ int smc_llc_send_confirm_link(struct smc_link *link,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	confllc = (struct smc_llc_msg_confirm_link *)wr_buf;
+ 	memset(confllc, 0, sizeof(*confllc));
+ 	confllc->hd.common.type = SMC_LLC_CONFIRM_LINK;
+@@ -402,6 +404,8 @@ int smc_llc_send_confirm_link(struct smc_link *link,
+ 	confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS;
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -415,9 +419,11 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
+ 	struct smc_link *link;
+ 	int i, rc, rtok_ix;
+ 
++	if (!smc_wr_tx_link_hold(send_link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(send_link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	rkeyllc = (struct smc_llc_msg_confirm_rkey *)wr_buf;
+ 	memset(rkeyllc, 0, sizeof(*rkeyllc));
+ 	rkeyllc->hd.common.type = SMC_LLC_CONFIRM_RKEY;
+@@ -444,6 +450,8 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
+ 		(u64)sg_dma_address(rmb_desc->sgt[send_link->link_idx].sgl));
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(send_link, pend);
++put_out:
++	smc_wr_tx_link_put(send_link);
+ 	return rc;
+ }
+ 
+@@ -456,9 +464,11 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	rkeyllc = (struct smc_llc_msg_delete_rkey *)wr_buf;
+ 	memset(rkeyllc, 0, sizeof(*rkeyllc));
+ 	rkeyllc->hd.common.type = SMC_LLC_DELETE_RKEY;
+@@ -467,6 +477,8 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
+ 	rkeyllc->rkey[0] = htonl(rmb_desc->mr_rx[link->link_idx]->rkey);
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -480,9 +492,11 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	addllc = (struct smc_llc_msg_add_link *)wr_buf;
+ 
+ 	memset(addllc, 0, sizeof(*addllc));
+@@ -504,6 +518,8 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
+ 	}
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -517,9 +533,11 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	delllc = (struct smc_llc_msg_del_link *)wr_buf;
+ 
+ 	memset(delllc, 0, sizeof(*delllc));
+@@ -536,6 +554,8 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
+ 	delllc->reason = htonl(reason);
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -547,9 +567,11 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	testllc = (struct smc_llc_msg_test_link *)wr_buf;
+ 	memset(testllc, 0, sizeof(*testllc));
+ 	testllc->hd.common.type = SMC_LLC_TEST_LINK;
+@@ -557,6 +579,8 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
+ 	memcpy(testllc->user_data, user_data, sizeof(testllc->user_data));
+ 	/* send llc message */
+ 	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+@@ -567,13 +591,16 @@ static int smc_llc_send_message(struct smc_link *link, void *llcbuf)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
+-	if (!smc_link_usable(link))
++	if (!smc_wr_tx_link_hold(link))
+ 		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
+-	return smc_wr_tx_send(link, pend);
++	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ /* schedule an llc send on link, may wait for buffers,
+@@ -586,13 +613,16 @@ static int smc_llc_send_message_wait(struct smc_link *link, void *llcbuf)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
+-	if (!smc_link_usable(link))
++	if (!smc_wr_tx_link_hold(link))
+ 		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
+-	return smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
++	rc = smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ /********************************* receive ***********************************/
+@@ -672,9 +702,11 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	struct smc_buf_desc *rmb;
+ 	u8 n;
+ 
++	if (!smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
+ 	if (rc)
+-		return rc;
++		goto put_out;
+ 	addc_llc = (struct smc_llc_msg_add_link_cont *)wr_buf;
+ 	memset(addc_llc, 0, sizeof(*addc_llc));
+ 
+@@ -706,7 +738,10 @@ static int smc_llc_add_link_cont(struct smc_link *link,
+ 	addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
+ 	if (lgr->role == SMC_CLNT)
+ 		addc_llc->hd.flags |= SMC_LLC_FLAG_RESP;
+-	return smc_wr_tx_send(link, pend);
++	rc = smc_wr_tx_send(link, pend);
++put_out:
++	smc_wr_tx_link_put(link);
++	return rc;
+ }
+ 
+ static int smc_llc_cli_rkey_exchange(struct smc_link *link,
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index c79361dfcdfb9..738a4a99c8279 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -496,7 +496,7 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
+ /* Wakeup sndbuf consumers from any context (IRQ or process)
+  * since there is more data to transmit; usable snd_wnd as max transmit
+  */
+-static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+ 	struct smc_link *link = conn->lnk;
+@@ -505,8 +505,11 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ 	struct smc_wr_buf *wr_buf;
+ 	int rc;
+ 
++	if (!link || !smc_wr_tx_link_hold(link))
++		return -ENOLINK;
+ 	rc = smc_cdc_get_free_slot(conn, link, &wr_buf, &wr_rdma_buf, &pend);
+ 	if (rc < 0) {
++		smc_wr_tx_link_put(link);
+ 		if (rc == -EBUSY) {
+ 			struct smc_sock *smc =
+ 				container_of(conn, struct smc_sock, conn);
+@@ -547,22 +550,7 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ 
+ out_unlock:
+ 	spin_unlock_bh(&conn->send_lock);
+-	return rc;
+-}
+-
+-static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+-{
+-	struct smc_link *link = conn->lnk;
+-	int rc = -ENOLINK;
+-
+-	if (!link)
+-		return rc;
+-
+-	atomic_inc(&link->wr_tx_refcnt);
+-	if (smc_link_usable(link))
+-		rc = _smcr_tx_sndbuf_nonempty(conn);
+-	if (atomic_dec_and_test(&link->wr_tx_refcnt))
+-		wake_up_all(&link->wr_tx_wait);
++	smc_wr_tx_link_put(link);
+ 	return rc;
+ }
+ 
+diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h
+index 423b8709f1c9e..2bc626f230a56 100644
+--- a/net/smc/smc_wr.h
++++ b/net/smc/smc_wr.h
+@@ -60,6 +60,20 @@ static inline void smc_wr_tx_set_wr_id(atomic_long_t *wr_tx_id, long val)
+ 	atomic_long_set(wr_tx_id, val);
+ }
+ 
++static inline bool smc_wr_tx_link_hold(struct smc_link *link)
++{
++	if (!smc_link_usable(link))
++		return false;
++	atomic_inc(&link->wr_tx_refcnt);
++	return true;
++}
++
++static inline void smc_wr_tx_link_put(struct smc_link *link)
++{
++	if (atomic_dec_and_test(&link->wr_tx_refcnt))
++		wake_up_all(&link->wr_tx_wait);
++}
++
+ static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
+ {
+ 	wake_up_all(&lnk->wr_tx_wait);
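
The smc_wr_tx_link_hold()/smc_wr_tx_link_put() pair introduced above is a lightweight usage count: senders may only pin a link that is still usable, every successful hold must be balanced on all exit paths (hence the put_out: labels sprinkled through the callers), and the final put wakes anyone waiting for the link to drain. A compact userspace analogue using C11 atomics:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct link {
	atomic_int tx_refcnt;
	bool usable;
};

/* Take a reference only while the link is usable; callers that get
 * 'true' must balance with link_put() on every exit path. */
static bool link_hold(struct link *l)
{
	if (!l->usable)
		return false;
	atomic_fetch_add(&l->tx_refcnt, 1);
	return true;
}

static void link_put(struct link *l)
{
	/* the last dropper wakes waiters (wake_up_all in the kernel) */
	if (atomic_fetch_sub(&l->tx_refcnt, 1) == 1)
		printf("last reference dropped: wake waiters\n");
}

int main(void)
{
	struct link l = { .usable = true };

	atomic_init(&l.tx_refcnt, 0);
	if (!link_hold(&l))
		return 1;	/* -ENOLINK in the kernel code */
	/* ... build and send a work request ... */
	link_put(&l);
	return 0;
}
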
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index 8f6b13ae46bfc..7d631aaa0ae11 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -189,7 +189,7 @@ if ($arch =~ /(x86(_64)?)|(i386)/) {
+ $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)";
+ $weak_regex = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)";
+ $section_regex = "Disassembly of section\\s+(\\S+):";
+-$function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:";
++$function_regex = "^([0-9a-fA-F]+)\\s+<([^^]*?)>:";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$";
+ $section_type = '@progbits';
+ $mcount_adjust = 0;
+diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
+index a59de24695ec9..dfe5a64e19d2e 100644
+--- a/sound/core/pcm_compat.c
++++ b/sound/core/pcm_compat.c
+@@ -468,6 +468,76 @@ static int snd_pcm_ioctl_sync_ptr_x32(struct snd_pcm_substream *substream,
+ }
+ #endif /* CONFIG_X86_X32 */
+ 
++#ifdef __BIG_ENDIAN
++typedef char __pad_before_u32[4];
++typedef char __pad_after_u32[0];
++#else
++typedef char __pad_before_u32[0];
++typedef char __pad_after_u32[4];
++#endif
++
++/* PCM 2.0.15 API definition had a bug in mmap control; it puts the avail_min
++ * at the wrong offset due to a typo in padding type.
++ * The bug hits only 32bit.
++ * A workaround for incorrect read/write is needed only in 32bit compat mode.
++ */
++struct __snd_pcm_mmap_control64_buggy {
++	__pad_before_u32 __pad1;
++	__u32 appl_ptr;
++	__pad_before_u32 __pad2;	/* SiC! here is the bug */
++	__pad_before_u32 __pad3;
++	__u32 avail_min;
++	__pad_after_uframe __pad4;
++};
++
++static int snd_pcm_ioctl_sync_ptr_buggy(struct snd_pcm_substream *substream,
++					struct snd_pcm_sync_ptr __user *_sync_ptr)
++{
++	struct snd_pcm_runtime *runtime = substream->runtime;
++	struct snd_pcm_sync_ptr sync_ptr;
++	struct __snd_pcm_mmap_control64_buggy *sync_cp;
++	volatile struct snd_pcm_mmap_status *status;
++	volatile struct snd_pcm_mmap_control *control;
++	int err;
++
++	memset(&sync_ptr, 0, sizeof(sync_ptr));
++	sync_cp = (struct __snd_pcm_mmap_control64_buggy *)&sync_ptr.c.control;
++	if (get_user(sync_ptr.flags, (unsigned __user *)&(_sync_ptr->flags)))
++		return -EFAULT;
++	if (copy_from_user(sync_cp, &(_sync_ptr->c.control), sizeof(*sync_cp)))
++		return -EFAULT;
++	status = runtime->status;
++	control = runtime->control;
++	if (sync_ptr.flags & SNDRV_PCM_SYNC_PTR_HWSYNC) {
++		err = snd_pcm_hwsync(substream);
++		if (err < 0)
++			return err;
++	}
++	snd_pcm_stream_lock_irq(substream);
++	if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_APPL)) {
++		err = pcm_lib_apply_appl_ptr(substream, sync_cp->appl_ptr);
++		if (err < 0) {
++			snd_pcm_stream_unlock_irq(substream);
++			return err;
++		}
++	} else {
++		sync_cp->appl_ptr = control->appl_ptr;
++	}
++	if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
++		control->avail_min = sync_cp->avail_min;
++	else
++		sync_cp->avail_min = control->avail_min;
++	sync_ptr.s.status.state = status->state;
++	sync_ptr.s.status.hw_ptr = status->hw_ptr;
++	sync_ptr.s.status.tstamp = status->tstamp;
++	sync_ptr.s.status.suspended_state = status->suspended_state;
++	sync_ptr.s.status.audio_tstamp = status->audio_tstamp;
++	snd_pcm_stream_unlock_irq(substream);
++	if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr)))
++		return -EFAULT;
++	return 0;
++}
++
+ /*
+  */
+ enum {
+@@ -537,7 +607,7 @@ static long snd_pcm_ioctl_compat(struct file *file, unsigned int cmd, unsigned l
+ 		if (in_x32_syscall())
+ 			return snd_pcm_ioctl_sync_ptr_x32(substream, argp);
+ #endif /* CONFIG_X86_X32 */
+-		return snd_pcm_common_ioctl(file, substream, cmd, argp);
++		return snd_pcm_ioctl_sync_ptr_buggy(substream, argp);
+ 	case SNDRV_PCM_IOCTL_HW_REFINE32:
+ 		return snd_pcm_ioctl_hw_params_compat(substream, 1, argp);
+ 	case SNDRV_PCM_IOCTL_HW_PARAMS32:
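
To see why the snd_pcm_ioctl_sync_ptr_buggy() handler above exists: the padding typedefs are sized by endianness so a __u32 lands in the correct half of a 64-bit slot, and the 2.0.15 UAPI header used a "before" pad where an "after" pad belonged, shifting avail_min. The handler deliberately keeps the wrong layout so existing 32-bit binaries keep working. A little-endian-only sketch of the offset difference (zero-length arrays are a GNU extension, as in the kernel; the exact field names are simplified):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* little-endian sizing of the padding helpers */
typedef char pad_before_u32[0];
typedef char pad_after_u32[4];

/* intended layout: each 32-bit field padded out to a 64-bit slot */
struct control64_intended {
	pad_before_u32 pad1;
	uint32_t appl_ptr;
	pad_after_u32 pad2;	/* correct: padding AFTER appl_ptr */
	pad_before_u32 pad3;
	uint32_t avail_min;
	pad_after_u32 pad4;
};

/* buggy layout from the 2.0.15 header: pad2 used the "before" type,
 * which is zero-sized on little-endian, so avail_min moved 4 bytes down */
struct control64_buggy {
	pad_before_u32 pad1;
	uint32_t appl_ptr;
	pad_before_u32 pad2;	/* the typo the patch comment flags */
	pad_before_u32 pad3;
	uint32_t avail_min;
	pad_after_u32 pad4;
};

int main(void)
{
	printf("intended avail_min offset: %zu\n",
	       offsetof(struct control64_intended, avail_min));	/* 8 */
	printf("buggy    avail_min offset: %zu\n",
	       offsetof(struct control64_buggy, avail_min));	/* 4 */
	return 0;
}
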
+diff --git a/sound/core/seq_device.c b/sound/core/seq_device.c
+index 382275c5b1937..7f3fd8eb016fe 100644
+--- a/sound/core/seq_device.c
++++ b/sound/core/seq_device.c
+@@ -156,6 +156,8 @@ static int snd_seq_device_dev_free(struct snd_device *device)
+ 	struct snd_seq_device *dev = device->device_data;
+ 
+ 	cancel_autoload_drivers();
++	if (dev->private_free)
++		dev->private_free(dev);
+ 	put_device(&dev->dev);
+ 	return 0;
+ }
+@@ -183,11 +185,7 @@ static int snd_seq_device_dev_disconnect(struct snd_device *device)
+ 
+ static void snd_seq_dev_release(struct device *dev)
+ {
+-	struct snd_seq_device *sdev = to_seq_dev(dev);
+-
+-	if (sdev->private_free)
+-		sdev->private_free(sdev);
+-	kfree(sdev);
++	kfree(to_seq_dev(dev));
+ }
+ 
+ /*
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0b9230a274b0a..8e6ff50f0f94f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -527,6 +527,8 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ 	struct alc_spec *spec = codec->spec;
+ 
+ 	switch (codec->core.vendor_id) {
++	case 0x10ec0236:
++	case 0x10ec0256:
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+@@ -2549,7 +2551,8 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
+@@ -3540,7 +3543,8 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* If the 3k pulldown control is disabled for alc257, mic detection will not work correctly
+ 	 * when booting with a headset plugged in. So skip setting it for the codec alc257
+ 	 */
+-	if (codec->core.vendor_id != 0x10ec0257)
++	if (spec->codec_variant != ALC269_TYPE_ALC257 &&
++	    spec->codec_variant != ALC269_TYPE_ALC256)
+ 		alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (!spec->no_shutup_pins)
+@@ -6462,6 +6466,24 @@ static void alc287_fixup_legion_15imhg05_speakers(struct hda_codec *codec,
+ /* for alc285_fixup_ideapad_s740_coef() */
+ #include "ideapad_s740_helper.c"
+ 
++static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,
++							    const struct hda_fixup *fix,
++							    int action)
++{
++	/*
++	 * A certain other OS sets these coeffs to different values. On at least one TongFang
++	 * barebone these settings might survive even a cold reboot. So to restore a clean slate the
++	 * values are explicitly reset to default here. Without this, the external microphone is
++	 * always in a plugged-in state, while the internal microphone is always in an unplugged
++	 * state, breaking the ability to use the internal microphone.
++	 */
++	alc_write_coef_idx(codec, 0x24, 0x0000);
++	alc_write_coef_idx(codec, 0x26, 0x0000);
++	alc_write_coef_idx(codec, 0x29, 0x3000);
++	alc_write_coef_idx(codec, 0x37, 0xfe05);
++	alc_write_coef_idx(codec, 0x45, 0x5089);
++}
++
+ enum {
+ 	ALC269_FIXUP_GPIO2,
+ 	ALC269_FIXUP_SONY_VAIO,
+@@ -6676,7 +6698,8 @@ enum {
+ 	ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,
+ 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+-	ALC287_FIXUP_13S_GEN2_SPEAKERS
++	ALC287_FIXUP_13S_GEN2_SPEAKERS,
++	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8357,7 +8380,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.v.verbs = (const struct hda_verb[]) {
+ 			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x41 },
+-			{ 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x2 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0 },
+@@ -8374,6 +8397,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE,
+ 	},
++	[ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8465,6 +8492,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -8802,6 +8832,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ 	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ 	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
++	SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -10179,6 +10210,9 @@ enum {
+ 	ALC671_FIXUP_HP_HEADSET_MIC2,
+ 	ALC662_FIXUP_ACER_X2660G_HEADSET_MODE,
+ 	ALC662_FIXUP_ACER_NITRO_HEADSET_MODE,
++	ALC668_FIXUP_ASUS_NO_HEADSET_MIC,
++	ALC668_FIXUP_HEADSET_MIC,
++	ALC668_FIXUP_MIC_DET_COEF,
+ };
+ 
+ static const struct hda_fixup alc662_fixups[] = {
+@@ -10562,6 +10596,29 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC662_FIXUP_USI_FUNC
+ 	},
++	[ALC668_FIXUP_ASUS_NO_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1b, 0x04a1112c },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_HEADSET_MIC
++	},
++	[ALC668_FIXUP_HEADSET_MIC] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_headset_mic,
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_DET_COEF
++	},
++	[ALC668_FIXUP_MIC_DET_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x15 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x0d60 },
++			{}
++		},
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+@@ -10597,6 +10654,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+ 	SND_PCI_QUIRK(0x1043, 0x17bd, "ASUS N751", ALC668_FIXUP_ASUS_Nx51),
++	SND_PCI_QUIRK(0x1043, 0x185d, "ASUS G551JW", ALC668_FIXUP_ASUS_NO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1963, "ASUS X71SL", ALC662_FIXUP_ASUS_MODE8),
+ 	SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 3d5848d5481be..53ebabf424722 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -2450,6 +2450,8 @@ static int scarlett2_update_monitor_other(struct usb_mixer_interface *mixer)
+ 		err = scarlett2_usb_get_config(mixer,
+ 					       SCARLETT2_CONFIG_TALKBACK_MAP,
+ 					       1, &bitmap);
++		if (err < 0)
++			return err;
+ 		for (i = 0; i < num_mixes; i++, bitmap >>= 1)
+ 			private->talkback_map[i] = bitmap & 1;
+ 	}
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 19bb499c17daa..147b831e1a82d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -77,6 +77,48 @@
+ /* E-Mu 0204 USB */
+ { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },
+ 
++/*
++ * Creative Technology, Ltd Live! Cam Sync HD [VF0770]
++ * The device advertises 8 formats, but only a rate of 48kHz is honored by the
++ * hardware and 24 bits give chopped audio, so only report the one working
++ * combination.
++ */
++{
++	USB_DEVICE(0x041e, 0x4095),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = &(const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 2,
++				.type = QUIRK_AUDIO_STANDARD_MIXER,
++			},
++			{
++				.ifnum = 3,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S16_LE,
++					.channels = 2,
++					.fmt_bits = 16,
++					.iface = 3,
++					.altsetting = 4,
++					.altset_idx = 4,
++					.endpoint = 0x82,
++					.ep_attr = 0x05,
++					.rates = SNDRV_PCM_RATE_48000,
++					.rate_min = 48000,
++					.rate_max = 48000,
++					.nr_rates = 1,
++					.rate_table = (unsigned int[]) { 48000 },
++				},
++			},
++			{
++				.ifnum = -1
++			},
++		},
++	},
++},
++
+ /*
+  * HP Wireless Audio
+  * When not ignored, causes instability issues for some users, forcing them to


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-21 12:16 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-21 12:16 UTC (permalink / raw
  To: gentoo-commits

commit:     71051e9eef4cbfa065ca130cdd9573d3d30a40fa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 21 12:16:05 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 21 12:16:05 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=71051e9e

shiftfs: UID/GID shifting overlay filesystem for containers

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                     |    4 +
 5000_shiftfs-ubuntu-21.10.patch | 2249 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 2253 insertions(+)

diff --git a/0000_README b/0000_README
index 1bea116..5ebc554 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5000_shiftfs-ubuntu-21.10.patch
+From:   https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/impish
+Desc:   UID/GID shifting overlay filesystem for containers
+
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
Desc:   Kernel >= 5.8 patch enables gcc v9+ optimizations for additional CPUs.
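
Before the patch body, a minimal usage sketch of the two-step mount sequence
shiftfs is built around, written against the raw mount(2) syscall. This is
illustration only and not part of the patch: the paths are hypothetical, and
"passthrough=3" is simply SHIFTFS_PASSTHROUGH_ALL (stat | ioctl) as defined
near the top of fs/shiftfs.c below.

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* Step 1, performed by real root in the initial user
		 * namespace: mark the (hypothetical) lower directory as
		 * shiftable and allow stat/ioctl passthrough. */
		if (mount("/var/lib/lower", "/var/lib/lower", "shiftfs", 0,
			  "mark,passthrough=3") < 0) {
			perror("shiftfs mark mount");
			exit(EXIT_FAILURE);
		}

		/* Step 2, performed inside the container's user namespace:
		 * mount on top of the marked directory; shiftfs then shifts
		 * UIDs/GIDs between the lower filesystem's user namespace
		 * and the container's. */
		if (mount("/var/lib/lower", "/mnt/shifted", "shiftfs", 0,
			  "passthrough=3") < 0) {
			perror("shiftfs mount");
			exit(EXIT_FAILURE);
		}

		return 0;
	}

The "mark" option is the host-side consent step (sbinfo->mark in the code):
only a directory marked this way by a privileged task is expected to be
re-mountable from inside a container's user namespace, with the enforcement
living in the superblock setup code further down the patch.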

diff --git a/5000_shiftfs-ubuntu-21.10.patch b/5000_shiftfs-ubuntu-21.10.patch
new file mode 100644
index 0000000..279a4af
--- /dev/null
+++ b/5000_shiftfs-ubuntu-21.10.patch
@@ -0,0 +1,2249 @@
+--- /dev/null	2021-10-21 05:07:35.701085311 -0400
++++ b/fs/shiftfs.c	2021-10-21 07:46:35.494532885 -0400
+@@ -0,0 +1,2201 @@
++#include <linux/btrfs.h>
++#include <linux/capability.h>
++#include <linux/cred.h>
++#include <linux/mount.h>
++#include <linux/fdtable.h>
++#include <linux/file.h>
++#include <linux/fs.h>
++#include <linux/namei.h>
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/magic.h>
++#include <linux/parser.h>
++#include <linux/security.h>
++#include <linux/seq_file.h>
++#include <linux/statfs.h>
++#include <linux/slab.h>
++#include <linux/user_namespace.h>
++#include <linux/uidgid.h>
++#include <linux/xattr.h>
++#include <linux/posix_acl.h>
++#include <linux/posix_acl_xattr.h>
++#include <linux/uio.h>
++#include <linux/fiemap.h>
++
++struct shiftfs_super_info {
++	struct vfsmount *mnt;
++	struct user_namespace *userns;
++	/* creds of the process that created the super block */
++	const struct cred *creator_cred;
++	bool mark;
++	unsigned int passthrough;
++	unsigned int passthrough_mark;
++};
++
++static void shiftfs_fill_inode(struct inode *inode, unsigned long ino,
++			       umode_t mode, dev_t dev, struct dentry *dentry);
++
++#define SHIFTFS_PASSTHROUGH_NONE 0
++#define SHIFTFS_PASSTHROUGH_STAT 1
++#define SHIFTFS_PASSTHROUGH_IOCTL 2
++#define SHIFTFS_PASSTHROUGH_ALL                                                \
++	(SHIFTFS_PASSTHROUGH_STAT | SHIFTFS_PASSTHROUGH_IOCTL)
++
++static inline bool shiftfs_passthrough_ioctls(struct shiftfs_super_info *info)
++{
++	if (!(info->passthrough & SHIFTFS_PASSTHROUGH_IOCTL))
++		return false;
++
++	return true;
++}
++
++static inline bool shiftfs_passthrough_statfs(struct shiftfs_super_info *info)
++{
++	if (!(info->passthrough & SHIFTFS_PASSTHROUGH_STAT))
++		return false;
++
++	return true;
++}
++
++enum {
++	OPT_MARK,
++	OPT_PASSTHROUGH,
++	OPT_LAST,
++};
++
++/* global filesystem options */
++static const match_table_t tokens = {
++	{ OPT_MARK, "mark" },
++	{ OPT_PASSTHROUGH, "passthrough=%u" },
++	{ OPT_LAST, NULL }
++};
++
++static const struct cred *shiftfs_override_creds(const struct super_block *sb)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	return override_creds(sbinfo->creator_cred);
++}
++
++static inline void shiftfs_revert_object_creds(const struct cred *oldcred,
++					       struct cred *newcred)
++{
++	revert_creds(oldcred);
++	put_cred(newcred);
++}
++
++static kuid_t shift_kuid(struct user_namespace *from, struct user_namespace *to,
++			 kuid_t kuid)
++{
++	uid_t uid = from_kuid(from, kuid);
++	return make_kuid(to, uid);
++}
++
++static kgid_t shift_kgid(struct user_namespace *from, struct user_namespace *to,
++			 kgid_t kgid)
++{
++	gid_t gid = from_kgid(from, kgid);
++	return make_kgid(to, gid);
++}
++
++static int shiftfs_override_object_creds(const struct super_block *sb,
++					 const struct cred **oldcred,
++					 struct cred **newcred,
++					 struct dentry *dentry, umode_t mode,
++					 bool hardlink)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	kuid_t fsuid = current_fsuid();
++	kgid_t fsgid = current_fsgid();
++
++	*oldcred = shiftfs_override_creds(sb);
++
++	*newcred = prepare_creds();
++	if (!*newcred) {
++		revert_creds(*oldcred);
++		return -ENOMEM;
++	}
++
++	(*newcred)->fsuid = shift_kuid(sb->s_user_ns, sbinfo->userns, fsuid);
++	(*newcred)->fsgid = shift_kgid(sb->s_user_ns, sbinfo->userns, fsgid);
++
++	if (!hardlink) {
++		int err = security_dentry_create_files_as(dentry, mode,
++							  &dentry->d_name,
++							  *oldcred, *newcred);
++		if (err) {
++			shiftfs_revert_object_creds(*oldcred, *newcred);
++			return err;
++		}
++	}
++
++	put_cred(override_creds(*newcred));
++	return 0;
++}
++
++static void shiftfs_copyattr(struct inode *from, struct inode *to)
++{
++	struct user_namespace *from_ns = from->i_sb->s_user_ns;
++	struct user_namespace *to_ns = to->i_sb->s_user_ns;
++
++	to->i_uid = shift_kuid(from_ns, to_ns, from->i_uid);
++	to->i_gid = shift_kgid(from_ns, to_ns, from->i_gid);
++	to->i_mode = from->i_mode;
++	to->i_atime = from->i_atime;
++	to->i_mtime = from->i_mtime;
++	to->i_ctime = from->i_ctime;
++	i_size_write(to, i_size_read(from));
++}
++
++static void shiftfs_copyflags(struct inode *from, struct inode *to)
++{
++	unsigned int mask = S_SYNC | S_IMMUTABLE | S_APPEND | S_NOATIME;
++
++	inode_set_flags(to, from->i_flags & mask, mask);
++}
++
++static void shiftfs_file_accessed(struct file *file)
++{
++	struct inode *upperi, *loweri;
++
++	if (file->f_flags & O_NOATIME)
++		return;
++
++	upperi = file_inode(file);
++	loweri = upperi->i_private;
++
++	if (!loweri)
++		return;
++
++	upperi->i_mtime = loweri->i_mtime;
++	upperi->i_ctime = loweri->i_ctime;
++
++	touch_atime(&file->f_path);
++}
++
++static int shiftfs_parse_mount_options(struct shiftfs_super_info *sbinfo,
++				       char *options)
++{
++	char *p;
++	substring_t args[MAX_OPT_ARGS];
++
++	sbinfo->mark = false;
++	sbinfo->passthrough = 0;
++
++	while ((p = strsep(&options, ",")) != NULL) {
++		int err, intarg, token;
++
++		if (!*p)
++			continue;
++
++		token = match_token(p, tokens, args);
++		switch (token) {
++		case OPT_MARK:
++			sbinfo->mark = true;
++			break;
++		case OPT_PASSTHROUGH:
++			err = match_int(&args[0], &intarg);
++			if (err)
++				return err;
++
++			if (intarg & ~SHIFTFS_PASSTHROUGH_ALL)
++				return -EINVAL;
++
++			sbinfo->passthrough = intarg;
++			break;
++		default:
++			return -EINVAL;
++		}
++	}
++
++	return 0;
++}
++
++static void shiftfs_d_release(struct dentry *dentry)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (lowerd)
++		dput(lowerd);
++}
++
++static struct dentry *shiftfs_d_real(struct dentry *dentry,
++				     const struct inode *inode)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (inode && d_inode(dentry) == inode)
++		return dentry;
++
++	lowerd = d_real(lowerd, inode);
++	if (lowerd && (!inode || inode == d_inode(lowerd)))
++		return lowerd;
++
++	WARN(1, "shiftfs_d_real(%pd4, %s:%lu): real dentry not found\n", dentry,
++	     inode ? inode->i_sb->s_id : "NULL", inode ? inode->i_ino : 0);
++	return dentry;
++}
++
++static int shiftfs_d_weak_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	int err = 1;
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (d_is_negative(lowerd) != d_is_negative(dentry))
++		return 0;
++
++	if ((lowerd->d_flags & DCACHE_OP_WEAK_REVALIDATE))
++		err = lowerd->d_op->d_weak_revalidate(lowerd, flags);
++
++	if (d_really_is_positive(dentry)) {
++		struct inode *inode = d_inode(dentry);
++		struct inode *loweri = d_inode(lowerd);
++
++		shiftfs_copyattr(loweri, inode);
++	}
++
++	return err;
++}
++
++static int shiftfs_d_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	int err = 1;
++	struct dentry *lowerd = dentry->d_fsdata;
++
++	if (d_unhashed(lowerd) ||
++	    ((d_is_negative(lowerd) != d_is_negative(dentry))))
++		return 0;
++
++	if (flags & LOOKUP_RCU)
++		return -ECHILD;
++
++	if ((lowerd->d_flags & DCACHE_OP_REVALIDATE))
++		err = lowerd->d_op->d_revalidate(lowerd, flags);
++
++	if (d_really_is_positive(dentry)) {
++		struct inode *inode = d_inode(dentry);
++		struct inode *loweri = d_inode(lowerd);
++
++		shiftfs_copyattr(loweri, inode);
++	}
++
++	return err;
++}
++
++static const struct dentry_operations shiftfs_dentry_ops = {
++	.d_release	   = shiftfs_d_release,
++	.d_real		   = shiftfs_d_real,
++	.d_revalidate	   = shiftfs_d_revalidate,
++	.d_weak_revalidate = shiftfs_d_weak_revalidate,
++};
++
++static const char *shiftfs_get_link(struct dentry *dentry, struct inode *inode,
++				    struct delayed_call *done)
++{
++	const char *p;
++	const struct cred *oldcred;
++	struct dentry *lowerd;
++
++	/* RCU lookup not supported */
++	if (!dentry)
++		return ERR_PTR(-ECHILD);
++
++	lowerd = dentry->d_fsdata;
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	p = vfs_get_link(lowerd, done);
++	revert_creds(oldcred);
++
++	return p;
++}
++
++static int shiftfs_setxattr(struct user_namespace *mnt_ns,
++			    struct dentry *dentry, struct inode *inode,
++			    const char *name, const void *value,
++			    size_t size, int flags)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_setxattr(&init_user_ns, lowerd, name, value, size, flags);
++	revert_creds(oldcred);
++
++	shiftfs_copyattr(lowerd->d_inode, inode);
++
++	return err;
++}
++
++static int shiftfs_xattr_get(const struct xattr_handler *handler,
++			     struct dentry *dentry, struct inode *inode,
++			     const char *name, void *value, size_t size)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_getxattr(&init_user_ns, lowerd, name, value, size);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static ssize_t shiftfs_listxattr(struct dentry *dentry, char *list,
++				 size_t size)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_listxattr(lowerd, list, size);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_removexattr(struct user_namespace *mnt_ns, struct dentry *dentry, const char *name)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	int err;
++	const struct cred *oldcred;
++
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = vfs_removexattr(&init_user_ns, lowerd, name);
++	revert_creds(oldcred);
++
++	/* update c/mtime */
++	shiftfs_copyattr(lowerd->d_inode, d_inode(dentry));
++
++	return err;
++}
++
++static int shiftfs_xattr_set(const struct xattr_handler *handler,
++			     struct user_namespace *mnt_ns,
++			     struct dentry *dentry, struct inode *inode,
++			     const char *name, const void *value, size_t size,
++			     int flags)
++{
++	if (!value)
++		return shiftfs_removexattr(&init_user_ns, dentry, name);
++	return shiftfs_setxattr(&init_user_ns, dentry, inode, name, value, size,
++				flags);
++}
++
++static int shiftfs_inode_test(struct inode *inode, void *data)
++{
++	return inode->i_private == data;
++}
++
++static int shiftfs_inode_set(struct inode *inode, void *data)
++{
++	inode->i_private = data;
++	return 0;
++}
++
++static int shiftfs_create_object(struct user_namespace *mnt_ns,
++				 struct inode *diri, struct dentry *dentry,
++				 umode_t mode, const char *symlink,
++				 struct dentry *hardlink, bool excl)
++{
++	int err;
++	const struct cred *oldcred;
++	struct cred *newcred;
++	void *loweri_iop_ptr = NULL;
++	umode_t modei = mode;
++	struct super_block *dir_sb = diri->i_sb;
++	struct dentry *lowerd_new = dentry->d_fsdata;
++	struct inode *inode = NULL, *loweri_dir = diri->i_private;
++	const struct inode_operations *loweri_dir_iop = loweri_dir->i_op;
++	struct dentry *lowerd_link = NULL;
++
++	if (hardlink) {
++		loweri_iop_ptr = loweri_dir_iop->link;
++	} else {
++		switch (mode & S_IFMT) {
++		case S_IFDIR:
++			loweri_iop_ptr = loweri_dir_iop->mkdir;
++			break;
++		case S_IFREG:
++			loweri_iop_ptr = loweri_dir_iop->create;
++			break;
++		case S_IFLNK:
++			loweri_iop_ptr = loweri_dir_iop->symlink;
++			break;
++		case S_IFSOCK:
++			/* fall through */
++		case S_IFIFO:
++			loweri_iop_ptr = loweri_dir_iop->mknod;
++			break;
++		}
++	}
++	if (!loweri_iop_ptr) {
++		err = -EINVAL;
++		goto out_iput;
++	}
++
++	inode_lock_nested(loweri_dir, I_MUTEX_PARENT);
++
++	if (!hardlink) {
++		inode = new_inode(dir_sb);
++		if (!inode) {
++			err = -ENOMEM;
++			goto out_iput;
++		}
++
++		/*
++		 * new_inode() will have added the new inode to the super
++		 * block's list of inodes. Further below we will call
++		 * inode_insert5(), which would perform the same operation again,
++		 * thereby corrupting the list. To avoid this, raise I_CREATING
++		 * in i_state which will cause inode_insert5() to skip this
++		 * step. I_CREATING will be cleared by d_instantiate_new()
++		 * below.
++		 */
++		spin_lock(&inode->i_lock);
++		inode->i_state |= I_CREATING;
++		spin_unlock(&inode->i_lock);
++
++		inode_init_owner(&init_user_ns, inode, diri, mode);
++		modei = inode->i_mode;
++	}
++
++	err = shiftfs_override_object_creds(dentry->d_sb, &oldcred, &newcred,
++					    dentry, modei, hardlink != NULL);
++	if (err)
++		goto out_iput;
++
++	if (hardlink) {
++		lowerd_link = hardlink->d_fsdata;
++		err = vfs_link(lowerd_link, &init_user_ns, loweri_dir, lowerd_new, NULL);
++	} else {
++		switch (modei & S_IFMT) {
++		case S_IFDIR:
++			err = vfs_mkdir(&init_user_ns, loweri_dir, lowerd_new, modei);
++			break;
++		case S_IFREG:
++			err = vfs_create(&init_user_ns, loweri_dir, lowerd_new, modei, excl);
++			break;
++		case S_IFLNK:
++			err = vfs_symlink(&init_user_ns, loweri_dir, lowerd_new, symlink);
++			break;
++		case S_IFSOCK:
++			/* fall through */
++		case S_IFIFO:
++			err = vfs_mknod(&init_user_ns, loweri_dir, lowerd_new, modei, 0);
++			break;
++		default:
++			err = -EINVAL;
++			break;
++		}
++	}
++
++	shiftfs_revert_object_creds(oldcred, newcred);
++
++	if (!err && WARN_ON(!lowerd_new->d_inode))
++		err = -EIO;
++	if (err)
++		goto out_iput;
++
++	if (hardlink) {
++		inode = d_inode(hardlink);
++		ihold(inode);
++
++		/* copy up times from lower inode */
++		shiftfs_copyattr(d_inode(lowerd_link), inode);
++		set_nlink(d_inode(hardlink), d_inode(lowerd_link)->i_nlink);
++		d_instantiate(dentry, inode);
++	} else {
++		struct inode *inode_tmp;
++		struct inode *loweri_new = d_inode(lowerd_new);
++
++		inode_tmp = inode_insert5(inode, (unsigned long)loweri_new,
++					  shiftfs_inode_test, shiftfs_inode_set,
++					  loweri_new);
++		if (unlikely(inode_tmp != inode)) {
++			pr_err_ratelimited("shiftfs: newly created inode found in cache\n");
++			iput(inode_tmp);
++			err = -EINVAL;
++			goto out_iput;
++		}
++
++		ihold(loweri_new);
++		shiftfs_fill_inode(inode, loweri_new->i_ino, loweri_new->i_mode,
++				   0, lowerd_new);
++		d_instantiate_new(dentry, inode);
++	}
++
++	shiftfs_copyattr(loweri_dir, diri);
++	if (loweri_iop_ptr == loweri_dir_iop->mkdir)
++		set_nlink(diri, loweri_dir->i_nlink);
++
++	inode = NULL;
++
++out_iput:
++	iput(inode);
++	inode_unlock(loweri_dir);
++
++	return err;
++}
++
++static int shiftfs_create(struct user_namespace *mnt_ns, struct inode *dir,
++			  struct dentry *dentry, umode_t mode, bool excl)
++{
++	mode |= S_IFREG;
++
++	return shiftfs_create_object(&init_user_ns, dir, dentry, mode, NULL, NULL,
++				     excl);
++}
++
++static int shiftfs_mkdir(struct user_namespace *mnt_ns, struct inode *dir,
++			 struct dentry *dentry, umode_t mode)
++{
++	mode |= S_IFDIR;
++
++	return shiftfs_create_object(&init_user_ns, dir, dentry, mode, NULL, NULL,
++				     false);
++}
++
++static int shiftfs_link(struct dentry *hardlink, struct inode *dir,
++			struct dentry *dentry)
++{
++	return shiftfs_create_object(&init_user_ns, dir, dentry, 0, NULL,
++				     hardlink, false);
++}
++
++static int shiftfs_mknod(struct user_namespace *mnt_ns, struct inode *dir,
++			 struct dentry *dentry, umode_t mode, dev_t rdev)
++{
++	if (!S_ISFIFO(mode) && !S_ISSOCK(mode))
++		return -EPERM;
++
++	return shiftfs_create_object(&init_user_ns, dir, dentry, mode, NULL, NULL,
++				     false);
++}
++
++static int shiftfs_symlink(struct user_namespace *mnt_ns, struct inode *dir,
++			   struct dentry *dentry, const char *symlink)
++{
++	return shiftfs_create_object(&init_user_ns, dir, dentry, S_IFLNK, symlink, NULL,
++				     false);
++}
++
++static int shiftfs_rm(struct inode *dir, struct dentry *dentry, bool rmdir)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = dir->i_private;
++	struct inode *inode = d_inode(dentry);
++	int err;
++	const struct cred *oldcred;
++
++	dget(lowerd);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	inode_lock_nested(loweri, I_MUTEX_PARENT);
++	if (rmdir)
++		err = vfs_rmdir(&init_user_ns, loweri, lowerd);
++	else
++		err = vfs_unlink(&init_user_ns, loweri, lowerd, NULL);
++	revert_creds(oldcred);
++
++	if (!err) {
++		d_drop(dentry);
++
++		if (rmdir)
++			clear_nlink(inode);
++		else
++			drop_nlink(inode);
++	}
++	inode_unlock(loweri);
++
++	shiftfs_copyattr(loweri, dir);
++	dput(lowerd);
++
++	return err;
++}
++
++static int shiftfs_unlink(struct inode *dir, struct dentry *dentry)
++{
++	return shiftfs_rm(dir, dentry, false);
++}
++
++static int shiftfs_rmdir(struct inode *dir, struct dentry *dentry)
++{
++	return shiftfs_rm(dir, dentry, true);
++}
++
++static int shiftfs_rename(struct user_namespace *mnt_ns,
++			  struct inode *olddir, struct dentry *old,
++			  struct inode *newdir, struct dentry *new,
++			  unsigned int flags)
++{
++	struct dentry *lowerd_dir_old = old->d_parent->d_fsdata,
++		      *lowerd_dir_new = new->d_parent->d_fsdata,
++		      *lowerd_old = old->d_fsdata, *lowerd_new = new->d_fsdata,
++		      *trapd;
++	struct inode *loweri_dir_old = lowerd_dir_old->d_inode,
++		     *loweri_dir_new = lowerd_dir_new->d_inode;
++	int err = -EINVAL;
++	const struct cred *oldcred;
++	struct renamedata rd = {};
++
++	trapd = lock_rename(lowerd_dir_new, lowerd_dir_old);
++
++	if (trapd == lowerd_old || trapd == lowerd_new)
++		goto out_unlock;
++
++	oldcred = shiftfs_override_creds(old->d_sb);
++	rd.old_mnt_userns = &init_user_ns;
++	rd.old_dir = loweri_dir_old;
++	rd.old_dentry = lowerd_old;
++	rd.new_mnt_userns = &init_user_ns;
++	rd.new_dir = loweri_dir_new;
++	rd.new_dentry = lowerd_new;
++	rd.flags = flags;
++	err = vfs_rename(&rd);
++	revert_creds(oldcred);
++
++	shiftfs_copyattr(loweri_dir_old, olddir);
++	shiftfs_copyattr(loweri_dir_new, newdir);
++
++out_unlock:
++	unlock_rename(lowerd_dir_new, lowerd_dir_old);
++
++	return err;
++}
++
++static struct dentry *shiftfs_lookup(struct inode *dir, struct dentry *dentry,
++				     unsigned int flags)
++{
++	struct dentry *new;
++	struct inode *newi;
++	const struct cred *oldcred;
++	struct dentry *lowerd = dentry->d_parent->d_fsdata;
++	struct inode *inode = NULL, *loweri = lowerd->d_inode;
++
++	inode_lock(loweri);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	new = lookup_one_len(dentry->d_name.name, lowerd, dentry->d_name.len);
++	revert_creds(oldcred);
++	inode_unlock(loweri);
++
++	if (IS_ERR(new))
++		return new;
++
++	dentry->d_fsdata = new;
++
++	newi = new->d_inode;
++	if (!newi)
++		goto out;
++
++	inode = iget5_locked(dentry->d_sb, (unsigned long)newi,
++			     shiftfs_inode_test, shiftfs_inode_set, newi);
++	if (!inode) {
++		dput(new);
++		return ERR_PTR(-ENOMEM);
++	}
++	if (inode->i_state & I_NEW) {
++		/*
++		 * inode->i_private was set by shiftfs_inode_set(), but we
++		 * still need to take a reference.
++		 */
++		ihold(newi);
++		shiftfs_fill_inode(inode, newi->i_ino, newi->i_mode, 0, new);
++		unlock_new_inode(inode);
++	}
++
++out:
++	return d_splice_alias(inode, dentry);
++}
++
++static int shiftfs_permission(struct user_namespace *mnt_ns,
++			      struct inode *inode, int mask)
++{
++	int err;
++	const struct cred *oldcred;
++	struct inode *loweri = inode->i_private;
++
++	if (!loweri) {
++		WARN_ON(!(mask & MAY_NOT_BLOCK));
++		return -ECHILD;
++	}
++
++	err = generic_permission(&init_user_ns, inode, mask);
++	if (err)
++		return err;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	err = inode_permission(&init_user_ns, loweri, mask);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_fiemap(struct inode *inode,
++			  struct fiemap_extent_info *fieinfo, u64 start,
++			  u64 len)
++{
++	int err;
++	const struct cred *oldcred;
++	struct inode *loweri = inode->i_private;
++
++	if (!loweri->i_op->fiemap)
++		return -EOPNOTSUPP;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	if (fieinfo->fi_flags & FIEMAP_FLAG_SYNC)
++		filemap_write_and_wait(loweri->i_mapping);
++	err = loweri->i_op->fiemap(loweri, fieinfo, start, len);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_tmpfile(struct user_namespace *mnt_ns, struct inode *dir,
++			   struct dentry *dentry, umode_t mode)
++{
++	int err;
++	const struct cred *oldcred;
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = dir->i_private;
++
++	if (!loweri->i_op->tmpfile)
++		return -EOPNOTSUPP;
++
++	oldcred = shiftfs_override_creds(dir->i_sb);
++	err = loweri->i_op->tmpfile(&init_user_ns, loweri, lowerd, mode);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++static int shiftfs_setattr(struct user_namespace *mnt_ns,
++			   struct dentry *dentry, struct iattr *attr)
++{
++	struct dentry *lowerd = dentry->d_fsdata;
++	struct inode *loweri = lowerd->d_inode;
++	struct iattr newattr;
++	const struct cred *oldcred;
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	int err;
++
++	err = setattr_prepare(&init_user_ns, dentry, attr);
++	if (err)
++		return err;
++
++	newattr = *attr;
++	newattr.ia_uid = shift_kuid(sb->s_user_ns, sbinfo->userns, attr->ia_uid);
++	newattr.ia_gid = shift_kgid(sb->s_user_ns, sbinfo->userns, attr->ia_gid);
++
++	/*
++	 * mode change is for clearing setuid/setgid bits. Allow lower fs
++	 * to interpret this in its own way.
++	 */
++	if (newattr.ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))
++		newattr.ia_valid &= ~ATTR_MODE;
++
++	inode_lock(loweri);
++	oldcred = shiftfs_override_creds(dentry->d_sb);
++	err = notify_change(&init_user_ns, lowerd, &newattr, NULL);
++	revert_creds(oldcred);
++	inode_unlock(loweri);
++
++	shiftfs_copyattr(loweri, d_inode(dentry));
++
++	return err;
++}
++
++static int shiftfs_getattr(struct user_namespace *mnt_ns,
++			   const struct path *path, struct kstat *stat,
++			   u32 request_mask, unsigned int query_flags)
++{
++	struct inode *inode = path->dentry->d_inode;
++	struct dentry *lowerd = path->dentry->d_fsdata;
++	struct inode *loweri = lowerd->d_inode;
++	struct shiftfs_super_info *info = path->dentry->d_sb->s_fs_info;
++	struct path newpath = { .mnt = info->mnt, .dentry = lowerd };
++	struct user_namespace *from_ns = loweri->i_sb->s_user_ns;
++	struct user_namespace *to_ns = inode->i_sb->s_user_ns;
++	const struct cred *oldcred;
++	int err;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	err = vfs_getattr(&newpath, stat, request_mask, query_flags);
++	revert_creds(oldcred);
++
++	if (err)
++		return err;
++
++	/* transform the underlying id */
++	stat->uid = shift_kuid(from_ns, to_ns, stat->uid);
++	stat->gid = shift_kgid(from_ns, to_ns, stat->gid);
++	return 0;
++}
++
++#ifdef CONFIG_SHIFT_FS_POSIX_ACL
++
++static int
++shift_acl_ids(struct user_namespace *from, struct user_namespace *to,
++	      struct posix_acl *acl)
++{
++	int i;
++
++	for (i = 0; i < acl->a_count; i++) {
++		struct posix_acl_entry *e = &acl->a_entries[i];
++		switch (e->e_tag) {
++		case ACL_USER:
++			e->e_uid = shift_kuid(from, to, e->e_uid);
++			if (!uid_valid(e->e_uid))
++				return -EOVERFLOW;
++			break;
++		case ACL_GROUP:
++			e->e_gid = shift_kgid(from, to, e->e_gid);
++			if (!gid_valid(e->e_gid))
++				return -EOVERFLOW;
++			break;
++		}
++	}
++	return 0;
++}
++
++static void
++shift_acl_xattr_ids(struct user_namespace *from, struct user_namespace *to,
++		    void *value, size_t size)
++{
++	struct posix_acl_xattr_header *header = value;
++	struct posix_acl_xattr_entry *entry = (void *)(header + 1), *end;
++	int count;
++	kuid_t kuid;
++	kgid_t kgid;
++
++	if (!value)
++		return;
++	if (size < sizeof(struct posix_acl_xattr_header))
++		return;
++	if (header->a_version != cpu_to_le32(POSIX_ACL_XATTR_VERSION))
++		return;
++
++	count = posix_acl_xattr_count(size);
++	if (count < 0)
++		return;
++	if (count == 0)
++		return;
++
++	for (end = entry + count; entry != end; entry++) {
++		switch (le16_to_cpu(entry->e_tag)) {
++		case ACL_USER:
++			kuid = make_kuid(&init_user_ns, le32_to_cpu(entry->e_id));
++			kuid = shift_kuid(from, to, kuid);
++			entry->e_id = cpu_to_le32(from_kuid(&init_user_ns, kuid));
++			break;
++		case ACL_GROUP:
++			kgid = make_kgid(&init_user_ns, le32_to_cpu(entry->e_id));
++			kgid = shift_kgid(from, to, kgid);
++			entry->e_id = cpu_to_le32(from_kgid(&init_user_ns, kgid));
++			break;
++		default:
++			break;
++		}
++	}
++}
++
++static struct posix_acl *shiftfs_get_acl(struct inode *inode, int type)
++{
++	struct inode *loweri = inode->i_private;
++	const struct cred *oldcred;
++	struct posix_acl *lower_acl, *acl = NULL;
++	struct user_namespace *from_ns = loweri->i_sb->s_user_ns;
++	struct user_namespace *to_ns = inode->i_sb->s_user_ns;
++	int size;
++	int err;
++
++	if (!IS_POSIXACL(loweri))
++		return NULL;
++
++	oldcred = shiftfs_override_creds(inode->i_sb);
++	lower_acl = get_acl(loweri, type);
++	revert_creds(oldcred);
++
++	if (lower_acl && !IS_ERR(lower_acl)) {
++		/* XXX: export posix_acl_clone? */
++		size = sizeof(struct posix_acl) +
++		       lower_acl->a_count * sizeof(struct posix_acl_entry);
++		acl = kmemdup(lower_acl, size, GFP_KERNEL);
++		posix_acl_release(lower_acl);
++
++		if (!acl)
++			return ERR_PTR(-ENOMEM);
++
++		refcount_set(&acl->a_refcount, 1);
++
++		err = shift_acl_ids(from_ns, to_ns, acl);
++		if (err) {
++			kfree(acl);
++			return ERR_PTR(err);
++		}
++	}
++
++	return acl;
++}
++
++static int
++shiftfs_posix_acl_xattr_get(const struct xattr_handler *handler,
++			   struct dentry *dentry, struct inode *inode,
++			   const char *name, void *buffer, size_t size)
++{
++	struct inode *loweri = inode->i_private;
++	int ret;
++
++	ret = shiftfs_xattr_get(NULL, dentry, inode, handler->name,
++				buffer, size);
++	if (ret < 0)
++		return ret;
++
++	inode_lock(loweri);
++	shift_acl_xattr_ids(loweri->i_sb->s_user_ns, inode->i_sb->s_user_ns,
++			    buffer, size);
++	inode_unlock(loweri);
++	return ret;
++}
++
++static int
++shiftfs_posix_acl_xattr_set(const struct xattr_handler *handler,
++			    struct user_namespace *mnt_ns,
++			    struct dentry *dentry, struct inode *inode,
++			    const char *name, const void *value,
++			    size_t size, int flags)
++{
++	struct inode *loweri = inode->i_private;
++	int err;
++
++	if (!IS_POSIXACL(loweri) || !loweri->i_op->set_acl)
++		return -EOPNOTSUPP;
++	if (handler->flags == ACL_TYPE_DEFAULT && !S_ISDIR(inode->i_mode))
++		return value ? -EACCES : 0;
++	if (!inode_owner_or_capable(&init_user_ns, inode))
++		return -EPERM;
++
++	if (value) {
++		shift_acl_xattr_ids(inode->i_sb->s_user_ns,
++				    loweri->i_sb->s_user_ns,
++				    (void *)value, size);
++		err = shiftfs_setxattr(&init_user_ns, dentry, inode, handler->name, value,
++				       size, flags);
++	} else {
++		err = shiftfs_removexattr(&init_user_ns, dentry, handler->name);
++	}
++
++	if (!err)
++		shiftfs_copyattr(loweri, inode);
++
++	return err;
++}
++
++static const struct xattr_handler
++shiftfs_posix_acl_access_xattr_handler = {
++	.name = XATTR_NAME_POSIX_ACL_ACCESS,
++	.flags = ACL_TYPE_ACCESS,
++	.get = shiftfs_posix_acl_xattr_get,
++	.set = shiftfs_posix_acl_xattr_set,
++};
++
++static const struct xattr_handler
++shiftfs_posix_acl_default_xattr_handler = {
++	.name = XATTR_NAME_POSIX_ACL_DEFAULT,
++	.flags = ACL_TYPE_DEFAULT,
++	.get = shiftfs_posix_acl_xattr_get,
++	.set = shiftfs_posix_acl_xattr_set,
++};
++
++#else /* !CONFIG_SHIFT_FS_POSIX_ACL */
++
++#define shiftfs_get_acl NULL
++
++#endif /* CONFIG_SHIFT_FS_POSIX_ACL */
++
++static const struct inode_operations shiftfs_dir_inode_operations = {
++	.lookup		= shiftfs_lookup,
++	.mkdir		= shiftfs_mkdir,
++	.symlink	= shiftfs_symlink,
++	.unlink		= shiftfs_unlink,
++	.rmdir		= shiftfs_rmdir,
++	.rename		= shiftfs_rename,
++	.link		= shiftfs_link,
++	.setattr	= shiftfs_setattr,
++	.create		= shiftfs_create,
++	.mknod		= shiftfs_mknod,
++	.permission	= shiftfs_permission,
++	.getattr	= shiftfs_getattr,
++	.listxattr	= shiftfs_listxattr,
++	.get_acl	= shiftfs_get_acl,
++};
++
++static const struct inode_operations shiftfs_file_inode_operations = {
++	.fiemap		= shiftfs_fiemap,
++	.getattr	= shiftfs_getattr,
++	.get_acl	= shiftfs_get_acl,
++	.listxattr	= shiftfs_listxattr,
++	.permission	= shiftfs_permission,
++	.setattr	= shiftfs_setattr,
++	.tmpfile	= shiftfs_tmpfile,
++};
++
++static const struct inode_operations shiftfs_special_inode_operations = {
++	.getattr	= shiftfs_getattr,
++	.get_acl	= shiftfs_get_acl,
++	.listxattr	= shiftfs_listxattr,
++	.permission	= shiftfs_permission,
++	.setattr	= shiftfs_setattr,
++};
++
++static const struct inode_operations shiftfs_symlink_inode_operations = {
++	.getattr	= shiftfs_getattr,
++	.get_link	= shiftfs_get_link,
++	.listxattr	= shiftfs_listxattr,
++	.setattr	= shiftfs_setattr,
++};
++
++static struct file *shiftfs_open_realfile(const struct file *file,
++					  struct inode *realinode)
++{
++	struct file *realfile;
++	const struct cred *old_cred;
++	struct inode *inode = file_inode(file);
++	struct dentry *lowerd = file->f_path.dentry->d_fsdata;
++	struct shiftfs_super_info *info = inode->i_sb->s_fs_info;
++	struct path realpath = { .mnt = info->mnt, .dentry = lowerd };
++
++	old_cred = shiftfs_override_creds(inode->i_sb);
++	realfile = open_with_fake_path(&realpath, file->f_flags, realinode,
++				       info->creator_cred);
++	revert_creds(old_cred);
++
++	return realfile;
++}
++
++#define SHIFTFS_SETFL_MASK (O_APPEND | O_NONBLOCK | O_NDELAY | O_DIRECT)
++
++static int shiftfs_change_flags(struct file *file, unsigned int flags)
++{
++	struct inode *inode = file_inode(file);
++	int err;
++
++	/* if some flag changed that cannot be changed then something's amiss */
++	if (WARN_ON((file->f_flags ^ flags) & ~SHIFTFS_SETFL_MASK))
++		return -EIO;
++
++	flags &= SHIFTFS_SETFL_MASK;
++
++	if (((flags ^ file->f_flags) & O_APPEND) && IS_APPEND(inode))
++		return -EPERM;
++
++	if (flags & O_DIRECT) {
++		if (!file->f_mapping->a_ops ||
++		    !file->f_mapping->a_ops->direct_IO)
++			return -EINVAL;
++	}
++
++	if (file->f_op->check_flags) {
++		err = file->f_op->check_flags(flags);
++		if (err)
++			return err;
++	}
++
++	spin_lock(&file->f_lock);
++	file->f_flags = (file->f_flags & ~SHIFTFS_SETFL_MASK) | flags;
++	spin_unlock(&file->f_lock);
++
++	return 0;
++}
++
++static int shiftfs_open(struct inode *inode, struct file *file)
++{
++	struct file *realfile;
++
++	realfile = shiftfs_open_realfile(file, inode->i_private);
++	if (IS_ERR(realfile))
++		return PTR_ERR(realfile);
++
++	file->private_data = realfile;
++	/* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO. */
++	file->f_mapping = realfile->f_mapping;
++
++	return 0;
++}
++
++static int shiftfs_dir_open(struct inode *inode, struct file *file)
++{
++	struct file *realfile;
++	const struct cred *oldcred;
++	struct dentry *lowerd = file->f_path.dentry->d_fsdata;
++	struct shiftfs_super_info *info = inode->i_sb->s_fs_info;
++	struct path realpath = { .mnt = info->mnt, .dentry = lowerd };
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	realfile = dentry_open(&realpath, file->f_flags | O_NOATIME,
++			       info->creator_cred);
++	revert_creds(oldcred);
++	if (IS_ERR(realfile))
++		return PTR_ERR(realfile);
++
++	file->private_data = realfile;
++
++	return 0;
++}
++
++static int shiftfs_release(struct inode *inode, struct file *file)
++{
++	struct file *realfile = file->private_data;
++
++	if (realfile)
++		fput(realfile);
++
++	return 0;
++}
++
++static int shiftfs_dir_release(struct inode *inode, struct file *file)
++{
++	return shiftfs_release(inode, file);
++}
++
++static loff_t shiftfs_dir_llseek(struct file *file, loff_t offset, int whence)
++{
++	struct file *realfile = file->private_data;
++
++	return vfs_llseek(realfile, offset, whence);
++}
++
++static loff_t shiftfs_file_llseek(struct file *file, loff_t offset, int whence)
++{
++	struct inode *realinode = file_inode(file)->i_private;
++
++	return generic_file_llseek_size(file, offset, whence,
++					realinode->i_sb->s_maxbytes,
++					i_size_read(realinode));
++}
++
++/* XXX: Need to figure out what to do about atime updates, maybe other
++ * timestamps too ... ref. ovl_file_accessed() */
++
++static rwf_t shiftfs_iocb_to_rwf(struct kiocb *iocb)
++{
++	int ifl = iocb->ki_flags;
++	rwf_t flags = 0;
++
++	if (ifl & IOCB_NOWAIT)
++		flags |= RWF_NOWAIT;
++	if (ifl & IOCB_HIPRI)
++		flags |= RWF_HIPRI;
++	if (ifl & IOCB_DSYNC)
++		flags |= RWF_DSYNC;
++	if (ifl & IOCB_SYNC)
++		flags |= RWF_SYNC;
++
++	return flags;
++}
++
++static int shiftfs_real_fdget(const struct file *file, struct fd *lowerfd)
++{
++	struct file *realfile;
++
++	if (file->f_op->open != shiftfs_open &&
++	    file->f_op->open != shiftfs_dir_open)
++		return -EINVAL;
++
++	realfile = file->private_data;
++	lowerfd->flags = 0;
++	lowerfd->file = realfile;
++
++	/* Did the flags change since open? */
++	if (unlikely(file->f_flags & ~lowerfd->file->f_flags))
++		return shiftfs_change_flags(lowerfd->file, file->f_flags);
++
++	return 0;
++}
++
++static ssize_t shiftfs_read_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++	struct file *file = iocb->ki_filp;
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	ssize_t ret;
++
++	if (!iov_iter_count(iter))
++		return 0;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_iter_read(lowerfd.file, iter, &iocb->ki_pos,
++			    shiftfs_iocb_to_rwf(iocb));
++	revert_creds(oldcred);
++
++	shiftfs_file_accessed(file);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static ssize_t shiftfs_write_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++	struct file *file = iocb->ki_filp;
++	struct inode *inode = file_inode(file);
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	ssize_t ret;
++
++	if (!iov_iter_count(iter))
++		return 0;
++
++	inode_lock(inode);
++	/* Update mode */
++	shiftfs_copyattr(inode->i_private, inode);
++	ret = file_remove_privs(file);
++	if (ret)
++		goto out_unlock;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		goto out_unlock;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	file_start_write(lowerfd.file);
++	ret = vfs_iter_write(lowerfd.file, iter, &iocb->ki_pos,
++			     shiftfs_iocb_to_rwf(iocb));
++	file_end_write(lowerfd.file);
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(inode->i_private, inode);
++
++	fdput(lowerfd);
++
++out_unlock:
++	inode_unlock(inode);
++	return ret;
++}
++
++static int shiftfs_fsync(struct file *file, loff_t start, loff_t end,
++			 int datasync)
++{
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fsync_range(lowerfd.file, start, end, datasync);
++	revert_creds(oldcred);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_mmap(struct file *file, struct vm_area_struct *vma)
++{
++	struct file *realfile = file->private_data;
++	const struct cred *oldcred;
++	int ret;
++
++	if (!realfile->f_op->mmap)
++		return -ENODEV;
++
++	if (WARN_ON(file != vma->vm_file))
++		return -EIO;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	vma->vm_file = get_file(realfile);
++	ret = call_mmap(vma->vm_file, vma);
++	revert_creds(oldcred);
++
++	shiftfs_file_accessed(file);
++
++	if (ret) {
++		/*
++		 * Drop refcount from new vm_file value and restore original
++		 * vm_file value
++		 */
++		vma->vm_file = file;
++		fput(realfile);
++	} else {
++		/* Drop refcount from previous vm_file value */
++		fput(file);
++	}
++
++	return ret;
++}
++
++static long shiftfs_fallocate(struct file *file, int mode, loff_t offset,
++			      loff_t len)
++{
++	struct inode *inode = file_inode(file);
++	struct inode *loweri = inode->i_private;
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fallocate(lowerfd.file, mode, offset, len);
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(loweri, inode);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_fadvise(struct file *file, loff_t offset, loff_t len,
++			   int advice)
++{
++	struct fd lowerfd;
++	const struct cred *oldcred;
++	int ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		return ret;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	ret = vfs_fadvise(lowerfd.file, offset, len, advice);
++	revert_creds(oldcred);
++
++	fdput(lowerfd);
++	return ret;
++}
++
++static int shiftfs_override_ioctl_creds(int cmd, const struct super_block *sb,
++					const struct cred **oldcred,
++					struct cred **newcred)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	kuid_t fsuid = current_fsuid();
++	kgid_t fsgid = current_fsgid();
++
++	*oldcred = shiftfs_override_creds(sb);
++
++	*newcred = prepare_creds();
++	if (!*newcred) {
++		revert_creds(*oldcred);
++		return -ENOMEM;
++	}
++
++	(*newcred)->fsuid = shift_kuid(sb->s_user_ns, sbinfo->userns, fsuid);
++	(*newcred)->fsgid = shift_kgid(sb->s_user_ns, sbinfo->userns, fsgid);
++
++	/* clear all caps to prevent bypassing capable() checks */
++	cap_clear((*newcred)->cap_bset);
++	cap_clear((*newcred)->cap_effective);
++	cap_clear((*newcred)->cap_inheritable);
++	cap_clear((*newcred)->cap_permitted);
++
++	if (cmd == BTRFS_IOC_SNAP_DESTROY) {
++		kuid_t kuid_root = make_kuid(sb->s_user_ns, 0);
++		/*
++		 * Allow the root user in the container to remove subvolumes
++		 * from other users.
++		 */
++		if (uid_valid(kuid_root) && uid_eq(fsuid, kuid_root))
++			cap_raise((*newcred)->cap_effective, CAP_DAC_OVERRIDE);
++	}
++
++	put_cred(override_creds(*newcred));
++	return 0;
++}
++
++static inline void shiftfs_revert_ioctl_creds(const struct cred *oldcred,
++					      struct cred *newcred)
++{
++	return shiftfs_revert_object_creds(oldcred, newcred);
++}
++
++static inline bool is_btrfs_snap_ioctl(int cmd)
++{
++	if ((cmd == BTRFS_IOC_SNAP_CREATE) || (cmd == BTRFS_IOC_SNAP_CREATE_V2))
++		return true;
++
++	return false;
++}
++
++static int shiftfs_btrfs_ioctl_fd_restore(int cmd, int fd, void __user *arg,
++					  struct btrfs_ioctl_vol_args *v1,
++					  struct btrfs_ioctl_vol_args_v2 *v2)
++{
++	int ret;
++
++	if (!is_btrfs_snap_ioctl(cmd))
++		return 0;
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE)
++		ret = copy_to_user(arg, v1, sizeof(*v1));
++	else
++		ret = copy_to_user(arg, v2, sizeof(*v2));
++
++	close_fd(fd);
++	kfree(v1);
++	kfree(v2);
++
++	return ret ? -EFAULT : 0;
++}
++
++static int shiftfs_btrfs_ioctl_fd_replace(int cmd, void __user *arg,
++					  struct btrfs_ioctl_vol_args **b1,
++					  struct btrfs_ioctl_vol_args_v2 **b2,
++					  int *newfd)
++{
++	int oldfd, ret;
++	struct fd src;
++	struct fd lfd = {};
++	struct btrfs_ioctl_vol_args *v1 = NULL;
++	struct btrfs_ioctl_vol_args_v2 *v2 = NULL;
++
++	*b1 = NULL;
++	*b2 = NULL;
++
++	if (!is_btrfs_snap_ioctl(cmd))
++		return 0;
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE) {
++		v1 = memdup_user(arg, sizeof(*v1));
++		if (IS_ERR(v1))
++			return PTR_ERR(v1);
++		oldfd = v1->fd;
++	} else {
++		v2 = memdup_user(arg, sizeof(*v2));
++		if (IS_ERR(v2))
++			return PTR_ERR(v2);
++		oldfd = v2->fd;
++	}
++
++	src = fdget(oldfd);
++	if (!src.file) {
++		ret = -EINVAL;
++		goto err_free;
++	}
++
++	ret = shiftfs_real_fdget(src.file, &lfd);
++	if (ret) {
++		fdput(src);
++		goto err_free;
++	}
++
++	/*
++	 * shiftfs_real_fdget() does not take a reference to lfd.file, so
++	 * take a reference here to offset the one which will be put by
++	 * close_fd(), and make sure that reference is put on fdput(lfd).
++	 */
++	get_file(lfd.file);
++	lfd.flags |= FDPUT_FPUT;
++	fdput(src);
++
++	*newfd = get_unused_fd_flags(lfd.file->f_flags);
++	if (*newfd < 0) {
++		fdput(lfd);
++		ret = *newfd;
++		goto err_free;
++	}
++
++	fd_install(*newfd, lfd.file);
++
++	if (cmd == BTRFS_IOC_SNAP_CREATE) {
++		v1->fd = *newfd;
++		ret = copy_to_user(arg, v1, sizeof(*v1));
++		v1->fd = oldfd;
++	} else {
++		v2->fd = *newfd;
++		ret = copy_to_user(arg, v2, sizeof(*v2));
++		v2->fd = oldfd;
++	}
++
++	if (!ret) {
++		*b1 = v1;
++		*b2 = v2;
++	} else {
++		shiftfs_btrfs_ioctl_fd_restore(cmd, *newfd, arg, v1, v2);
++		ret = -EFAULT;
++	}
++
++	return ret;
++
++err_free:
++	kfree(v1);
++	kfree(v2);
++
++	return ret;
++}
++
++static long shiftfs_real_ioctl(struct file *file, unsigned int cmd,
++			       unsigned long arg)
++{
++	struct fd lowerfd;
++	struct cred *newcred;
++	const struct cred *oldcred;
++	int newfd = -EBADF;
++	long err = 0, ret = 0;
++	void __user *argp = (void __user *)arg;
++	struct super_block *sb = file->f_path.dentry->d_sb;
++	struct btrfs_ioctl_vol_args *btrfs_v1 = NULL;
++	struct btrfs_ioctl_vol_args_v2 *btrfs_v2 = NULL;
++
++	ret = shiftfs_btrfs_ioctl_fd_replace(cmd, argp, &btrfs_v1, &btrfs_v2,
++					     &newfd);
++	if (ret < 0)
++		return ret;
++
++	ret = shiftfs_real_fdget(file, &lowerfd);
++	if (ret)
++		goto out_restore;
++
++	ret = shiftfs_override_ioctl_creds(cmd, sb, &oldcred, &newcred);
++	if (ret)
++		goto out_fdput;
++
++	ret = vfs_ioctl(lowerfd.file, cmd, arg);
++
++	shiftfs_revert_ioctl_creds(oldcred, newcred);
++
++	shiftfs_copyattr(file_inode(lowerfd.file), file_inode(file));
++	shiftfs_copyflags(file_inode(lowerfd.file), file_inode(file));
++
++out_fdput:
++	fdput(lowerfd);
++
++out_restore:
++	err = shiftfs_btrfs_ioctl_fd_restore(cmd, newfd, argp,
++					     btrfs_v1, btrfs_v2);
++	if (!ret)
++		ret = err;
++
++	return ret;
++}
++
++static bool in_ioctl_whitelist(int flag, unsigned long arg)
++{
++	void __user *argp = (void __user *)arg;
++	u64 flags = 0;
++
++	switch (flag) {
++	case BTRFS_IOC_FS_INFO:
++		return true;
++	case BTRFS_IOC_SNAP_CREATE:
++		return true;
++	case BTRFS_IOC_SNAP_CREATE_V2:
++		return true;
++	case BTRFS_IOC_SUBVOL_CREATE:
++		return true;
++	case BTRFS_IOC_SUBVOL_CREATE_V2:
++		return true;
++	case BTRFS_IOC_SUBVOL_GETFLAGS:
++		return true;
++	case BTRFS_IOC_SUBVOL_SETFLAGS:
++		if (copy_from_user(&flags, argp, sizeof(flags)))
++			return false;
++
++		if (flags & ~BTRFS_SUBVOL_RDONLY)
++			return false;
++
++		return true;
++	case BTRFS_IOC_SNAP_DESTROY:
++		return true;
++	}
++
++	return false;
++}
++
++static long shiftfs_ioctl(struct file *file, unsigned int cmd,
++			  unsigned long arg)
++{
++	switch (cmd) {
++	case FS_IOC_GETVERSION:
++		/* fall through */
++	case FS_IOC_GETFLAGS:
++		/* fall through */
++	case FS_IOC_SETFLAGS:
++		break;
++	default:
++		if (!in_ioctl_whitelist(cmd, arg) ||
++		    !shiftfs_passthrough_ioctls(file->f_path.dentry->d_sb->s_fs_info))
++			return -ENOTTY;
++	}
++
++	return shiftfs_real_ioctl(file, cmd, arg);
++}
++
++static long shiftfs_compat_ioctl(struct file *file, unsigned int cmd,
++				 unsigned long arg)
++{
++	switch (cmd) {
++	case FS_IOC32_GETVERSION:
++		/* fall through */
++	case FS_IOC32_GETFLAGS:
++		/* fall through */
++	case FS_IOC32_SETFLAGS:
++		break;
++	default:
++		if (!in_ioctl_whitelist(cmd, arg) ||
++		    !shiftfs_passthrough_ioctls(file->f_path.dentry->d_sb->s_fs_info))
++			return -ENOIOCTLCMD;
++	}
++
++	return shiftfs_real_ioctl(file, cmd, arg);
++}
++
++enum shiftfs_copyop {
++	SHIFTFS_COPY,
++	SHIFTFS_CLONE,
++	SHIFTFS_DEDUPE,
++};
++
++static ssize_t shiftfs_copyfile(struct file *file_in, loff_t pos_in,
++				struct file *file_out, loff_t pos_out, u64 len,
++				unsigned int flags, enum shiftfs_copyop op)
++{
++	ssize_t ret;
++	struct fd real_in, real_out;
++	const struct cred *oldcred;
++	struct inode *inode_out = file_inode(file_out);
++	struct inode *loweri = inode_out->i_private;
++
++	ret = shiftfs_real_fdget(file_out, &real_out);
++	if (ret)
++		return ret;
++
++	ret = shiftfs_real_fdget(file_in, &real_in);
++	if (ret) {
++		fdput(real_out);
++		return ret;
++	}
++
++	oldcred = shiftfs_override_creds(inode_out->i_sb);
++	switch (op) {
++	case SHIFTFS_COPY:
++		ret = vfs_copy_file_range(real_in.file, pos_in, real_out.file,
++					  pos_out, len, flags);
++		break;
++
++	case SHIFTFS_CLONE:
++		ret = vfs_clone_file_range(real_in.file, pos_in, real_out.file,
++					   pos_out, len, flags);
++		break;
++
++	case SHIFTFS_DEDUPE:
++		ret = vfs_dedupe_file_range_one(real_in.file, pos_in,
++						real_out.file, pos_out, len,
++						flags);
++		break;
++	}
++	revert_creds(oldcred);
++
++	/* Update size */
++	shiftfs_copyattr(loweri, inode_out);
++
++	fdput(real_in);
++	fdput(real_out);
++
++	return ret;
++}
++
++static ssize_t shiftfs_copy_file_range(struct file *file_in, loff_t pos_in,
++				       struct file *file_out, loff_t pos_out,
++				       size_t len, unsigned int flags)
++{
++	return shiftfs_copyfile(file_in, pos_in, file_out, pos_out, len, flags,
++				SHIFTFS_COPY);
++}
++
++static loff_t shiftfs_remap_file_range(struct file *file_in, loff_t pos_in,
++				       struct file *file_out, loff_t pos_out,
++				       loff_t len, unsigned int remap_flags)
++{
++	enum shiftfs_copyop op;
++
++	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
++		return -EINVAL;
++
++	if (remap_flags & REMAP_FILE_DEDUP)
++		op = SHIFTFS_DEDUPE;
++	else
++		op = SHIFTFS_CLONE;
++
++	return shiftfs_copyfile(file_in, pos_in, file_out, pos_out, len,
++				remap_flags, op);
++}
++
++static int shiftfs_iterate_shared(struct file *file, struct dir_context *ctx)
++{
++	const struct cred *oldcred;
++	int err = -ENOTDIR;
++	struct file *realfile = file->private_data;
++
++	oldcred = shiftfs_override_creds(file->f_path.dentry->d_sb);
++	err = iterate_dir(realfile, ctx);
++	revert_creds(oldcred);
++
++	return err;
++}
++
++const struct file_operations shiftfs_file_operations = {
++	.open			= shiftfs_open,
++	.release		= shiftfs_release,
++	.llseek			= shiftfs_file_llseek,
++	.read_iter		= shiftfs_read_iter,
++	.write_iter		= shiftfs_write_iter,
++	.fsync			= shiftfs_fsync,
++	.mmap			= shiftfs_mmap,
++	.fallocate		= shiftfs_fallocate,
++	.fadvise		= shiftfs_fadvise,
++	.unlocked_ioctl		= shiftfs_ioctl,
++	.compat_ioctl		= shiftfs_compat_ioctl,
++	.copy_file_range	= shiftfs_copy_file_range,
++	.remap_file_range	= shiftfs_remap_file_range,
++	.splice_read		= generic_file_splice_read,
++	.splice_write		= iter_file_splice_write,
++};
++
++const struct file_operations shiftfs_dir_operations = {
++	.open			= shiftfs_dir_open,
++	.release		= shiftfs_dir_release,
++	.compat_ioctl		= shiftfs_compat_ioctl,
++	.fsync			= shiftfs_fsync,
++	.iterate_shared		= shiftfs_iterate_shared,
++	.llseek			= shiftfs_dir_llseek,
++	.read			= generic_read_dir,
++	.unlocked_ioctl		= shiftfs_ioctl,
++};
++
++static const struct address_space_operations shiftfs_aops = {
++	/* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO */
++	.direct_IO	= noop_direct_IO,
++};
++
++static void shiftfs_fill_inode(struct inode *inode, unsigned long ino,
++			       umode_t mode, dev_t dev, struct dentry *dentry)
++{
++	struct inode *loweri;
++
++	inode->i_ino = ino;
++	inode->i_flags |= S_NOCMTIME;
++
++	mode &= S_IFMT;
++	inode->i_mode = mode;
++	switch (mode & S_IFMT) {
++	case S_IFDIR:
++		inode->i_op = &shiftfs_dir_inode_operations;
++		inode->i_fop = &shiftfs_dir_operations;
++		break;
++	case S_IFLNK:
++		inode->i_op = &shiftfs_symlink_inode_operations;
++		break;
++	case S_IFREG:
++		inode->i_op = &shiftfs_file_inode_operations;
++		inode->i_fop = &shiftfs_file_operations;
++		inode->i_mapping->a_ops = &shiftfs_aops;
++		break;
++	default:
++		inode->i_op = &shiftfs_special_inode_operations;
++		init_special_inode(inode, mode, dev);
++		break;
++	}
++
++	if (!dentry)
++		return;
++
++	loweri = dentry->d_inode;
++	if (!loweri->i_op->get_link)
++		inode->i_opflags |= IOP_NOFOLLOW;
++
++	shiftfs_copyattr(loweri, inode);
++	shiftfs_copyflags(loweri, inode);
++	set_nlink(inode, loweri->i_nlink);
++}
++
++static int shiftfs_show_options(struct seq_file *m, struct dentry *dentry)
++{
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	if (sbinfo->mark)
++		seq_show_option(m, "mark", NULL);
++
++	if (sbinfo->passthrough)
++		seq_printf(m, ",passthrough=%u", sbinfo->passthrough);
++
++	return 0;
++}
++
++static int shiftfs_statfs(struct dentry *dentry, struct kstatfs *buf)
++{
++	struct super_block *sb = dentry->d_sb;
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++	struct dentry *root = sb->s_root;
++	struct dentry *realroot = root->d_fsdata;
++	struct path realpath = { .mnt = sbinfo->mnt, .dentry = realroot };
++	int err;
++
++	err = vfs_statfs(&realpath, buf);
++	if (err)
++		return err;
++
++	if (!shiftfs_passthrough_statfs(sbinfo))
++		buf->f_type = sb->s_magic;
++
++	return 0;
++}
++
++static void shiftfs_evict_inode(struct inode *inode)
++{
++	struct inode *loweri = inode->i_private;
++
++	clear_inode(inode);
++
++	if (loweri)
++		iput(loweri);
++}
++
++static void shiftfs_put_super(struct super_block *sb)
++{
++	struct shiftfs_super_info *sbinfo = sb->s_fs_info;
++
++	if (sbinfo) {
++		mntput(sbinfo->mnt);
++		put_cred(sbinfo->creator_cred);
++		kfree(sbinfo);
++	}
++}
++
++static const struct xattr_handler shiftfs_xattr_handler = {
++	.prefix = "",
++	.get    = shiftfs_xattr_get,
++	.set    = shiftfs_xattr_set,
++};
++
++const struct xattr_handler *shiftfs_xattr_handlers[] = {
++#ifdef CONFIG_SHIFT_FS_POSIX_ACL
++	&shiftfs_posix_acl_access_xattr_handler,
++	&shiftfs_posix_acl_default_xattr_handler,
++#endif
++	&shiftfs_xattr_handler,
++	NULL
++};
++
++static inline bool passthrough_is_subset(int old_flags, int new_flags)
++{
++	if ((new_flags & old_flags) != new_flags)
++		return false;
++
++	return true;
++}
++
++static int shiftfs_super_check_flags(unsigned long old_flags,
++				     unsigned long new_flags)
++{
++	if ((old_flags & SB_RDONLY) && !(new_flags & SB_RDONLY))
++		return -EPERM;
++
++	if ((old_flags & SB_NOSUID) && !(new_flags & SB_NOSUID))
++		return -EPERM;
++
++	if ((old_flags & SB_NODEV) && !(new_flags & SB_NODEV))
++		return -EPERM;
++
++	if ((old_flags & SB_NOEXEC) && !(new_flags & SB_NOEXEC))
++		return -EPERM;
++
++	if ((old_flags & SB_NOATIME) && !(new_flags & SB_NOATIME))
++		return -EPERM;
++
++	if ((old_flags & SB_NODIRATIME) && !(new_flags & SB_NODIRATIME))
++		return -EPERM;
++
++	if (!(old_flags & SB_POSIXACL) && (new_flags & SB_POSIXACL))
++		return -EPERM;
++
++	return 0;
++}
++
++static int shiftfs_remount(struct super_block *sb, int *flags, char *data)
++{
++	int err;
++	struct shiftfs_super_info new = {};
++	struct shiftfs_super_info *info = sb->s_fs_info;
++
++	err = shiftfs_parse_mount_options(&new, data);
++	if (err)
++		return err;
++
++	err = shiftfs_super_check_flags(sb->s_flags, *flags);
++	if (err)
++		return err;
++
++	/* Mark mount option cannot be changed. */
++	if (info->mark || (info->mark != new.mark))
++		return -EPERM;
++
++	if (info->passthrough != new.passthrough) {
++		/* Don't allow exceeding passthrough options of mark mount. */
++		if (!passthrough_is_subset(info->passthrough_mark,
++					   info->passthrough))
++			return -EPERM;
++
++		info->passthrough = new.passthrough;
++	}
++
++	return 0;
++}
++
++static const struct super_operations shiftfs_super_ops = {
++	.put_super	= shiftfs_put_super,
++	.show_options	= shiftfs_show_options,
++	.statfs		= shiftfs_statfs,
++	.remount_fs	= shiftfs_remount,
++	.evict_inode	= shiftfs_evict_inode,
++};
++
++struct shiftfs_data {
++	void *data;
++	const char *path;
++};
++
++static void shiftfs_super_force_flags(struct super_block *sb,
++				      unsigned long lower_flags)
++{
++	sb->s_flags |= lower_flags & (SB_RDONLY | SB_NOSUID | SB_NODEV |
++				      SB_NOEXEC | SB_NOATIME | SB_NODIRATIME);
++
++	if (!(lower_flags & SB_POSIXACL))
++		sb->s_flags &= ~SB_POSIXACL;
++}
++
++static int shiftfs_fill_super(struct super_block *sb, void *raw_data,
++			      int silent)
++{
++	int err;
++	struct path path = {};
++	struct shiftfs_super_info *sbinfo_mp;
++	char *name = NULL;
++	struct inode *inode = NULL;
++	struct dentry *dentry = NULL;
++	struct shiftfs_data *data = raw_data;
++	struct shiftfs_super_info *sbinfo = NULL;
++
++	if (!data->path)
++		return -EINVAL;
++
++	sb->s_fs_info = kzalloc(sizeof(*sbinfo), GFP_KERNEL);
++	if (!sb->s_fs_info)
++		return -ENOMEM;
++	sbinfo = sb->s_fs_info;
++
++	err = shiftfs_parse_mount_options(sbinfo, data->data);
++	if (err)
++		return err;
++
++	/* to mount a mark, must be userns admin */
++	if (!sbinfo->mark && !ns_capable(current_user_ns(), CAP_SYS_ADMIN))
++		return -EPERM;
++
++	name = kstrdup(data->path, GFP_KERNEL);
++	if (!name)
++		return -ENOMEM;
++
++	err = kern_path(name, LOOKUP_FOLLOW, &path);
++	if (err)
++		goto out_free_name;
++
++	if (!S_ISDIR(path.dentry->d_inode->i_mode)) {
++		err = -ENOTDIR;
++		goto out_put_path;
++	}
++
++	sb->s_flags |= SB_POSIXACL;
++
++	if (sbinfo->mark) {
++		struct cred *cred_tmp;
++		struct super_block *lower_sb = path.mnt->mnt_sb;
++
++		/* to mark a mount point, must be root wrt lower s_user_ns */
++		if (!ns_capable(lower_sb->s_user_ns, CAP_SYS_ADMIN)) {
++			err = -EPERM;
++			goto out_put_path;
++		}
++
++		/*
++		 * this part is visible unshifted, so make sure there are
++		 * no executables here that could be used to gain suid
++		 * privileges
++		 */
++		sb->s_iflags = SB_I_NOEXEC;
++
++		shiftfs_super_force_flags(sb, lower_sb->s_flags);
++
++		/*
++		 * Handle nesting of shiftfs mounts by referring this mark
++		 * mount back to the original mark mount. This is more
++		 * efficient and alleviates concerns about stack depth.
++		 */
++		if (lower_sb->s_magic == SHIFTFS_MAGIC) {
++			sbinfo_mp = lower_sb->s_fs_info;
++
++			/* Doesn't make sense to mark a mark mount */
++			if (sbinfo_mp->mark) {
++				err = -EINVAL;
++				goto out_put_path;
++			}
++
++			if (!passthrough_is_subset(sbinfo_mp->passthrough,
++						   sbinfo->passthrough)) {
++				err = -EPERM;
++				goto out_put_path;
++			}
++
++			sbinfo->mnt = mntget(sbinfo_mp->mnt);
++			dentry = dget(path.dentry->d_fsdata);
++			/*
++			 * Copy up the passthrough mount options from the
++			 * parent mark mountpoint.
++			 */
++			sbinfo->passthrough_mark = sbinfo_mp->passthrough_mark;
++			sbinfo->creator_cred = get_cred(sbinfo_mp->creator_cred);
++		} else {
++			sbinfo->mnt = mntget(path.mnt);
++			dentry = dget(path.dentry);
++			/*
++			 * For a new mark passthrough_mark and passthrough
++			 * are identical.
++			 */
++			sbinfo->passthrough_mark = sbinfo->passthrough;
++
++			/* We don't allow shiftfs on top of idmapped mounts. */
++			if (path.mnt->mnt_userns != &init_user_ns) {
++				err = -EPERM;
++				goto out_put_path;
++			}
++
++			cred_tmp = prepare_creds();
++			if (!cred_tmp) {
++				err = -ENOMEM;
++				goto out_put_path;
++			}
++			/* Don't override disk quota limits or use reserved space. */
++			cap_lower(cred_tmp->cap_effective, CAP_SYS_RESOURCE);
++			sbinfo->creator_cred = cred_tmp;
++		}
++	} else {
++		/*
++		 * This leg executes if we're admin capable in the namespace,
++		 * so be very careful.
++		 */
++		err = -EPERM;
++		if (path.dentry->d_sb->s_magic != SHIFTFS_MAGIC)
++			goto out_put_path;
++
++		sbinfo_mp = path.dentry->d_sb->s_fs_info;
++		if (!sbinfo_mp->mark)
++			goto out_put_path;
++
++		if (!passthrough_is_subset(sbinfo_mp->passthrough,
++					   sbinfo->passthrough))
++			goto out_put_path;
++
++		sbinfo->mnt = mntget(sbinfo_mp->mnt);
++		sbinfo->creator_cred = get_cred(sbinfo_mp->creator_cred);
++		dentry = dget(path.dentry->d_fsdata);
++		/*
++		 * Copy up passthrough settings from mark mountpoint so we can
++		 * verify when the overlay wants to remount with different
++		 * passthrough settings.
++		 */
++		sbinfo->passthrough_mark = sbinfo_mp->passthrough;
++		shiftfs_super_force_flags(sb, path.mnt->mnt_sb->s_flags);
++	}
++
++	sb->s_stack_depth = dentry->d_sb->s_stack_depth + 1;
++	if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) {
++		printk(KERN_ERR "shiftfs: maximum stacking depth exceeded\n");
++		err = -EINVAL;
++		goto out_put_path;
++	}
++
++	inode = new_inode(sb);
++	if (!inode) {
++		err = -ENOMEM;
++		goto out_put_path;
++	}
++	shiftfs_fill_inode(inode, dentry->d_inode->i_ino, S_IFDIR, 0, dentry);
++
++	ihold(dentry->d_inode);
++	inode->i_private = dentry->d_inode;
++
++	sb->s_magic = SHIFTFS_MAGIC;
++	sb->s_maxbytes = MAX_LFS_FILESIZE;
++	sb->s_op = &shiftfs_super_ops;
++	sb->s_xattr = shiftfs_xattr_handlers;
++	sb->s_d_op = &shiftfs_dentry_ops;
++	sb->s_root = d_make_root(inode);
++	if (!sb->s_root) {
++		err = -ENOMEM;
++		goto out_put_path;
++	}
++
++	sb->s_root->d_fsdata = dentry;
++	sbinfo->userns = get_user_ns(dentry->d_sb->s_user_ns);
++	shiftfs_copyattr(dentry->d_inode, sb->s_root->d_inode);
++
++	dentry = NULL;
++	err = 0;
++
++out_put_path:
++	path_put(&path);
++
++out_free_name:
++	kfree(name);
++
++	dput(dentry);
++
++	return err;
++}
++
++static struct dentry *shiftfs_mount(struct file_system_type *fs_type,
++				    int flags, const char *dev_name, void *data)
++{
++	struct shiftfs_data d = { data, dev_name };
++
++	return mount_nodev(fs_type, flags, &d, shiftfs_fill_super);
++}
++
++static struct file_system_type shiftfs_type = {
++	.owner		= THIS_MODULE,
++	.name		= "shiftfs",
++	.mount		= shiftfs_mount,
++	.kill_sb	= kill_anon_super,
++	.fs_flags	= FS_USERNS_MOUNT,
++};
++
++static int __init shiftfs_init(void)
++{
++	return register_filesystem(&shiftfs_type);
++}
++
++static void __exit shiftfs_exit(void)
++{
++	unregister_filesystem(&shiftfs_type);
++}
++
++MODULE_ALIAS_FS("shiftfs");
++MODULE_AUTHOR("James Bottomley");
++MODULE_AUTHOR("Seth Forshee <seth.forshee@canonical.com>");
++MODULE_AUTHOR("Christian Brauner <christian.brauner@ubuntu.com>");
++MODULE_DESCRIPTION("id shifting filesystem");
++MODULE_LICENSE("GPL v2");
++module_init(shiftfs_init)
++module_exit(shiftfs_exit)
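Taken together, the passthrough checks in this file reduce to one bitmask
rule: a remount or a non-mark mount may only request passthrough bits that
the mark mount already grants. A minimal standalone sketch of that subset
test, with illustrative flag values (the sample constants are hypothetical,
not taken from the patch):

#include <stdbool.h>
#include <stdio.h>

/* Same test as passthrough_is_subset(): new_flags must not set any
 * bit that old_flags does not already have. */
static bool is_subset(int old_flags, int new_flags)
{
	return (new_flags & old_flags) == new_flags;
}

int main(void)
{
	int mark = 0x3;	/* pretend the mark mount grants bits 0 and 1 */

	printf("%d\n", is_subset(mark, 0x1));	/* 1: strict subset */
	printf("%d\n", is_subset(mark, 0x4));	/* 0: requests a new bit */
	return 0;
}

Checking (new & old) == new rather than looping over individual bits keeps
the test constant-time, which is why the kernel helper is written the same
way.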
+--- a/include/uapi/linux/magic.h	2021-10-21 07:47:16.958061380 -0400
++++ b/include/uapi/linux/magic.h	2021-10-21 07:49:50.488781857 -0400
+@@ -99,4 +99,6 @@
+ #define PPC_CMM_MAGIC		0xc7571590
+ #define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
+ 
++#define SHIFTFS_MAGIC		0x6a656a62
++
+ #endif /* __LINUX_MAGIC_H__ */
+--- a/fs/Makefile	2021-10-21 07:50:07.698862434 -0400
++++ b/fs/Makefile	2021-10-21 07:50:39.679012077 -0400
+@@ -135,3 +135,4 @@ obj-$(CONFIG_EFIVAR_FS)		+= efivarfs/
+ obj-$(CONFIG_EROFS_FS)		+= erofs/
+ obj-$(CONFIG_VBOXSF_FS)		+= vboxsf/
+ obj-$(CONFIG_ZONEFS_FS)		+= zonefs/
++obj-$(CONFIG_SHIFT_FS)   += shiftfs.o
+--- a/fs/Kconfig	2021-10-21 07:51:37.352615023 -0400
++++ b/fs/Kconfig	2021-10-21 07:52:56.932986560 -0400
+@@ -123,6 +123,26 @@ source "fs/autofs/Kconfig"
+ source "fs/fuse/Kconfig"
+ source "fs/overlayfs/Kconfig"
+ 
++config SHIFT_FS
++	tristate "UID/GID shifting overlay filesystem for containers"
++	help
++  	This filesystem can overlay any mounted filesystem and shift
++  	the uid/gid at which the files appear.  The idea is that
++  	unprivileged containers can use this to mount root volumes.
++
++config SHIFT_FS_POSIX_ACL
++	bool "shiftfs POSIX Access Control Lists"
++	depends on SHIFT_FS
++	select FS_POSIX_ACL
++	help
++  	POSIX Access Control Lists (ACLs) support permissions for users and
++  	groups beyond the owner/group/world scheme.
++
++  	If you don't know what Access Control Lists are, say N.
++
++
++
+ menu "Caches"
+ 
+ source "fs/netfs/Kconfig"
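For orientation, the two-step flow that shiftfs_fill_super() implements can
be sketched from userspace with plain mount(2) calls. This is an
illustration under stated assumptions: the paths, the passthrough value,
and the surrounding user-namespace setup are invented for the example and
are not part of the patch.

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Step 1, done by a task that is root wrt the lower sb's
	 * user namespace: mark the directory as overlayable. */
	if (mount("/var/lib/lower", "/var/lib/lower", "shiftfs",
		  0, "mark") < 0)
		perror("mark mount");

	/* Step 2, done inside the unprivileged user namespace: mount
	 * the shifted view on top of the mark mount. */
	if (mount("/var/lib/lower", "/mnt", "shiftfs",
		  0, "passthrough=3") < 0)
		perror("shiftfs mount");

	return 0;
}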


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-10-27 11:56 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-10-27 11:56 UTC (permalink / raw
  To: gentoo-commits

commit:     2661807959a80b1a8fe0501567b46f369fc44c98
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 27 11:56:26 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 27 11:56:26 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=26618079

Linux patch 5.14.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-5.14.15.patch | 6672 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6676 insertions(+)

diff --git a/0000_README b/0000_README
index 5ebc554..ea788cb 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1013_linux-5.14.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.14
 
+Patch:  1014_linux-5.14.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.14.15.patch b/1014_linux-5.14.15.patch
new file mode 100644
index 0000000..5d7b8f7
--- /dev/null
+++ b/1014_linux-5.14.15.patch
@@ -0,0 +1,6672 @@
+diff --git a/Documentation/networking/devlink/ice.rst b/Documentation/networking/devlink/ice.rst
+index a432dc419fa40..5d97cee9457be 100644
+--- a/Documentation/networking/devlink/ice.rst
++++ b/Documentation/networking/devlink/ice.rst
+@@ -30,10 +30,11 @@ The ``ice`` driver reports the following versions
+         PHY, link, etc.
+     * - ``fw.mgmt.api``
+       - running
+-      - 1.5
+-      - 2-digit version number of the API exported over the AdminQ by the
+-        management firmware. Used by the driver to identify what commands
+-        are supported.
++      - 1.5.1
++      - 3-digit version number (major.minor.patch) of the API exported over
++        the AdminQ by the management firmware. Used by the driver to
++        identify what commands are supported. Historical versions of the
++        kernel only displayed a 2-digit version number (major.minor).
+     * - ``fw.mgmt.build``
+       - running
+       - 0x305d955f
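Because fw.mgmt.api may now carry either two or three fields, consumers
that compare versions should parse tolerantly. A small sketch, assuming
illustrative version strings:

#include <stdio.h>

int main(void)
{
	const char *versions[] = { "1.5", "1.5.1" };

	for (int i = 0; i < 2; i++) {
		int major = 0, minor = 0, patch = 0;
		int n = sscanf(versions[i], "%d.%d.%d",
			       &major, &minor, &patch);

		/* n is 2 on historical kernels and 3 on current ones;
		 * patch simply remains 0 when the field is absent. */
		printf("%s -> %d.%d.%d (%d fields)\n",
		       versions[i], major, minor, patch, n);
	}
	return 0;
}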
+diff --git a/Makefile b/Makefile
+index f05668e1ffaba..e66341fba8a4e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 2fb7012c32463..110b305af27f1 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -93,6 +93,7 @@ config ARM
+ 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+ 	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
+ 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
++	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
+ 	select HAVE_IRQ_TIME_ACCOUNTING
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index 8034e5dacc808..949df688c5f18 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -71,7 +71,6 @@
+ 			isc: isc@f0008000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&pinctrl_isc_base &pinctrl_isc_data_8bit &pinctrl_isc_data_9_10 &pinctrl_isc_data_11_12>;
+-				status = "okay";
+ 			};
+ 
+ 			qspi1: spi@f0024000 {
+diff --git a/arch/arm/boot/dts/spear3xx.dtsi b/arch/arm/boot/dts/spear3xx.dtsi
+index f266b7b034823..cc88ebe7a60ce 100644
+--- a/arch/arm/boot/dts/spear3xx.dtsi
++++ b/arch/arm/boot/dts/spear3xx.dtsi
+@@ -47,7 +47,7 @@
+ 		};
+ 
+ 		gmac: eth@e0800000 {
+-			compatible = "st,spear600-gmac";
++			compatible = "snps,dwmac-3.40a";
+ 			reg = <0xe0800000 0x8000>;
+ 			interrupts = <23 22>;
+ 			interrupt-names = "macirq", "eth_wake_irq";
+diff --git a/arch/arm/boot/dts/vexpress-v2m.dtsi b/arch/arm/boot/dts/vexpress-v2m.dtsi
+index ec13ceb9ed362..79ba83d1f620c 100644
+--- a/arch/arm/boot/dts/vexpress-v2m.dtsi
++++ b/arch/arm/boot/dts/vexpress-v2m.dtsi
+@@ -19,7 +19,7 @@
+  */
+ 
+ / {
+-	bus@4000000 {
++	bus@40000000 {
+ 		motherboard {
+ 			model = "V2M-P1";
+ 			arm,hbi = <0x190>;
+diff --git a/arch/arm/boot/dts/vexpress-v2p-ca9.dts b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
+index 4c58479558562..1317f0f58d53d 100644
+--- a/arch/arm/boot/dts/vexpress-v2p-ca9.dts
++++ b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
+@@ -295,7 +295,7 @@
+ 		};
+ 	};
+ 
+-	smb: bus@4000000 {
++	smb: bus@40000000 {
+ 		compatible = "simple-bus";
+ 
+ 		#address-cells = <2>;
+diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+index fb0f523d14921..0a048dc06a7d7 100644
+--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
++++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+@@ -24,6 +24,7 @@ struct hyp_pool {
+ 
+ /* Allocation */
+ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
++void hyp_split_page(struct hyp_page *page);
+ void hyp_get_page(struct hyp_pool *pool, void *addr);
+ void hyp_put_page(struct hyp_pool *pool, void *addr);
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+index a6ce991b14679..b79ce0059e7b7 100644
+--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
++++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+@@ -35,7 +35,18 @@ static const u8 pkvm_hyp_id = 1;
+ 
+ static void *host_s2_zalloc_pages_exact(size_t size)
+ {
+-	return hyp_alloc_pages(&host_s2_pool, get_order(size));
++	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
++
++	hyp_split_page(hyp_virt_to_page(addr));
++
++	/*
++	 * The size of concatenated PGDs is always a power of two of PAGE_SIZE,
++	 * so there should be no need to free any of the tail pages to make the
++	 * allocation exact.
++	 */
++	WARN_ON(size != (PAGE_SIZE << get_order(size)));
++
++	return addr;
+ }
+ 
+ static void *host_s2_zalloc_page(void *pool)
+diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+index 41fc25bdfb346..a6e874e61a40e 100644
+--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
++++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+@@ -193,6 +193,20 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
+ 	hyp_spin_unlock(&pool->lock);
+ }
+ 
++void hyp_split_page(struct hyp_page *p)
++{
++	unsigned short order = p->order;
++	unsigned int i;
++
++	p->order = 0;
++	for (i = 1; i < (1 << order); i++) {
++		struct hyp_page *tail = p + i;
++
++		tail->order = 0;
++		hyp_set_page_refcounted(tail);
++	}
++}
++
+ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+ {
+ 	unsigned short i = order;
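hyp_split_page() turns one order-n block into 2^n independently refcounted
order-0 pages, which is what lets the exact-size allocator above hand out
and free single pages. A rough userspace model of that bookkeeping, using a
deliberately simplified page struct rather than the hypervisor's:

#include <stdio.h>

struct page {
	unsigned short order;
	unsigned short refcount;
};

/* Split a high-order head page: every page in the block becomes an
 * order-0 page holding its own reference. */
static void split_page(struct page *p)
{
	unsigned short order = p->order;

	p->order = 0;
	for (unsigned int i = 1; i < (1u << order); i++) {
		p[i].order = 0;
		p[i].refcount = 1;
	}
}

int main(void)
{
	struct page block[4] = { { .order = 2, .refcount = 1 } };

	split_page(block);
	for (int i = 0; i < 4; i++)
		printf("page %d: order %u ref %u\n",
		       i, block[i].order, block[i].refcount);
	return 0;
}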
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index 0625bf2353c22..3fcdacfee5799 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1477,8 +1477,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 		 * when updating the PG_mte_tagged page flag, see
+ 		 * sanitise_mte_tags for more details.
+ 		 */
+-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
+-			return -EINVAL;
++		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
++			ret = -EINVAL;
++			break;
++		}
+ 
+ 		if (vma->vm_flags & VM_PFNMAP) {
+ 			/* IO region dirty page logging not allowed */
+diff --git a/arch/nios2/include/asm/irqflags.h b/arch/nios2/include/asm/irqflags.h
+index b3ec3e510706d..25acf27862f91 100644
+--- a/arch/nios2/include/asm/irqflags.h
++++ b/arch/nios2/include/asm/irqflags.h
+@@ -9,7 +9,7 @@
+ 
+ static inline unsigned long arch_local_save_flags(void)
+ {
+-	return RDCTL(CTL_STATUS);
++	return RDCTL(CTL_FSTATUS);
+ }
+ 
+ /*
+@@ -18,7 +18,7 @@ static inline unsigned long arch_local_save_flags(void)
+  */
+ static inline void arch_local_irq_restore(unsigned long flags)
+ {
+-	WRCTL(CTL_STATUS, flags);
++	WRCTL(CTL_FSTATUS, flags);
+ }
+ 
+ static inline void arch_local_irq_disable(void)
+diff --git a/arch/nios2/include/asm/registers.h b/arch/nios2/include/asm/registers.h
+index 183c720e454d9..95b67dd16f818 100644
+--- a/arch/nios2/include/asm/registers.h
++++ b/arch/nios2/include/asm/registers.h
+@@ -11,7 +11,7 @@
+ #endif
+ 
+ /* control register numbers */
+-#define CTL_STATUS	0
++#define CTL_FSTATUS	0
+ #define CTL_ESTATUS	1
+ #define CTL_BSTATUS	2
+ #define CTL_IENABLE	3
+diff --git a/arch/parisc/math-emu/fpudispatch.c b/arch/parisc/math-emu/fpudispatch.c
+index 7c46969ead9b1..01ed133227c25 100644
+--- a/arch/parisc/math-emu/fpudispatch.c
++++ b/arch/parisc/math-emu/fpudispatch.c
+@@ -310,12 +310,15 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1];
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 3: /* FABS */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -325,13 +328,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and clear sign bit */
+ 					fpregs[t] = fpregs[r1] & 0x7fffffff;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 6: /* FNEG */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -341,13 +347,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and invert sign bit */
+ 					fpregs[t] = fpregs[r1] ^ 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 7: /* FNEGABS */
+ 				switch (fmt) {
+ 				    case 2: /* illegal */
+@@ -357,13 +366,16 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					r1 &= ~3;
+ 					fpregs[t+3] = fpregs[r1+3];
+ 					fpregs[t+2] = fpregs[r1+2];
++					fallthrough;
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					/* copy and set sign bit */
+ 					fpregs[t] = fpregs[r1] | 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 4: /* FSQRT */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -376,6 +388,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 5: /* FRND */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -389,7 +402,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(MAJOR_0C_EXCP);
+ 				}
+ 		} /* end of switch (subop) */
+-
++		BUG();
+ 	case 1: /* class 1 */
+ 		df = extru(ir,fpdfpos,2); /* get dest format */
+ 		if ((df & 2) || (fmt & 2)) {
+@@ -419,6 +432,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* dbl/dbl */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FCNVXF */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -434,6 +448,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvxf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FCNVFX */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -449,6 +464,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfx(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 3: /* FCNVFXT */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -464,6 +480,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfxt(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 5: /* FCNVUF (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -479,6 +496,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvuf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 6: /* FCNVFU (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -494,6 +512,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfu(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 7: /* FCNVFUT (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -509,10 +528,11 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 					return(dbl_to_dbl_fcnvfut(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* undefined */
+ 				return(MAJOR_0C_EXCP);
+ 		} /* end of switch subop */
+-
++		BUG();
+ 	case 2: /* class 2 */
+ 		fpu_type_flags=fpregs[FPU_TYPE_FLAG_POS];
+ 		r2 = extru(ir, fpr2pos, 5) * sizeof(double)/sizeof(u_int);
+@@ -590,6 +610,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FTEST */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -609,8 +630,10 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3:
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 		    } /* end of switch subop */
+ 		} /* end of else for PA1.0 & PA1.1 */
++		BUG();
+ 	case 3: /* class 3 */
+ 		r2 = extru(ir,fpr2pos,5) * sizeof(double)/sizeof(u_int);
+ 		if (r2 == 0)
+@@ -633,6 +656,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FSUB */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -645,6 +669,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 2: /* FMPY */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -657,6 +682,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 3: /* FDIV */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -669,6 +695,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 			case 4: /* FREM */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -681,6 +708,7 @@ decode_0c(u_int ir, u_int class, u_int subop, u_int fpregs[])
+ 				    case 3: /* quad not implemented */
+ 					return(MAJOR_0C_EXCP);
+ 				}
++				BUG();
+ 		} /* end of class 3 switch */
+ 	} /* end of switch(class) */
+ 
+@@ -736,10 +764,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1];
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 3: /* FABS */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -747,10 +777,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] & 0x7fffffff;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 6: /* FNEG */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -758,10 +790,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] ^ 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 7: /* FNEGABS */
+ 				switch (fmt) {
+ 				    case 2:
+@@ -769,10 +803,12 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				    case 1: /* double */
+ 					fpregs[t+1] = fpregs[r1+1];
++					fallthrough;
+ 				    case 0: /* single */
+ 					fpregs[t] = fpregs[r1] | 0x80000000;
+ 					return(NOEXCEPTION);
+ 				}
++				BUG();
+ 			case 4: /* FSQRT */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -785,6 +821,7 @@ u_int fpregs[];
+ 				    case 3:
+ 					return(MAJOR_0E_EXCP);
+ 				}
++				BUG();
+ 			case 5: /* FRMD */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -798,7 +835,7 @@ u_int fpregs[];
+ 					return(MAJOR_0E_EXCP);
+ 				}
+ 		} /* end of switch (subop */
+-	
++		BUG();
+ 	case 1: /* class 1 */
+ 		df = extru(ir,fpdfpos,2); /* get dest format */
+ 		/*
+@@ -826,6 +863,7 @@ u_int fpregs[];
+ 				    case 3: /* dbl/dbl */
+ 					return(MAJOR_0E_EXCP);
+ 				}
++				BUG();
+ 			case 1: /* FCNVXF */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -841,6 +879,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvxf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FCNVFX */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -856,6 +895,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfx(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 3: /* FCNVFXT */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -871,6 +911,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfxt(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 5: /* FCNVUF (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -886,6 +927,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvuf(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 6: /* FCNVFU (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -901,6 +943,7 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfu(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 7: /* FCNVFUT (PA2.0 only) */
+ 				switch(fmt) {
+ 				    case 0: /* sgl/sgl */
+@@ -916,9 +959,11 @@ u_int fpregs[];
+ 					return(dbl_to_dbl_fcnvfut(&fpregs[r1],0,
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* undefined */
+ 				return(MAJOR_0C_EXCP);
+ 		} /* end of switch subop */
++		BUG();
+ 	case 2: /* class 2 */
+ 		/*
+ 		 * Be careful out there.
+@@ -994,6 +1039,7 @@ u_int fpregs[];
+ 				}
+ 		    } /* end of switch subop */
+ 		} /* end of else for PA1.0 & PA1.1 */
++		BUG();
+ 	case 3: /* class 3 */
+ 		/*
+ 		 * Be careful out there.
+@@ -1026,6 +1072,7 @@ u_int fpregs[];
+ 					return(dbl_fadd(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 1: /* FSUB */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -1035,6 +1082,7 @@ u_int fpregs[];
+ 					return(dbl_fsub(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 2: /* FMPY or XMPYU */
+ 				/*
+ 				 * check for integer multiply (x bit set)
+@@ -1071,6 +1119,7 @@ u_int fpregs[];
+ 					       &fpregs[r2],&fpregs[t],status));
+ 				    }
+ 				}
++				BUG();
+ 			case 3: /* FDIV */
+ 				switch (fmt) {
+ 				    case 0:
+@@ -1080,6 +1129,7 @@ u_int fpregs[];
+ 					return(dbl_fdiv(&fpregs[r1],&fpregs[r2],
+ 						&fpregs[t],status));
+ 				}
++				BUG();
+ 			case 4: /* FREM */
+ 				switch (fmt) {
+ 				    case 0:
+diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
+index a95f63788c6b1..4ba834599c4d4 100644
+--- a/arch/powerpc/include/asm/code-patching.h
++++ b/arch/powerpc/include/asm/code-patching.h
+@@ -23,6 +23,7 @@
+ #define BRANCH_ABSOLUTE	0x2
+ 
+ bool is_offset_in_branch_range(long offset);
++bool is_offset_in_cond_branch_range(long offset);
+ int create_branch(struct ppc_inst *instr, const u32 *addr,
+ 		  unsigned long target, int flags);
+ int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
+diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
+index 792eefaf230b8..27574f218b371 100644
+--- a/arch/powerpc/include/asm/security_features.h
++++ b/arch/powerpc/include/asm/security_features.h
+@@ -39,6 +39,11 @@ static inline bool security_ftr_enabled(u64 feature)
+ 	return !!(powerpc_security_features & feature);
+ }
+ 
++#ifdef CONFIG_PPC_BOOK3S_64
++enum stf_barrier_type stf_barrier_type_get(void);
++#else
++static inline enum stf_barrier_type stf_barrier_type_get(void) { return STF_BARRIER_NONE; }
++#endif
+ 
+ // Features indicating support for Spectre/Meltdown mitigations
+ 
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index abb719b21cae7..3d97fb833834d 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -126,14 +126,16 @@ _GLOBAL(idle_return_gpr_loss)
+ /*
+  * This is the sequence required to execute idle instructions, as
+  * specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
+- *
+- * The 0(r1) slot is used to save r2 in isa206, so use that here.
++ * We have to store a GPR somewhere, ptesync, then reload it, and create
++ * a false dependency on the result of the load. It doesn't matter which
++ * GPR we store, or where we store it. We have already stored r2 to the
++ * stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
+  */
+ #define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST)			\
+ 	/* Magic NAP/SLEEP/WINKLE mode enter sequence */	\
+-	std	r2,0(r1);					\
++	std	r2,-8(r1);					\
+ 	ptesync;						\
+-	ld	r2,0(r1);					\
++	ld	r2,-8(r1);					\
+ 236:	cmpd	cr0,r2,r2;					\
+ 	bne	236b;						\
+ 	IDLE_INST;						\
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index cc51fa52e7831..e723ff77cc9be 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -263,6 +263,11 @@ static int __init handle_no_stf_barrier(char *p)
+ 
+ early_param("no_stf_barrier", handle_no_stf_barrier);
+ 
++enum stf_barrier_type stf_barrier_type_get(void)
++{
++	return stf_enabled_flush_types;
++}
++
+ /* This is the generic flag used by other architectures */
+ static int __init handle_ssbd(char *p)
+ {
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 12c75b95646a5..3c5eb9dc101b2 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1703,8 +1703,6 @@ void __cpu_die(unsigned int cpu)
+ 
+ void arch_cpu_idle_dead(void)
+ {
+-	sched_preempt_enable_no_resched();
+-
+ 	/*
+ 	 * Disable on the down path. This will be re-enabled by
+ 	 * start_secondary() via start_secondary_resume() below
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index dd18e1c447512..bbcc82b828da8 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -255,13 +255,16 @@ kvm_novcpu_exit:
+  * r3 contains the SRR1 wakeup value, SRR1 is trashed.
+  */
+ _GLOBAL(idle_kvm_start_guest)
+-	ld	r4,PACAEMERGSP(r13)
+ 	mfcr	r5
+ 	mflr	r0
+-	std	r1,0(r4)
+-	std	r5,8(r4)
+-	std	r0,16(r4)
+-	subi	r1,r4,STACK_FRAME_OVERHEAD
++	std	r5, 8(r1)	// Save CR in caller's frame
++	std	r0, 16(r1)	// Save LR in caller's frame
++	// Create frame on emergency stack
++	ld	r4, PACAEMERGSP(r13)
++	stdu	r1, -SWITCH_FRAME_SIZE(r4)
++	// Switch to new frame on emergency stack
++	mr	r1, r4
++	std	r3, 32(r1)	// Save SRR1 wakeup value
+ 	SAVE_NVGPRS(r1)
+ 
+ 	/*
+@@ -313,6 +316,10 @@ kvm_unsplit_wakeup:
+ 
+ kvm_secondary_got_guest:
+ 
++	// About to go to guest, clear saved SRR1
++	li	r0, 0
++	std	r0, 32(r1)
++
+ 	/* Set HSTATE_DSCR(r13) to something sensible */
+ 	ld	r6, PACA_DSCR_DEFAULT(r13)
+ 	std	r6, HSTATE_DSCR(r13)
+@@ -392,13 +399,12 @@ kvm_no_guest:
+ 	mfspr	r4, SPRN_LPCR
+ 	rlwimi	r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
+ 	mtspr	SPRN_LPCR, r4
+-	/* set up r3 for return */
+-	mfspr	r3,SPRN_SRR1
++	// Return SRR1 wakeup value, or 0 if we went into the guest
++	ld	r3, 32(r1)
+ 	REST_NVGPRS(r1)
+-	addi	r1, r1, STACK_FRAME_OVERHEAD
+-	ld	r0, 16(r1)
+-	ld	r5, 8(r1)
+-	ld	r1, 0(r1)
++	ld	r1, 0(r1)	// Switch back to caller stack
++	ld	r0, 16(r1)	// Reload LR
++	ld	r5, 8(r1)	// Reload CR
+ 	mtlr	r0
+ 	mtcr	r5
+ 	blr
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index f9a3019e37b43..c5ed988238352 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -228,6 +228,11 @@ bool is_offset_in_branch_range(long offset)
+ 	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+ }
+ 
++bool is_offset_in_cond_branch_range(long offset)
++{
++	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
++}
++
+ /*
+  * Helper to check if a given instruction is a conditional branch
+  * Derived from the conditional checks in analyse_instr()
+@@ -280,7 +285,7 @@ int create_cond_branch(struct ppc_inst *instr, const u32 *addr,
+ 		offset = offset - (unsigned long)addr;
+ 
+ 	/* Check we can represent the target in the instruction format */
+-	if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
++	if (!is_offset_in_cond_branch_range(offset))
+ 		return 1;
+ 
+ 	/* Mask out the flags and target, so they don't step on each other. */
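The new predicate mirrors the hardware encoding limits: an unconditional
I-form branch carries a 26-bit signed, word-aligned displacement, while a
conditional B-form branch carries only 16 bits. A self-contained sketch of
both checks, with sample offsets chosen purely for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Mirrors is_offset_in_branch_range(): 26-bit signed, word aligned. */
static bool in_branch_range(long off)
{
	return off >= -0x2000000 && off <= 0x1fffffc && !(off & 0x3);
}

/* Mirrors is_offset_in_cond_branch_range(): 16-bit signed, word aligned. */
static bool in_cond_branch_range(long off)
{
	return off >= -0x8000 && off <= 0x7fff && !(off & 0x3);
}

int main(void)
{
	long offs[] = { 0x7ffc, 0x8000, 0x2000000, -0x2000000 };

	for (int i = 0; i < 4; i++)
		printf("offset %ld: b %d, bc %d\n", offs[i],
		       in_branch_range(offs[i]),
		       in_cond_branch_range(offs[i]));
	return 0;
}

This is also why PPC_BCC in bpf_jit.h pads the short form with a NOP: the
choice between short and long form must not change code size between JIT
passes, or the recorded instruction indexes would go stale.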
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index 99fad093f43ec..7e9b978b768ed 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -24,16 +24,30 @@
+ #define EMIT(instr)		PLANT_INSTR(image, ctx->idx, instr)
+ 
+ /* Long jump; (unconditional 'branch') */
+-#define PPC_JMP(dest)		EMIT(PPC_INST_BRANCH |			      \
+-				     (((dest) - (ctx->idx * 4)) & 0x03fffffc))
++#define PPC_JMP(dest)							      \
++	do {								      \
++		long offset = (long)(dest) - (ctx->idx * 4);		      \
++		if (!is_offset_in_branch_range(offset)) {		      \
++			pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);			\
++			return -ERANGE;					      \
++		}							      \
++		EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));		      \
++	} while (0)
++
+ /* blr; (unconditional 'branch' with link) to absolute address */
+ #define PPC_BL_ABS(dest)	EMIT(PPC_INST_BL |			      \
+ 				     (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc))
+ /* "cond" here covers BO:BI fields. */
+-#define PPC_BCC_SHORT(cond, dest)	EMIT(PPC_INST_BRANCH_COND |	      \
+-					     (((cond) & 0x3ff) << 16) |	      \
+-					     (((dest) - (ctx->idx * 4)) &     \
+-					      0xfffc))
++#define PPC_BCC_SHORT(cond, dest)					      \
++	do {								      \
++		long offset = (long)(dest) - (ctx->idx * 4);		      \
++		if (!is_offset_in_cond_branch_range(offset)) {		      \
++			pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);		\
++			return -ERANGE;					      \
++		}							      \
++		EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));					\
++	} while (0)
++
+ /* Sign-extended 32-bit immediate load */
+ #define PPC_LI32(d, i)		do {					      \
+ 		if ((int)(uintptr_t)(i) >= -32768 &&			      \
+@@ -78,11 +92,6 @@
+ #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
+ #endif
+ 
+-static inline bool is_nearbranch(int offset)
+-{
+-	return (offset < 32768) && (offset >= -32768);
+-}
+-
+ /*
+  * The fly in the ointment of code size changing from pass to pass is
+  * avoided by padding the short branch case with a NOP.	 If code size differs
+@@ -91,7 +100,7 @@ static inline bool is_nearbranch(int offset)
+  * state.
+  */
+ #define PPC_BCC(cond, dest)	do {					      \
+-		if (is_nearbranch((dest) - (ctx->idx * 4))) {		      \
++		if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {	\
+ 			PPC_BCC_SHORT(cond, dest);			      \
+ 			EMIT(PPC_RAW_NOP());				      \
+ 		} else {						      \
+diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
+index 7b713edfa7e26..b63b35e45e558 100644
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -16,18 +16,18 @@
+  * with our redzone usage.
+  *
+  *		[	prev sp		] <-------------
+- *		[   nv gpr save area	] 6*8		|
++ *		[   nv gpr save area	] 5*8		|
+  *		[    tail_call_cnt	] 8		|
+- *		[    local_tmp_var	] 8		|
++ *		[    local_tmp_var	] 16		|
+  * fp (r31) -->	[   ebpf stack space	] upto 512	|
+  *		[     frame header	] 32/112	|
+  * sp (r1) --->	[    stack pointer	] --------------
+  */
+ 
+ /* for gpr non volatile registers BPG_REG_6 to 10 */
+-#define BPF_PPC_STACK_SAVE	(6*8)
++#define BPF_PPC_STACK_SAVE	(5*8)
+ /* for bpf JIT code internal usage */
+-#define BPF_PPC_STACK_LOCALS	16
++#define BPF_PPC_STACK_LOCALS	24
+ /* stack frame excluding BPF stack, ensure this is quadword aligned */
+ #define BPF_PPC_STACKFRAME	(STACK_FRAME_MIN_SIZE + \
+ 				 BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index 53aefee3fe70b..fcbf7a917c566 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -210,7 +210,11 @@ skip_init_ctx:
+ 		/* Now build the prologue, body code & epilogue for real. */
+ 		cgctx.idx = 0;
+ 		bpf_jit_build_prologue(code_base, &cgctx);
+-		bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass);
++		if (bpf_jit_build_body(fp, code_base, &cgctx, addrs, extra_pass)) {
++			bpf_jit_binary_free(bpf_hdr);
++			fp = org_fp;
++			goto out_addrs;
++		}
+ 		bpf_jit_build_epilogue(code_base, &cgctx);
+ 
+ 		if (bpf_jit_enable > 1)
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index a7759aa8043d2..0da31d41d4131 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -200,7 +200,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
+ 	}
+ }
+ 
+-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
++static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+ {
+ 	/*
+ 	 * By now, the eBPF program has already setup parameters in r3-r6
+@@ -261,7 +261,9 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	bpf_jit_emit_common_epilogue(image, ctx);
+ 
+ 	EMIT(PPC_RAW_BCTR());
++
+ 	/* out: */
++	return 0;
+ }
+ 
+ /* Assemble the body code between the prologue & epilogue */
+@@ -1090,7 +1092,9 @@ cond_branch:
+ 		 */
+ 		case BPF_JMP | BPF_TAIL_CALL:
+ 			ctx->seen |= SEEN_TAILCALL;
+-			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			if (ret < 0)
++				return ret;
+ 			break;
+ 
+ 		default:
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index dff4a2930970b..8b5157ccfebae 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -15,6 +15,7 @@
+ #include <linux/if_vlan.h>
+ #include <asm/kprobes.h>
+ #include <linux/bpf.h>
++#include <asm/security_features.h>
+ 
+ #include "bpf_jit64.h"
+ 
+@@ -35,9 +36,9 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
+  *		[	prev sp		] <-------------
+  *		[	  ...       	] 		|
+  * sp (r1) --->	[    stack pointer	] --------------
+- *		[   nv gpr save area	] 6*8
++ *		[   nv gpr save area	] 5*8
+  *		[    tail_call_cnt	] 8
+- *		[    local_tmp_var	] 8
++ *		[    local_tmp_var	] 16
+  *		[   unused red zone	] 208 bytes protected
+  */
+ static int bpf_jit_stack_local(struct codegen_context *ctx)
+@@ -45,12 +46,12 @@ static int bpf_jit_stack_local(struct codegen_context *ctx)
+ 	if (bpf_has_stack_frame(ctx))
+ 		return STACK_FRAME_MIN_SIZE + ctx->stack_size;
+ 	else
+-		return -(BPF_PPC_STACK_SAVE + 16);
++		return -(BPF_PPC_STACK_SAVE + 24);
+ }
+ 
+ static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
+ {
+-	return bpf_jit_stack_local(ctx) + 8;
++	return bpf_jit_stack_local(ctx) + 16;
+ }
+ 
+ static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
+@@ -206,7 +207,7 @@ void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 fun
+ 	EMIT(PPC_RAW_BCTRL());
+ }
+ 
+-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
++static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+ {
+ 	/*
+ 	 * By now, the eBPF program has already setup parameters in r3, r4 and r5
+@@ -267,13 +268,38 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	bpf_jit_emit_common_epilogue(image, ctx);
+ 
+ 	EMIT(PPC_RAW_BCTR());
++
+ 	/* out: */
++	return 0;
+ }
+ 
++/*
++ * We spill into the redzone always, even if the bpf program has its own stackframe.
++ * Offsets hardcoded based on BPF_PPC_STACK_SAVE -- see bpf_jit_stack_local()
++ */
++void bpf_stf_barrier(void);
++
++asm (
++"		.global bpf_stf_barrier		;"
++"	bpf_stf_barrier:			;"
++"		std	21,-64(1)		;"
++"		std	22,-56(1)		;"
++"		sync				;"
++"		ld	21,-64(1)		;"
++"		ld	22,-56(1)		;"
++"		ori	31,31,0			;"
++"		.rept 14			;"
++"		b	1f			;"
++"	1:					;"
++"		.endr				;"
++"		blr				;"
++);
++
+ /* Assemble the body code between the prologue & epilogue */
+ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *ctx,
+ 		       u32 *addrs, bool extra_pass)
+ {
++	enum stf_barrier_type stf_barrier = stf_barrier_type_get();
+ 	const struct bpf_insn *insn = fp->insnsi;
+ 	int flen = fp->len;
+ 	int i, ret;
+@@ -644,6 +670,29 @@ emit_clear:
+ 		 * BPF_ST NOSPEC (speculation barrier)
+ 		 */
+ 		case BPF_ST | BPF_NOSPEC:
++			if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) ||
++					!security_ftr_enabled(SEC_FTR_STF_BARRIER))
++				break;
++
++			switch (stf_barrier) {
++			case STF_BARRIER_EIEIO:
++				EMIT(PPC_RAW_EIEIO() | 0x02000000);
++				break;
++			case STF_BARRIER_SYNC_ORI:
++				EMIT(PPC_RAW_SYNC());
++				EMIT(PPC_RAW_LD(b2p[TMP_REG_1], _R13, 0));
++				EMIT(PPC_RAW_ORI(_R31, _R31, 0));
++				break;
++			case STF_BARRIER_FALLBACK:
++				EMIT(PPC_RAW_MFLR(b2p[TMP_REG_1]));
++				PPC_LI64(12, dereference_kernel_function_descriptor(bpf_stf_barrier));
++				EMIT(PPC_RAW_MTCTR(12));
++				EMIT(PPC_RAW_BCTRL());
++				EMIT(PPC_RAW_MTLR(b2p[TMP_REG_1]));
++				break;
++			case STF_BARRIER_NONE:
++				break;
++			}
+ 			break;
+ 
+ 		/*
+@@ -1006,7 +1055,9 @@ cond_branch:
+ 		 */
+ 		case BPF_JMP | BPF_TAIL_CALL:
+ 			ctx->seen |= SEEN_TAILCALL;
+-			bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++			if (ret < 0)
++				return ret;
+ 			break;
+ 
+ 		default:
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 5509b224c2eca..abe4d45c2f471 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -207,6 +207,8 @@ int zpci_enable_device(struct zpci_dev *);
+ int zpci_disable_device(struct zpci_dev *);
+ int zpci_scan_configured_device(struct zpci_dev *zdev, u32 fh);
+ int zpci_deconfigure_device(struct zpci_dev *zdev);
++void zpci_device_reserved(struct zpci_dev *zdev);
++bool zpci_is_device_configured(struct zpci_dev *zdev);
+ 
+ int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
+ int zpci_unregister_ioat(struct zpci_dev *, u8);
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 34839bad33e4d..d7ef98218c80c 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -92,7 +92,7 @@ void zpci_remove_reserved_devices(void)
+ 	spin_unlock(&zpci_list_lock);
+ 
+ 	list_for_each_entry_safe(zdev, tmp, &remove, entry)
+-		zpci_zdev_put(zdev);
++		zpci_device_reserved(zdev);
+ }
+ 
+ int pci_domain_nr(struct pci_bus *bus)
+@@ -744,6 +744,14 @@ error:
+ 	return ERR_PTR(rc);
+ }
+ 
++bool zpci_is_device_configured(struct zpci_dev *zdev)
++{
++	enum zpci_state state = zdev->state;
++
++	return state != ZPCI_FN_STATE_RESERVED &&
++		state != ZPCI_FN_STATE_STANDBY;
++}
++
+ /**
+  * zpci_scan_configured_device() - Scan a freshly configured zpci_dev
+  * @zdev: The zpci_dev to be configured
+@@ -810,6 +818,31 @@ int zpci_deconfigure_device(struct zpci_dev *zdev)
+ 	return 0;
+ }
+ 
++/**
++ * zpci_device_reserved() - Mark device as reserved
++ * @zdev: the zpci_dev that was reserved
++ *
++ * Handle the case that a given zPCI function was reserved by another system.
++ * After a call to this function the zpci_dev can no longer be found via
++ * get_zdev_by_fid(), but it may still be accessible via existing
++ * references, even though it will no longer be functional.
++ */
++void zpci_device_reserved(struct zpci_dev *zdev)
++{
++	if (zdev->has_hp_slot)
++		zpci_exit_slot(zdev);
++	/*
++	 * Remove device from zpci_list as it is going away. This also
++	 * makes sure we ignore subsequent zPCI events for this device.
++	 */
++	spin_lock(&zpci_list_lock);
++	list_del(&zdev->entry);
++	spin_unlock(&zpci_list_lock);
++	zdev->state = ZPCI_FN_STATE_RESERVED;
++	zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
++	zpci_zdev_put(zdev);
++}
++
+ void zpci_release_device(struct kref *kref)
+ {
+ 	struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+@@ -829,17 +862,20 @@ void zpci_release_device(struct kref *kref)
+ 	case ZPCI_FN_STATE_STANDBY:
+ 		if (zdev->has_hp_slot)
+ 			zpci_exit_slot(zdev);
+-		zpci_cleanup_bus_resources(zdev);
++		spin_lock(&zpci_list_lock);
++		list_del(&zdev->entry);
++		spin_unlock(&zpci_list_lock);
++		zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
++		fallthrough;
++	case ZPCI_FN_STATE_RESERVED:
++		if (zdev->has_resources)
++			zpci_cleanup_bus_resources(zdev);
+ 		zpci_bus_device_unregister(zdev);
+ 		zpci_destroy_iommu(zdev);
+ 		fallthrough;
+ 	default:
+ 		break;
+ 	}
+-
+-	spin_lock(&zpci_list_lock);
+-	list_del(&zdev->entry);
+-	spin_unlock(&zpci_list_lock);
+ 	zpci_dbg(3, "rem fid:%x\n", zdev->fid);
+ 	kfree(zdev);
+ }
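The reserved-state rework leans on the usual reference-counting discipline:
zpci_device_reserved() drops the list's reference, and zpci_release_device()
runs only once the last holder puts the object. A generic userspace model of
that pattern, with hypothetical names rather than the zpci API:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct dev {
	atomic_int ref;
};

static struct dev *dev_get(struct dev *d)
{
	atomic_fetch_add(&d->ref, 1);
	return d;
}

static void dev_put(struct dev *d)
{
	/* The release path runs exactly once, when the count hits 0. */
	if (atomic_fetch_sub(&d->ref, 1) == 1) {
		printf("releasing device\n");
		free(d);
	}
}

int main(void)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (!d)
		return 1;
	atomic_init(&d->ref, 1);	/* the list's reference */
	dev_get(d);			/* some outstanding user */
	dev_put(d);			/* "reserved": list ref dropped */
	dev_put(d);			/* last user gone; release fires */
	return 0;
}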
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index cd447b96b4b1b..9b26617ca1c59 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -137,7 +137,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 			/* The 0x0304 event may immediately reserve the device */
+ 			if (!clp_get_state(zdev->fid, &state) &&
+ 			    state == ZPCI_FN_STATE_RESERVED) {
+-				zpci_zdev_put(zdev);
++				zpci_device_reserved(zdev);
+ 			}
+ 		}
+ 		break;
+@@ -148,7 +148,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ 	case 0x0308: /* Standby -> Reserved */
+ 		if (!zdev)
+ 			break;
+-		zpci_zdev_put(zdev);
++		zpci_device_reserved(zdev);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/arch/sh/include/asm/pgtable-3level.h b/arch/sh/include/asm/pgtable-3level.h
+index 56bf35c2f29c2..cdced80a7ffa3 100644
+--- a/arch/sh/include/asm/pgtable-3level.h
++++ b/arch/sh/include/asm/pgtable-3level.h
+@@ -34,7 +34,7 @@ typedef struct { unsigned long long pmd; } pmd_t;
+ 
+ static inline pmd_t *pud_pgtable(pud_t pud)
+ {
+-	return (pmd_t *)pud_val(pud);
++	return (pmd_t *)(unsigned long)pud_val(pud);
+ }
+ 
+ /* only used by the stubbed out hugetlb gup code, should never be called */
+diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
+index c853b28efa334..96c775abe31ff 100644
+--- a/arch/x86/events/msr.c
++++ b/arch/x86/events/msr.c
+@@ -68,6 +68,7 @@ static bool test_intel(int idx, void *data)
+ 	case INTEL_FAM6_BROADWELL_D:
+ 	case INTEL_FAM6_BROADWELL_G:
+ 	case INTEL_FAM6_BROADWELL_X:
++	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ 
+ 	case INTEL_FAM6_ATOM_SILVERMONT:
+ 	case INTEL_FAM6_ATOM_SILVERMONT_D:
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index af6ce8d4c86a8..471b35d0b121e 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -695,7 +695,8 @@ struct kvm_vcpu_arch {
+ 
+ 	struct kvm_pio_request pio;
+ 	void *pio_data;
+-	void *guest_ins_data;
++	void *sev_pio_data;
++	unsigned sev_pio_count;
+ 
+ 	u8 event_exit_inst_len;
+ 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index c268fb59f7794..6719a8041f594 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4465,10 +4465,10 @@ static void update_pkru_bitmask(struct kvm_mmu *mmu)
+ 	unsigned bit;
+ 	bool wp;
+ 
+-	if (!is_cr4_pke(mmu)) {
+-		mmu->pkru_mask = 0;
++	mmu->pkru_mask = 0;
++
++	if (!is_cr4_pke(mmu))
+ 		return;
+-	}
+ 
+ 	wp = is_cr0_wp(mmu);
+ 
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index cb166bde449bd..9959888cb10c8 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -619,7 +619,12 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
+ 	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+ 	vmsa.address = __sme_pa(svm->vmsa);
+ 	vmsa.len = PAGE_SIZE;
+-	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
++	ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
++	if (ret)
++	  return ret;
++
++	vcpu->arch.guest_state_protected = true;
++	return 0;
+ }
+ 
+ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+@@ -1480,6 +1485,13 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 		goto e_free_trans;
+ 	}
+ 
++	/*
++	 * Flush (on non-coherent CPUs) before RECEIVE_UPDATE_DATA, the PSP
++	 * encrypts the written data with the guest's key, and the cache may
++	 * contain dirty, unencrypted data.
++	 */
++	sev_clflush_pages(guest_page, n);
++
+ 	/* The RECEIVE_UPDATE_DATA command requires C-bit to be always set. */
+ 	data.guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) + offset;
+ 	data.guest_address |= sev_me_mask;
+@@ -2584,7 +2596,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ 		return -EINVAL;
+ 
+ 	return kvm_sev_es_string_io(&svm->vcpu, size, port,
+-				    svm->ghcb_sa, svm->ghcb_sa_len, in);
++				    svm->ghcb_sa, svm->ghcb_sa_len / size, in);
+ }
+ 
+ void sev_es_init_vmcb(struct vcpu_svm *svm)
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index bd0fe94c29207..820f807f8a4fb 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -191,7 +191,7 @@ struct vcpu_svm {
+ 
+ 	/* SEV-ES scratch area support */
+ 	void *ghcb_sa;
+-	u64 ghcb_sa_len;
++	u32 ghcb_sa_len;
+ 	bool ghcb_sa_sync;
+ 	bool ghcb_sa_free;
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 55de1eb135f92..3cb2f4739e324 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6288,18 +6288,13 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+ 
+ 		/*
+ 		 * If we are running L2 and L1 has a new pending interrupt
+-		 * which can be injected, we should re-evaluate
+-		 * what should be done with this new L1 interrupt.
+-		 * If L1 intercepts external-interrupts, we should
+-		 * exit from L2 to L1. Otherwise, interrupt should be
+-		 * delivered directly to L2.
++		 * which can be injected, this may cause a vmexit or it may
++		 * be injected into L2.  Either way, this interrupt will be
++		 * processed via KVM_REQ_EVENT, not RVI, because we do not use
++		 * virtual interrupt delivery to inject L1 interrupts into L2.
+ 		 */
+-		if (is_guest_mode(vcpu) && max_irr_updated) {
+-			if (nested_exit_on_intr(vcpu))
+-				kvm_vcpu_exiting_guest_mode(vcpu);
+-			else
+-				kvm_make_request(KVM_REQ_EVENT, vcpu);
+-		}
++		if (is_guest_mode(vcpu) && max_irr_updated)
++			kvm_make_request(KVM_REQ_EVENT, vcpu);
+ 	} else {
+ 		max_irr = kvm_lapic_find_highest_irr(vcpu);
+ 	}
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4b0e866e9f086..8e9df0e00f3dd 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6907,7 +6907,7 @@ static int kernel_pio(struct kvm_vcpu *vcpu, void *pd)
+ }
+ 
+ static int emulator_pio_in_out(struct kvm_vcpu *vcpu, int size,
+-			       unsigned short port, void *val,
++			       unsigned short port,
+ 			       unsigned int count, bool in)
+ {
+ 	vcpu->arch.pio.port = port;
+@@ -6915,10 +6915,8 @@ static int emulator_pio_in_out(struct kvm_vcpu *vcpu, int size,
+ 	vcpu->arch.pio.count  = count;
+ 	vcpu->arch.pio.size = size;
+ 
+-	if (!kernel_pio(vcpu, vcpu->arch.pio_data)) {
+-		vcpu->arch.pio.count = 0;
++	if (!kernel_pio(vcpu, vcpu->arch.pio_data))
+ 		return 1;
+-	}
+ 
+ 	vcpu->run->exit_reason = KVM_EXIT_IO;
+ 	vcpu->run->io.direction = in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT;
+@@ -6930,26 +6928,39 @@ static int emulator_pio_in_out(struct kvm_vcpu *vcpu, int size,
+ 	return 0;
+ }
+ 
+-static int emulator_pio_in(struct kvm_vcpu *vcpu, int size,
+-			   unsigned short port, void *val, unsigned int count)
++static int __emulator_pio_in(struct kvm_vcpu *vcpu, int size,
++			     unsigned short port, unsigned int count)
+ {
+-	int ret;
++	WARN_ON(vcpu->arch.pio.count);
++	memset(vcpu->arch.pio_data, 0, size * count);
++	return emulator_pio_in_out(vcpu, size, port, count, true);
++}
+ 
+-	if (vcpu->arch.pio.count)
+-		goto data_avail;
++static void complete_emulator_pio_in(struct kvm_vcpu *vcpu, void *val)
++{
++	int size = vcpu->arch.pio.size;
++	unsigned count = vcpu->arch.pio.count;
++	memcpy(val, vcpu->arch.pio_data, size * count);
++	trace_kvm_pio(KVM_PIO_IN, vcpu->arch.pio.port, size, count, vcpu->arch.pio_data);
++	vcpu->arch.pio.count = 0;
++}
+ 
+-	memset(vcpu->arch.pio_data, 0, size * count);
++static int emulator_pio_in(struct kvm_vcpu *vcpu, int size,
++			   unsigned short port, void *val, unsigned int count)
++{
++	if (vcpu->arch.pio.count) {
++		/* Complete previous iteration.  */
++	} else {
++		int r = __emulator_pio_in(vcpu, size, port, count);
++		if (!r)
++			return r;
+ 
+-	ret = emulator_pio_in_out(vcpu, size, port, val, count, true);
+-	if (ret) {
+-data_avail:
+-		memcpy(val, vcpu->arch.pio_data, size * count);
+-		trace_kvm_pio(KVM_PIO_IN, port, size, count, vcpu->arch.pio_data);
+-		vcpu->arch.pio.count = 0;
+-		return 1;
++		/* Results already available, fall through.  */
+ 	}
+ 
+-	return 0;
++	WARN_ON(count != vcpu->arch.pio.count);
++	complete_emulator_pio_in(vcpu, val);
++	return 1;
+ }
+ 
+ static int emulator_pio_in_emulated(struct x86_emulate_ctxt *ctxt,
+@@ -6964,9 +6975,15 @@ static int emulator_pio_out(struct kvm_vcpu *vcpu, int size,
+ 			    unsigned short port, const void *val,
+ 			    unsigned int count)
+ {
++	int ret;
++
+ 	memcpy(vcpu->arch.pio_data, val, size * count);
+ 	trace_kvm_pio(KVM_PIO_OUT, port, size, count, vcpu->arch.pio_data);
+-	return emulator_pio_in_out(vcpu, size, port, (void *)val, count, false);
++	ret = emulator_pio_in_out(vcpu, size, port, count, false);
++	if (ret)
++		vcpu->arch.pio.count = 0;
++
++	return ret;
+ }
+ 
+ static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt,
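
The emulator_pio_in() rework above separates starting a port-in from consuming its result, so the in-kernel fast path and the userspace-completion path share one copy-out and the WARN_ON can assert the counts match. A toy model of that split, with plain C stand-ins for the vcpu fields (not KVM code):

#include <stdio.h>
#include <string.h>

static unsigned pending;          /* stands in for vcpu->arch.pio.count */
static char buffer[64];           /* stands in for vcpu->arch.pio_data  */

static int start_pio_in(unsigned count)       /* ~__emulator_pio_in() */
{
	memset(buffer, 0, sizeof(buffer));
	pending = count;
	/* returning 0 would mean "userspace must finish this" */
	return 1;                 /* emulated entirely in the kernel */
}

static void complete_pio_in(char *dst, unsigned count)
{				      /* ~complete_emulator_pio_in() */
	memcpy(dst, buffer, count);
	pending = 0;              /* consume the pending data exactly once */
}

int main(void)
{
	char out[8];

	if (!pending && start_pio_in(sizeof(out)))
		complete_pio_in(out, sizeof(out));
	printf("pending=%u\n", pending);
	return 0;
}
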
+@@ -9637,14 +9654,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
+ 			break;
+ 
+-                if (unlikely(kvm_vcpu_exit_request(vcpu))) {
++		if (vcpu->arch.apicv_active)
++			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
++
++		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
+ 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
+ 			break;
+ 		}
+-
+-		if (vcpu->arch.apicv_active)
+-			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+-        }
++	}
+ 
+ 	/*
+ 	 * Do this here before restoring debug registers on the host.  And
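
The vcpu_enter_guest() reordering above makes the exit-request check observe interrupts that were still sitting in the PIR: posted interrupts are synced into the IRR first, and only then does the fast-path loop decide whether to bail out. A tiny ordering model (plain C, not KVM):

#include <stdbool.h>
#include <stdio.h>

static bool posted, visible;

static void sync_pir_to_irr(void) { if (posted) { visible = true; posted = false; } }
static bool exit_request(void)    { return visible; }

int main(void)
{
	posted = true;
	sync_pir_to_irr();        /* must run first (the patch's reorder) */
	printf("exit=%d\n", exit_request());
	return 0;
}
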
+@@ -12320,44 +12337,81 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned int bytes,
+ }
+ EXPORT_SYMBOL_GPL(kvm_sev_es_mmio_read);
+ 
+-static int complete_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
++static int kvm_sev_es_outs(struct kvm_vcpu *vcpu, unsigned int size,
++			   unsigned int port);
++
++static int complete_sev_es_emulated_outs(struct kvm_vcpu *vcpu)
+ {
+-	memcpy(vcpu->arch.guest_ins_data, vcpu->arch.pio_data,
+-	       vcpu->arch.pio.count * vcpu->arch.pio.size);
+-	vcpu->arch.pio.count = 0;
++	int size = vcpu->arch.pio.size;
++	int port = vcpu->arch.pio.port;
+ 
++	vcpu->arch.pio.count = 0;
++	if (vcpu->arch.sev_pio_count)
++		return kvm_sev_es_outs(vcpu, size, port);
+ 	return 1;
+ }
+ 
+ static int kvm_sev_es_outs(struct kvm_vcpu *vcpu, unsigned int size,
+-			   unsigned int port, void *data,  unsigned int count)
++			   unsigned int port)
+ {
+-	int ret;
+-
+-	ret = emulator_pio_out_emulated(vcpu->arch.emulate_ctxt, size, port,
+-					data, count);
+-	if (ret)
+-		return ret;
++	for (;;) {
++		unsigned int count =
++			min_t(unsigned int, PAGE_SIZE / size, vcpu->arch.sev_pio_count);
++		int ret = emulator_pio_out(vcpu, size, port, vcpu->arch.sev_pio_data, count);
++
++		/* memcpy done already by emulator_pio_out.  */
++		vcpu->arch.sev_pio_count -= count;
++		vcpu->arch.sev_pio_data += count * vcpu->arch.pio.size;
++		if (!ret)
++			break;
+ 
+-	vcpu->arch.pio.count = 0;
++		/* Emulation done by the kernel.  */
++		if (!vcpu->arch.sev_pio_count)
++			return 1;
++	}
+ 
++	vcpu->arch.complete_userspace_io = complete_sev_es_emulated_outs;
+ 	return 0;
+ }
+ 
+ static int kvm_sev_es_ins(struct kvm_vcpu *vcpu, unsigned int size,
+-			  unsigned int port, void *data, unsigned int count)
++			  unsigned int port);
++
++static void advance_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
+ {
+-	int ret;
++	unsigned count = vcpu->arch.pio.count;
++	complete_emulator_pio_in(vcpu, vcpu->arch.sev_pio_data);
++	vcpu->arch.sev_pio_count -= count;
++	vcpu->arch.sev_pio_data += count * vcpu->arch.pio.size;
++}
+ 
+-	ret = emulator_pio_in_emulated(vcpu->arch.emulate_ctxt, size, port,
+-				       data, count);
+-	if (ret) {
+-		vcpu->arch.pio.count = 0;
+-	} else {
+-		vcpu->arch.guest_ins_data = data;
+-		vcpu->arch.complete_userspace_io = complete_sev_es_emulated_ins;
++static int complete_sev_es_emulated_ins(struct kvm_vcpu *vcpu)
++{
++	int size = vcpu->arch.pio.size;
++	int port = vcpu->arch.pio.port;
++
++	advance_sev_es_emulated_ins(vcpu);
++	if (vcpu->arch.sev_pio_count)
++		return kvm_sev_es_ins(vcpu, size, port);
++	return 1;
++}
++
++static int kvm_sev_es_ins(struct kvm_vcpu *vcpu, unsigned int size,
++			  unsigned int port)
++{
++	for (;;) {
++		unsigned int count =
++			min_t(unsigned int, PAGE_SIZE / size, vcpu->arch.sev_pio_count);
++		if (!__emulator_pio_in(vcpu, size, port, count))
++			break;
++
++		/* Emulation done by the kernel.  */
++		advance_sev_es_emulated_ins(vcpu);
++		if (!vcpu->arch.sev_pio_count)
++			return 1;
+ 	}
+ 
++	vcpu->arch.complete_userspace_io = complete_sev_es_emulated_ins;
+ 	return 0;
+ }
+ 
+@@ -12365,8 +12419,10 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
+ 			 unsigned int port, void *data,  unsigned int count,
+ 			 int in)
+ {
+-	return in ? kvm_sev_es_ins(vcpu, size, port, data, count)
+-		  : kvm_sev_es_outs(vcpu, size, port, data, count);
++	vcpu->arch.sev_pio_data = data;
++	vcpu->arch.sev_pio_count = count;
++	return in ? kvm_sev_es_ins(vcpu, size, port)
++		  : kvm_sev_es_outs(vcpu, size, port);
+ }
+ EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
+ 
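
The kvm_sev_es_ins()/kvm_sev_es_outs() rewrite above drives arbitrarily large string I/O through the single-page pio_data bounce buffer by looping in batches of at most PAGE_SIZE / size elements, parking a completion callback whenever userspace must finish a batch. The batching arithmetic as a self-contained sketch (PAGE_SIZE value assumed, min_u stands in for min_t):

#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int size = 4;             /* bytes per element */
	unsigned int sev_pio_count = 3000; /* elements still to move */

	while (sev_pio_count) {
		unsigned int count = min_u(PAGE_SIZE / size, sev_pio_count);

		/* ...emulator_pio_out()/__emulator_pio_in() would run here... */
		sev_pio_count -= count;
		printf("batch of %u elements, %u left\n", count, sev_pio_count);
	}
	return 0;
}
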
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index c79bd0af2e8c2..f252faf5028f2 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -52,9 +52,6 @@ DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
+ DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
+ EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
+ 
+-enum xen_domain_type xen_domain_type = XEN_NATIVE;
+-EXPORT_SYMBOL_GPL(xen_domain_type);
+-
+ unsigned long *machine_to_phys_mapping = (void *)MACH2PHYS_VIRT_START;
+ EXPORT_SYMBOL(machine_to_phys_mapping);
+ unsigned long  machine_to_phys_nr;
+@@ -69,9 +66,11 @@ __read_mostly int xen_have_vector_callback;
+ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
+ 
+ /*
+- * NB: needs to live in .data because it's used by xen_prepare_pvh which runs
+- * before clearing the bss.
++ * NB: These need to live in .data or alike because they're used by
++ * xen_prepare_pvh() which runs before clearing the bss.
+  */
++enum xen_domain_type __ro_after_init xen_domain_type = XEN_NATIVE;
++EXPORT_SYMBOL_GPL(xen_domain_type);
+ uint32_t xen_start_flags __section(".data") = 0;
+ EXPORT_SYMBOL(xen_start_flags);
+ 
+diff --git a/arch/xtensa/platforms/xtfpga/setup.c b/arch/xtensa/platforms/xtfpga/setup.c
+index 4f7d6142d41fa..538e6748e85a7 100644
+--- a/arch/xtensa/platforms/xtfpga/setup.c
++++ b/arch/xtensa/platforms/xtfpga/setup.c
+@@ -51,8 +51,12 @@ void platform_power_off(void)
+ 
+ void platform_restart(void)
+ {
+-	/* Flush and reset the mmu, simulate a processor reset, and
+-	 * jump to the reset vector. */
++	/* Try software reset first. */
++	WRITE_ONCE(*(u32 *)XTFPGA_SWRST_VADDR, 0xdead);
++
++	/* If software reset did not work, flush and reset the mmu,
++	 * simulate a processor reset, and jump to the reset vector.
++	 */
+ 	cpu_reset();
+ 	/* control never gets here */
+ }
+@@ -66,7 +70,7 @@ void __init platform_calibrate_ccount(void)
+ 
+ #endif
+ 
+-#ifdef CONFIG_OF
++#ifdef CONFIG_USE_OF
+ 
+ static void __init xtfpga_clk_setup(struct device_node *np)
+ {
+@@ -284,4 +288,4 @@ static int __init xtavnet_init(void)
+  */
+ arch_initcall(xtavnet_init);
+ 
+-#endif /* CONFIG_OF */
++#endif /* CONFIG_USE_OF */
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 28e11decbac58..8e4dcf6036f60 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1916,10 +1916,11 @@ void blk_cgroup_bio_start(struct bio *bio)
+ {
+ 	int rwd = blk_cgroup_io_type(bio), cpu;
+ 	struct blkg_iostat_set *bis;
++	unsigned long flags;
+ 
+ 	cpu = get_cpu();
+ 	bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
+-	u64_stats_update_begin(&bis->sync);
++	flags = u64_stats_update_begin_irqsave(&bis->sync);
+ 
+ 	/*
+ 	 * If the bio is flagged with BIO_CGROUP_ACCT it means this is a split
+@@ -1931,7 +1932,7 @@ void blk_cgroup_bio_start(struct bio *bio)
+ 	}
+ 	bis->cur.ios[rwd]++;
+ 
+-	u64_stats_update_end(&bis->sync);
++	u64_stats_update_end_irqrestore(&bis->sync, flags);
+ 	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
+ 		cgroup_rstat_updated(bio->bi_blkg->blkcg->css.cgroup, cpu);
+ 	put_cpu();
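
On the blk-cgroup fix above: u64_stats_update_begin() opens a seqcount write section, and if an interrupt arrives mid-update and re-enters the same path, the writer can deadlock against itself on 32-bit; the _irqsave/_irqrestore variants close that window by disabling interrupts across the update. A toy model of the save/disable/restore shape (plain C, not the kernel API):

#include <stdbool.h>
#include <stdio.h>

static bool irqs_on = true;

static unsigned long save_and_disable(void)
{
	unsigned long flags = irqs_on;
	irqs_on = false;          /* writer cannot be re-entered now */
	return flags;
}

static void restore(unsigned long flags)
{
	irqs_on = flags;
}

int main(void)
{
	unsigned long flags = save_and_disable();
	/* ...update the per-cpu u64 stats here... */
	restore(flags);
	printf("irqs_on=%d\n", irqs_on);
	return 0;
}
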
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 4b66d2776edac..3b38d15723de1 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -129,6 +129,7 @@ static const char *const blk_queue_flag_name[] = {
+ 	QUEUE_FLAG_NAME(PCI_P2PDMA),
+ 	QUEUE_FLAG_NAME(ZONE_RESETALL),
+ 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
++	QUEUE_FLAG_NAME(HCTX_ACTIVE),
+ 	QUEUE_FLAG_NAME(NOWAIT),
+ };
+ #undef QUEUE_FLAG_NAME
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 3c3693c34f061..7f3c3932b723e 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -270,12 +270,6 @@ deadline_move_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
+ 	deadline_remove_request(rq->q, per_prio, rq);
+ }
+ 
+-/* Number of requests queued for a given priority level. */
+-static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
+-{
+-	return dd_sum(dd, inserted, prio) - dd_sum(dd, completed, prio);
+-}
+-
+ /*
+  * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
+  * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
+@@ -953,6 +947,12 @@ static int dd_async_depth_show(void *data, struct seq_file *m)
+ 	return 0;
+ }
+ 
++/* Number of requests queued for a given priority level. */
++static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
++{
++	return dd_sum(dd, inserted, prio) - dd_sum(dd, completed, prio);
++}
++
+ static int dd_queued_show(void *data, struct seq_file *m)
+ {
+ 	struct request_queue *q = data;
+diff --git a/drivers/base/test/Makefile b/drivers/base/test/Makefile
+index 64b2f3d744d51..7f76fee6f989d 100644
+--- a/drivers/base/test/Makefile
++++ b/drivers/base/test/Makefile
+@@ -2,4 +2,4 @@
+ obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE)	+= test_async_driver_probe.o
+ 
+ obj-$(CONFIG_DRIVER_PE_KUNIT_TEST) += property-entry-test.o
+-CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
++CFLAGS_property-entry-test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d60096b3b2c2a..cd8cc7d31b49c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2342,10 +2342,6 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 	if (r)
+ 		goto init_failed;
+ 
+-	r = amdgpu_amdkfd_resume_iommu(adev);
+-	if (r)
+-		goto init_failed;
+-
+ 	r = amdgpu_device_ip_hw_init_phase1(adev);
+ 	if (r)
+ 		goto init_failed;
+@@ -2384,6 +2380,10 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 	if (!adev->gmc.xgmi.pending_reset)
+ 		amdgpu_amdkfd_device_init(adev);
+ 
++	r = amdgpu_amdkfd_resume_iommu(adev);
++	if (r)
++		goto init_failed;
++
+ 	amdgpu_fru_get_product_info(adev);
+ 
+ init_failed:
+diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig
+index 7dffc04a557ea..127667e549c19 100644
+--- a/drivers/gpu/drm/amd/display/Kconfig
++++ b/drivers/gpu/drm/amd/display/Kconfig
+@@ -25,6 +25,8 @@ config DRM_AMD_DC_HDCP
+ 
+ config DRM_AMD_DC_SI
+ 	bool "AMD DC support for Southern Islands ASICs"
++	depends on DRM_AMDGPU_SI
++	depends on DRM_AMD_DC
+ 	default n
+ 	help
+ 	  Choose this option to enable new AMD DC support for SI asics
+diff --git a/drivers/gpu/drm/kmb/kmb_crtc.c b/drivers/gpu/drm/kmb/kmb_crtc.c
+index 44327bc629ca0..06613ffeaaf85 100644
+--- a/drivers/gpu/drm/kmb/kmb_crtc.c
++++ b/drivers/gpu/drm/kmb/kmb_crtc.c
+@@ -66,7 +66,8 @@ static const struct drm_crtc_funcs kmb_crtc_funcs = {
+ 	.disable_vblank = kmb_crtc_disable_vblank,
+ };
+ 
+-static void kmb_crtc_set_mode(struct drm_crtc *crtc)
++static void kmb_crtc_set_mode(struct drm_crtc *crtc,
++			      struct drm_atomic_state *old_state)
+ {
+ 	struct drm_device *dev = crtc->dev;
+ 	struct drm_display_mode *m = &crtc->state->adjusted_mode;
+@@ -75,7 +76,7 @@ static void kmb_crtc_set_mode(struct drm_crtc *crtc)
+ 	unsigned int val = 0;
+ 
+ 	/* Initialize mipi */
+-	kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz);
++	kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz, old_state);
+ 	drm_info(dev,
+ 		 "vfp= %d vbp= %d vsync_len=%d hfp=%d hbp=%d hsync_len=%d\n",
+ 		 m->crtc_vsync_start - m->crtc_vdisplay,
+@@ -138,7 +139,7 @@ static void kmb_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	struct kmb_drm_private *kmb = crtc_to_kmb_priv(crtc);
+ 
+ 	clk_prepare_enable(kmb->kmb_clk.clk_lcd);
+-	kmb_crtc_set_mode(crtc);
++	kmb_crtc_set_mode(crtc, state);
+ 	drm_crtc_vblank_on(crtc);
+ }
+ 
+@@ -185,11 +186,45 @@ static void kmb_crtc_atomic_flush(struct drm_crtc *crtc,
+ 	spin_unlock_irq(&crtc->dev->event_lock);
+ }
+ 
++static enum drm_mode_status
++		kmb_crtc_mode_valid(struct drm_crtc *crtc,
++				    const struct drm_display_mode *mode)
++{
++	int refresh;
++	struct drm_device *dev = crtc->dev;
++	int vfp = mode->vsync_start - mode->vdisplay;
++
++	if (mode->vdisplay < KMB_CRTC_MAX_HEIGHT) {
++		drm_dbg(dev, "height = %d less than %d",
++			mode->vdisplay, KMB_CRTC_MAX_HEIGHT);
++		return MODE_BAD_VVALUE;
++	}
++	if (mode->hdisplay < KMB_CRTC_MAX_WIDTH) {
++		drm_dbg(dev, "width = %d less than %d",
++			mode->hdisplay, KMB_CRTC_MAX_WIDTH);
++		return MODE_BAD_HVALUE;
++	}
++	refresh = drm_mode_vrefresh(mode);
++	if (refresh < KMB_MIN_VREFRESH || refresh > KMB_MAX_VREFRESH) {
++		drm_dbg(dev, "refresh = %d less than %d or greater than %d",
++			refresh, KMB_MIN_VREFRESH, KMB_MAX_VREFRESH);
++		return MODE_BAD;
++	}
++
++	if (vfp < KMB_CRTC_MIN_VFP) {
++		drm_dbg(dev, "vfp = %d less than %d", vfp, KMB_CRTC_MIN_VFP);
++		return MODE_BAD;
++	}
++
++	return MODE_OK;
++}
++
+ static const struct drm_crtc_helper_funcs kmb_crtc_helper_funcs = {
+ 	.atomic_begin = kmb_crtc_atomic_begin,
+ 	.atomic_enable = kmb_crtc_atomic_enable,
+ 	.atomic_disable = kmb_crtc_atomic_disable,
+ 	.atomic_flush = kmb_crtc_atomic_flush,
++	.mode_valid = kmb_crtc_mode_valid,
+ };
+ 
+ int kmb_setup_crtc(struct drm_device *drm)
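
Worth noting on the new .mode_valid hook above: because the driver's MIN and MAX width/height constants are equal, the checks collapse to accepting exactly 1920x1080 at 59-60 Hz with a vertical front porch of at least 4. A standalone sketch of the resulting predicate, constants inlined:

#include <stdbool.h>
#include <stdio.h>

static bool kmb_mode_ok(int w, int h, int refresh, int vfp)
{
	if (h < 1080 || w < 1920)          /* KMB_CRTC_MAX_* double as minima */
		return false;
	if (refresh < 59 || refresh > 60)  /* KMB_MIN/MAX_VREFRESH */
		return false;
	return vfp >= 4;                   /* KMB_CRTC_MIN_VFP */
}

int main(void)
{
	printf("%d %d\n", kmb_mode_ok(1920, 1080, 60, 4),
	       kmb_mode_ok(1280, 720, 60, 4));
	return 0;
}
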
+diff --git a/drivers/gpu/drm/kmb/kmb_drv.c b/drivers/gpu/drm/kmb/kmb_drv.c
+index f54392ec4faba..d3091bf38cc02 100644
+--- a/drivers/gpu/drm/kmb/kmb_drv.c
++++ b/drivers/gpu/drm/kmb/kmb_drv.c
+@@ -173,10 +173,10 @@ static int kmb_setup_mode_config(struct drm_device *drm)
+ 	ret = drmm_mode_config_init(drm);
+ 	if (ret)
+ 		return ret;
+-	drm->mode_config.min_width = KMB_MIN_WIDTH;
+-	drm->mode_config.min_height = KMB_MIN_HEIGHT;
+-	drm->mode_config.max_width = KMB_MAX_WIDTH;
+-	drm->mode_config.max_height = KMB_MAX_HEIGHT;
++	drm->mode_config.min_width = KMB_FB_MIN_WIDTH;
++	drm->mode_config.min_height = KMB_FB_MIN_HEIGHT;
++	drm->mode_config.max_width = KMB_FB_MAX_WIDTH;
++	drm->mode_config.max_height = KMB_FB_MAX_HEIGHT;
+ 	drm->mode_config.funcs = &kmb_mode_config_funcs;
+ 
+ 	ret = kmb_setup_crtc(drm);
+@@ -381,7 +381,7 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev)
+ 		if (val & LAYER3_DMA_FIFO_UNDERFLOW)
+ 			drm_dbg(&kmb->drm,
+ 				"LAYER3:GL1 DMA UNDERFLOW val = 0x%lx", val);
+-		if (val & LAYER3_DMA_FIFO_UNDERFLOW)
++		if (val & LAYER3_DMA_FIFO_OVERFLOW)
+ 			drm_dbg(&kmb->drm,
+ 				"LAYER3:GL1 DMA OVERFLOW val = 0x%lx", val);
+ 	}
+diff --git a/drivers/gpu/drm/kmb/kmb_drv.h b/drivers/gpu/drm/kmb/kmb_drv.h
+index ebbaa5f422d59..bf085e95b28f4 100644
+--- a/drivers/gpu/drm/kmb/kmb_drv.h
++++ b/drivers/gpu/drm/kmb/kmb_drv.h
+@@ -20,6 +20,18 @@
+ #define DRIVER_MAJOR			1
+ #define DRIVER_MINOR			1
+ 
++/* Platform definitions */
++#define KMB_CRTC_MIN_VFP		4
++#define KMB_CRTC_MAX_WIDTH		1920 /* max width in pixels */
++#define KMB_CRTC_MAX_HEIGHT		1080 /* max height in pixels */
++#define KMB_CRTC_MIN_WIDTH		1920
++#define KMB_CRTC_MIN_HEIGHT		1080
++#define KMB_FB_MAX_WIDTH		1920
++#define KMB_FB_MAX_HEIGHT		1080
++#define KMB_FB_MIN_WIDTH		1
++#define KMB_FB_MIN_HEIGHT		1
++#define KMB_MIN_VREFRESH		59    /* vertical refresh in Hz */
++#define KMB_MAX_VREFRESH		60    /* vertical refresh in Hz */
+ #define KMB_LCD_DEFAULT_CLK		200000000
+ #define KMB_SYS_CLK_MHZ			500
+ 
+@@ -45,6 +57,7 @@ struct kmb_drm_private {
+ 	spinlock_t			irq_lock;
+ 	int				irq_lcd;
+ 	int				sys_clk_mhz;
++	struct disp_cfg			init_disp_cfg[KMB_MAX_PLANES];
+ 	struct layer_status		plane_status[KMB_MAX_PLANES];
+ 	int				kmb_under_flow;
+ 	int				kmb_flush_done;
+diff --git a/drivers/gpu/drm/kmb/kmb_dsi.c b/drivers/gpu/drm/kmb/kmb_dsi.c
+index 231041b269f53..756490589e0ad 100644
+--- a/drivers/gpu/drm/kmb/kmb_dsi.c
++++ b/drivers/gpu/drm/kmb/kmb_dsi.c
+@@ -482,6 +482,10 @@ static u32 mipi_tx_fg_section_cfg(struct kmb_dsi *kmb_dsi,
+ 	return 0;
+ }
+ 
++#define CLK_DIFF_LOW 50
++#define CLK_DIFF_HI 60
++#define SYSCLK_500  500
++
+ static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen,
+ 				struct mipi_tx_frame_timing_cfg *fg_cfg)
+ {
+@@ -492,7 +496,12 @@ static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen,
+ 	/* 500 MHz system clock minus 50 to account for the difference in
+ 	 * MIPI clock speed in RTL tests
+ 	 */
+-	sysclk = kmb_dsi->sys_clk_mhz - 50;
++	if (kmb_dsi->sys_clk_mhz == SYSCLK_500) {
++		sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_LOW;
++	} else {
++		/* 700 MHz clock */
++		sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_HI;
++	}
+ 
+ 	/* PPL-Pixel Packing Layer, LLP-Low Level Protocol
+ 	 * Frame generator timing parameters are clocked on the system clock,
+@@ -1322,7 +1331,8 @@ static u32 mipi_tx_init_dphy(struct kmb_dsi *kmb_dsi,
+ 	return 0;
+ }
+ 
+-static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
++static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi,
++				struct drm_atomic_state *old_state)
+ {
+ 	struct regmap *msscam;
+ 
+@@ -1331,7 +1341,7 @@ static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
+ 		dev_dbg(kmb_dsi->dev, "failed to get msscam syscon");
+ 		return;
+ 	}
+-
++	drm_atomic_bridge_chain_enable(adv_bridge, old_state);
+ 	/* DISABLE MIPI->CIF CONNECTION */
+ 	regmap_write(msscam, MSS_MIPI_CIF_CFG, 0);
+ 
+@@ -1342,7 +1352,7 @@ static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi)
+ }
+ 
+ int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
+-		     int sys_clk_mhz)
++		     int sys_clk_mhz, struct drm_atomic_state *old_state)
+ {
+ 	u64 data_rate;
+ 
+@@ -1384,18 +1394,13 @@ int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
+ 		mipi_tx_init_cfg.lane_rate_mbps = data_rate;
+ 	}
+ 
+-	kmb_write_mipi(kmb_dsi, DPHY_ENABLE, 0);
+-	kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL0, 0);
+-	kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL1, 0);
+-	kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL2, 0);
+-
+ 	/* Initialize mipi controller */
+ 	mipi_tx_init_cntrl(kmb_dsi, &mipi_tx_init_cfg);
+ 
+ 	/* Dphy initialization */
+ 	mipi_tx_init_dphy(kmb_dsi, &mipi_tx_init_cfg);
+ 
+-	connect_lcd_to_mipi(kmb_dsi);
++	connect_lcd_to_mipi(kmb_dsi, old_state);
+ 	dev_info(kmb_dsi->dev, "mipi hw initialized");
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/kmb/kmb_dsi.h b/drivers/gpu/drm/kmb/kmb_dsi.h
+index 66b7c500d9bcf..09dc88743d779 100644
+--- a/drivers/gpu/drm/kmb/kmb_dsi.h
++++ b/drivers/gpu/drm/kmb/kmb_dsi.h
+@@ -380,7 +380,7 @@ int kmb_dsi_host_bridge_init(struct device *dev);
+ struct kmb_dsi *kmb_dsi_init(struct platform_device *pdev);
+ void kmb_dsi_host_unregister(struct kmb_dsi *kmb_dsi);
+ int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode,
+-		     int sys_clk_mhz);
++		     int sys_clk_mhz, struct drm_atomic_state *old_state);
+ int kmb_dsi_map_mmio(struct kmb_dsi *kmb_dsi);
+ int kmb_dsi_clk_init(struct kmb_dsi *kmb_dsi);
+ int kmb_dsi_encoder_init(struct drm_device *dev, struct kmb_dsi *kmb_dsi);
+diff --git a/drivers/gpu/drm/kmb/kmb_plane.c b/drivers/gpu/drm/kmb/kmb_plane.c
+index ecee6782612d8..00404ba4126dd 100644
+--- a/drivers/gpu/drm/kmb/kmb_plane.c
++++ b/drivers/gpu/drm/kmb/kmb_plane.c
+@@ -67,8 +67,21 @@ static const u32 kmb_formats_v[] = {
+ 
+ static unsigned int check_pixel_format(struct drm_plane *plane, u32 format)
+ {
++	struct kmb_drm_private *kmb;
++	struct kmb_plane *kmb_plane = to_kmb_plane(plane);
+ 	int i;
++	int plane_id = kmb_plane->id;
++	struct disp_cfg init_disp_cfg;
+ 
++	kmb = to_kmb(plane->dev);
++	init_disp_cfg = kmb->init_disp_cfg[plane_id];
++	/* Due to HW limitations, changing pixel format after initial
++	 * plane configuration is not supported.
++	 */
++	if (init_disp_cfg.format && init_disp_cfg.format != format) {
++		drm_dbg(&kmb->drm, "Cannot change format after initial plane configuration");
++		return -EINVAL;
++	}
+ 	for (i = 0; i < plane->format_count; i++) {
+ 		if (plane->format_types[i] == format)
+ 			return 0;
+@@ -81,11 +94,17 @@ static int kmb_plane_atomic_check(struct drm_plane *plane,
+ {
+ 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
+ 										 plane);
++	struct kmb_drm_private *kmb;
++	struct kmb_plane *kmb_plane = to_kmb_plane(plane);
++	int plane_id = kmb_plane->id;
++	struct disp_cfg init_disp_cfg;
+ 	struct drm_framebuffer *fb;
+ 	int ret;
+ 	struct drm_crtc_state *crtc_state;
+ 	bool can_position;
+ 
++	kmb = to_kmb(plane->dev);
++	init_disp_cfg = kmb->init_disp_cfg[plane_id];
+ 	fb = new_plane_state->fb;
+ 	if (!fb || !new_plane_state->crtc)
+ 		return 0;
+@@ -94,10 +113,21 @@ static int kmb_plane_atomic_check(struct drm_plane *plane,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (new_plane_state->crtc_w > KMB_MAX_WIDTH || new_plane_state->crtc_h > KMB_MAX_HEIGHT)
++	if (new_plane_state->crtc_w > KMB_FB_MAX_WIDTH ||
++	    new_plane_state->crtc_h > KMB_FB_MAX_HEIGHT ||
++	    new_plane_state->crtc_w < KMB_FB_MIN_WIDTH ||
++	    new_plane_state->crtc_h < KMB_FB_MIN_HEIGHT)
+ 		return -EINVAL;
+-	if (new_plane_state->crtc_w < KMB_MIN_WIDTH || new_plane_state->crtc_h < KMB_MIN_HEIGHT)
++
++	/* Due to HW limitations, changing plane height or width after
++	 * initial plane configuration is not supported.
++	 */
++	if ((init_disp_cfg.width && init_disp_cfg.height) &&
++	    (init_disp_cfg.width != fb->width ||
++	    init_disp_cfg.height != fb->height)) {
++		drm_dbg(&kmb->drm, "Cannot change plane height or width after initial configuration");
+ 		return -EINVAL;
++	}
+ 	can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY);
+ 	crtc_state =
+ 		drm_atomic_get_existing_crtc_state(state,
+@@ -277,6 +307,44 @@ static void config_csc(struct kmb_drm_private *kmb, int plane_id)
+ 	kmb_write_lcd(kmb, LCD_LAYERn_CSC_OFF3(plane_id), csc_coef_lcd[11]);
+ }
+ 
++static void kmb_plane_set_alpha(struct kmb_drm_private *kmb,
++				const struct drm_plane_state *state,
++				unsigned char plane_id,
++				unsigned int *val)
++{
++	u16 plane_alpha = state->alpha;
++	u16 pixel_blend_mode = state->pixel_blend_mode;
++	int has_alpha = state->fb->format->has_alpha;
++
++	if (plane_alpha != DRM_BLEND_ALPHA_OPAQUE)
++		*val |= LCD_LAYER_ALPHA_STATIC;
++
++	if (has_alpha) {
++		switch (pixel_blend_mode) {
++		case DRM_MODE_BLEND_PIXEL_NONE:
++			break;
++		case DRM_MODE_BLEND_PREMULTI:
++			*val |= LCD_LAYER_ALPHA_EMBED | LCD_LAYER_ALPHA_PREMULT;
++			break;
++		case DRM_MODE_BLEND_COVERAGE:
++			*val |= LCD_LAYER_ALPHA_EMBED;
++			break;
++		default:
++			DRM_DEBUG("Missing pixel blend mode case (%s == %ld)\n",
++				  __stringify(pixel_blend_mode),
++				  (long)pixel_blend_mode);
++			break;
++		}
++	}
++
++	if (plane_alpha == DRM_BLEND_ALPHA_OPAQUE && !has_alpha) {
++		*val &= LCD_LAYER_ALPHA_DISABLED;
++		return;
++	}
++
++	kmb_write_lcd(kmb, LCD_LAYERn_ALPHA(plane_id), plane_alpha);
++}
++
+ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 				    struct drm_atomic_state *state)
+ {
+@@ -296,6 +364,7 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 	unsigned char plane_id;
+ 	int num_planes;
+ 	static dma_addr_t addr[MAX_SUB_PLANES];
++	struct disp_cfg *init_disp_cfg;
+ 
+ 	if (!plane || !new_plane_state || !old_plane_state)
+ 		return;
+@@ -303,11 +372,12 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 	fb = new_plane_state->fb;
+ 	if (!fb)
+ 		return;
++
+ 	num_planes = fb->format->num_planes;
+ 	kmb_plane = to_kmb_plane(plane);
+-	plane_id = kmb_plane->id;
+ 
+ 	kmb = to_kmb(plane->dev);
++	plane_id = kmb_plane->id;
+ 
+ 	spin_lock_irq(&kmb->irq_lock);
+ 	if (kmb->kmb_under_flow || kmb->kmb_flush_done) {
+@@ -317,7 +387,8 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 	}
+ 	spin_unlock_irq(&kmb->irq_lock);
+ 
+-	src_w = (new_plane_state->src_w >> 16);
++	init_disp_cfg = &kmb->init_disp_cfg[plane_id];
++	src_w = new_plane_state->src_w >> 16;
+ 	src_h = new_plane_state->src_h >> 16;
+ 	crtc_x = new_plane_state->crtc_x;
+ 	crtc_y = new_plane_state->crtc_y;
+@@ -400,20 +471,32 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 		config_csc(kmb, plane_id);
+ 	}
+ 
++	kmb_plane_set_alpha(kmb, plane->state, plane_id, &val);
++
+ 	kmb_write_lcd(kmb, LCD_LAYERn_CFG(plane_id), val);
+ 
++	/* Configure LCD_CONTROL */
++	ctrl = kmb_read_lcd(kmb, LCD_CONTROL);
++
++	/* Set layer blending config */
++	ctrl &= ~LCD_CTRL_ALPHA_ALL;
++	ctrl |= LCD_CTRL_ALPHA_BOTTOM_VL1 |
++		LCD_CTRL_ALPHA_BLEND_VL2;
++
++	ctrl &= ~LCD_CTRL_ALPHA_BLEND_BKGND_DISABLE;
++
+ 	switch (plane_id) {
+ 	case LAYER_0:
+-		ctrl = LCD_CTRL_VL1_ENABLE;
++		ctrl |= LCD_CTRL_VL1_ENABLE;
+ 		break;
+ 	case LAYER_1:
+-		ctrl = LCD_CTRL_VL2_ENABLE;
++		ctrl |= LCD_CTRL_VL2_ENABLE;
+ 		break;
+ 	case LAYER_2:
+-		ctrl = LCD_CTRL_GL1_ENABLE;
++		ctrl |= LCD_CTRL_GL1_ENABLE;
+ 		break;
+ 	case LAYER_3:
+-		ctrl = LCD_CTRL_GL2_ENABLE;
++		ctrl |= LCD_CTRL_GL2_ENABLE;
+ 		break;
+ 	}
+ 
+@@ -425,7 +508,7 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 	 */
+ 	ctrl |= LCD_CTRL_VHSYNC_IDLE_LVL;
+ 
+-	kmb_set_bitmask_lcd(kmb, LCD_CONTROL, ctrl);
++	kmb_write_lcd(kmb, LCD_CONTROL, ctrl);
+ 
+ 	/* Enable pipeline AXI read transactions for the DMA
+ 	 * after setting graphics layers. This must be done
+@@ -448,6 +531,16 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 
+ 	/* Enable DMA */
+ 	kmb_write_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id), dma_cfg);
++
++	/* Save initial display config */
++	if (!init_disp_cfg->width ||
++	    !init_disp_cfg->height ||
++	    !init_disp_cfg->format) {
++		init_disp_cfg->width = width;
++		init_disp_cfg->height = height;
++		init_disp_cfg->format = fb->format->format;
++	}
++
+ 	drm_dbg(&kmb->drm, "dma_cfg=0x%x LCD_DMA_CFG=0x%x\n", dma_cfg,
+ 		kmb_read_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id)));
+ 
+@@ -490,6 +583,9 @@ struct kmb_plane *kmb_plane_init(struct drm_device *drm)
+ 	enum drm_plane_type plane_type;
+ 	const u32 *plane_formats;
+ 	int num_plane_formats;
++	unsigned int blend_caps = BIT(DRM_MODE_BLEND_PIXEL_NONE) |
++				  BIT(DRM_MODE_BLEND_PREMULTI)   |
++				  BIT(DRM_MODE_BLEND_COVERAGE);
+ 
+ 	for (i = 0; i < KMB_MAX_PLANES; i++) {
+ 		plane = drmm_kzalloc(drm, sizeof(*plane), GFP_KERNEL);
+@@ -521,8 +617,16 @@ struct kmb_plane *kmb_plane_init(struct drm_device *drm)
+ 		drm_dbg(drm, "%s : %d i=%d type=%d",
+ 			__func__, __LINE__,
+ 			  i, plane_type);
++		drm_plane_create_alpha_property(&plane->base_plane);
++
++		drm_plane_create_blend_mode_property(&plane->base_plane,
++						     blend_caps);
++
++		drm_plane_create_zpos_immutable_property(&plane->base_plane, i);
++
+ 		drm_plane_helper_add(&plane->base_plane,
+ 				     &kmb_plane_helper_funcs);
++
+ 		if (plane_type == DRM_PLANE_TYPE_PRIMARY) {
+ 			primary = plane;
+ 			kmb->plane = plane;
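
The plane changes above wire the standard DRM alpha, pixel-blend-mode and zpos properties to the LCD_LAYER_ALPHA_* bits. A rough decision-tree sketch of kmb_plane_set_alpha(); the bit values below are placeholders, only the branching mirrors the patch:

#include <stdio.h>

enum blend { PIXEL_NONE, PREMULTI, COVERAGE };

#define ALPHA_STATIC  (1u << 0)   /* hypothetical stand-ins for the  */
#define ALPHA_EMBED   (1u << 1)   /* LCD_LAYER_ALPHA_* register bits */
#define ALPHA_PREMULT (1u << 2)

static unsigned int alpha_bits(unsigned int alpha, enum blend mode,
			       int has_per_pixel_alpha)
{
	unsigned int val = 0;

	if (alpha != 0xffff)               /* DRM_BLEND_ALPHA_OPAQUE */
		val |= ALPHA_STATIC;
	if (has_per_pixel_alpha && mode == PREMULTI)
		val |= ALPHA_EMBED | ALPHA_PREMULT;
	else if (has_per_pixel_alpha && mode == COVERAGE)
		val |= ALPHA_EMBED;
	if (alpha == 0xffff && !has_per_pixel_alpha)
		val = 0;                   /* alpha fully disabled */
	return val;
}

int main(void)
{
	printf("%#x\n", alpha_bits(0x8000, COVERAGE, 1));
	return 0;
}
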
+diff --git a/drivers/gpu/drm/kmb/kmb_plane.h b/drivers/gpu/drm/kmb/kmb_plane.h
+index 486490f7a3ec5..b51144044fe8e 100644
+--- a/drivers/gpu/drm/kmb/kmb_plane.h
++++ b/drivers/gpu/drm/kmb/kmb_plane.h
+@@ -35,6 +35,9 @@
+ #define POSSIBLE_CRTCS 1
+ #define to_kmb_plane(x) container_of(x, struct kmb_plane, base_plane)
+ 
++#define POSSIBLE_CRTCS		1
++#define KMB_MAX_PLANES		2
++
+ enum layer_id {
+ 	LAYER_0,
+ 	LAYER_1,
+@@ -43,8 +46,6 @@ enum layer_id {
+ 	/* KMB_MAX_PLANES */
+ };
+ 
+-#define KMB_MAX_PLANES 1
+-
+ enum sub_plane_id {
+ 	Y_PLANE,
+ 	U_PLANE,
+@@ -62,6 +63,12 @@ struct layer_status {
+ 	u32 ctrl;
+ };
+ 
++struct disp_cfg {
++	unsigned int width;
++	unsigned int height;
++	unsigned int format;
++};
++
+ struct kmb_plane *kmb_plane_init(struct drm_device *drm);
+ void kmb_plane_destroy(struct drm_plane *plane);
+ #endif /* __KMB_PLANE_H__ */
+diff --git a/drivers/gpu/drm/kmb/kmb_regs.h b/drivers/gpu/drm/kmb/kmb_regs.h
+index 48150569f7025..9756101b0d32f 100644
+--- a/drivers/gpu/drm/kmb/kmb_regs.h
++++ b/drivers/gpu/drm/kmb/kmb_regs.h
+@@ -43,8 +43,10 @@
+ #define LCD_CTRL_OUTPUT_ENABLED			  BIT(19)
+ #define LCD_CTRL_BPORCH_ENABLE			  BIT(21)
+ #define LCD_CTRL_FPORCH_ENABLE			  BIT(22)
++#define LCD_CTRL_ALPHA_BLEND_BKGND_DISABLE	  BIT(23)
+ #define LCD_CTRL_PIPELINE_DMA			  BIT(28)
+ #define LCD_CTRL_VHSYNC_IDLE_LVL		  BIT(31)
++#define LCD_CTRL_ALPHA_ALL			  (0xff << 6)
+ 
+ /* interrupts */
+ #define LCD_INT_STATUS				(0x4 * 0x001)
+@@ -115,6 +117,7 @@
+ #define LCD_LAYER_ALPHA_EMBED			BIT(5)
+ #define LCD_LAYER_ALPHA_COMBI			(LCD_LAYER_ALPHA_STATIC | \
+ 						      LCD_LAYER_ALPHA_EMBED)
++#define LCD_LAYER_ALPHA_DISABLED		~(LCD_LAYER_ALPHA_COMBI)
+ /* RGB multiplied with alpha */
+ #define LCD_LAYER_ALPHA_PREMULT			BIT(6)
+ #define LCD_LAYER_INVERT_COL			BIT(7)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index b349692219b77..c95985792076f 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -296,6 +296,8 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ 	u32 val;
+ 	int request, ack;
+ 
++	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
++
+ 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
+ 		return -EINVAL;
+ 
+@@ -337,6 +339,8 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
+ {
+ 	int bit;
+ 
++	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
++
+ 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
+ 		return;
+ 
+@@ -1478,6 +1482,8 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+ 	if (!pdev)
+ 		return -ENODEV;
+ 
++	mutex_init(&gmu->lock);
++
+ 	gmu->dev = &pdev->dev;
+ 
+ 	of_dma_configure(gmu->dev, node, true);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+index 71dfa60070cc0..19c1a0ddee7a0 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+@@ -44,6 +44,9 @@ struct a6xx_gmu_bo {
+ struct a6xx_gmu {
+ 	struct device *dev;
+ 
++	/* For serializing communication with the GMU: */
++	struct mutex lock;
++
+ 	struct msm_gem_address_space *aspace;
+ 
+ 	void __iomem *mmio;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 1b3519b821a3f..91c5c6709b6cd 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -859,7 +859,7 @@ static int a6xx_zap_shader_init(struct msm_gpu *gpu)
+ 	  A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
+ 	  A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
+ 
+-static int a6xx_hw_init(struct msm_gpu *gpu)
++static int hw_init(struct msm_gpu *gpu)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+@@ -1107,6 +1107,19 @@ out:
+ 	return ret;
+ }
+ 
++static int a6xx_hw_init(struct msm_gpu *gpu)
++{
++	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
++	int ret;
++
++	mutex_lock(&a6xx_gpu->gmu.lock);
++	ret = hw_init(gpu);
++	mutex_unlock(&a6xx_gpu->gmu.lock);
++
++	return ret;
++}
++
+ static void a6xx_dump(struct msm_gpu *gpu)
+ {
+ 	DRM_DEV_INFO(&gpu->pdev->dev, "status:   %08x\n",
+@@ -1481,7 +1494,9 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
+ 
+ 	trace_msm_gpu_resume(0);
+ 
++	mutex_lock(&a6xx_gpu->gmu.lock);
+ 	ret = a6xx_gmu_resume(a6xx_gpu);
++	mutex_unlock(&a6xx_gpu->gmu.lock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1504,7 +1519,9 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
+ 
+ 	devfreq_suspend_device(gpu->devfreq.devfreq);
+ 
++	mutex_lock(&a6xx_gpu->gmu.lock);
+ 	ret = a6xx_gmu_stop(a6xx_gpu);
++	mutex_unlock(&a6xx_gpu->gmu.lock);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1519,18 +1536,19 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+-	static DEFINE_MUTEX(perfcounter_oob);
+ 
+-	mutex_lock(&perfcounter_oob);
++	mutex_lock(&a6xx_gpu->gmu.lock);
+ 
+ 	/* Force the GPU power on so we can read this register */
+ 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
+ 
+ 	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
+-		REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
++			    REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+ 
+ 	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
+-	mutex_unlock(&perfcounter_oob);
++
++	mutex_unlock(&a6xx_gpu->gmu.lock);
++
+ 	return 0;
+ }
+ 
+@@ -1594,6 +1612,16 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+ 	return (unsigned long)busy_time;
+ }
+ 
++void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
++{
++	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
++	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
++
++	mutex_lock(&a6xx_gpu->gmu.lock);
++	a6xx_gmu_set_freq(gpu, opp);
++	mutex_unlock(&a6xx_gpu->gmu.lock);
++}
++
+ static struct msm_gem_address_space *
+ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+ {
+@@ -1740,7 +1768,7 @@ static const struct adreno_gpu_funcs funcs = {
+ #endif
+ 		.gpu_busy = a6xx_gpu_busy,
+ 		.gpu_get_freq = a6xx_gmu_get_freq,
+-		.gpu_set_freq = a6xx_gmu_set_freq,
++		.gpu_set_freq = a6xx_gpu_set_freq,
+ #if defined(CONFIG_DRM_MSM_GPU_STATE)
+ 		.gpu_state_get = a6xx_gpu_state_get,
+ 		.gpu_state_put = a6xx_gpu_state_put,
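
The a6xx series above replaces the function-local perfcounter mutex with a single gmu->lock taken by every GMU-touching entry point (hw_init, pm_suspend/resume, set_freq, get_timestamp), while the OOB leaf helpers merely assert it via WARN_ON_ONCE(!mutex_is_locked(...)). A userspace model of that contract:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t gmu_lock = PTHREAD_MUTEX_INITIALIZER;
static int gmu_locked;            /* stand-in for mutex_is_locked() */

static void gmu_set_oob(void)     /* leaf: caller must hold the lock */
{
	assert(gmu_locked);       /* ~WARN_ON_ONCE(!mutex_is_locked()) */
	/* ...vote the GPU power state with the GMU... */
}

static void gpu_hw_init(void)     /* entry point: serializes callers */
{
	pthread_mutex_lock(&gmu_lock);
	gmu_locked = 1;
	gmu_set_oob();
	gmu_locked = 0;
	pthread_mutex_unlock(&gmu_lock);
}

int main(void)
{
	gpu_hw_init();
	puts("GMU access serialized");
	return 0;
}
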
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index c277d3f61a5ef..1c19ca5398e98 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -268,7 +268,11 @@ static void mxsfb_irq_disable(struct drm_device *drm)
+ 	struct mxsfb_drm_private *mxsfb = drm->dev_private;
+ 
+ 	mxsfb_enable_axi_clk(mxsfb);
+-	mxsfb->crtc.funcs->disable_vblank(&mxsfb->crtc);
++
++	/* Disable and clear VBLANK IRQ */
++	writel(CTRL1_CUR_FRAME_DONE_IRQ_EN, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++	writel(CTRL1_CUR_FRAME_DONE_IRQ, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++
+ 	mxsfb_disable_axi_clk(mxsfb);
+ }
+ 
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+index 0145129d7c661..534dd7414d428 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
+@@ -590,14 +590,14 @@ static const struct drm_display_mode k101_im2byl02_default_mode = {
+ 	.clock		= 69700,
+ 
+ 	.hdisplay	= 800,
+-	.hsync_start	= 800 + 6,
+-	.hsync_end	= 800 + 6 + 15,
+-	.htotal		= 800 + 6 + 15 + 16,
++	.hsync_start	= 800 + 52,
++	.hsync_end	= 800 + 52 + 8,
++	.htotal		= 800 + 52 + 8 + 48,
+ 
+ 	.vdisplay	= 1280,
+-	.vsync_start	= 1280 + 8,
+-	.vsync_end	= 1280 + 8 + 48,
+-	.vtotal		= 1280 + 8 + 48 + 52,
++	.vsync_start	= 1280 + 16,
++	.vsync_end	= 1280 + 16 + 6,
++	.vtotal		= 1280 + 16 + 6 + 15,
+ 
+ 	.width_mm	= 135,
+ 	.height_mm	= 217,
+diff --git a/drivers/iio/test/Makefile b/drivers/iio/test/Makefile
+index f1099b4953014..467519a2027e5 100644
+--- a/drivers/iio/test/Makefile
++++ b/drivers/iio/test/Makefile
+@@ -5,3 +5,4 @@
+ 
+ # Keep in alphabetical order
+ obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o
++CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+diff --git a/drivers/input/keyboard/snvs_pwrkey.c b/drivers/input/keyboard/snvs_pwrkey.c
+index 2f5e3ab5ed638..65286762b02ab 100644
+--- a/drivers/input/keyboard/snvs_pwrkey.c
++++ b/drivers/input/keyboard/snvs_pwrkey.c
+@@ -3,6 +3,7 @@
+ // Driver for the IMX SNVS ON/OFF Power Key
+ // Copyright (C) 2015 Freescale Semiconductor, Inc. All Rights Reserved.
+ 
++#include <linux/clk.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+ #include <linux/init.h>
+@@ -99,6 +100,11 @@ static irqreturn_t imx_snvs_pwrkey_interrupt(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void imx_snvs_pwrkey_disable_clk(void *data)
++{
++	clk_disable_unprepare(data);
++}
++
+ static void imx_snvs_pwrkey_act(void *pdata)
+ {
+ 	struct pwrkey_drv_data *pd = pdata;
+@@ -111,6 +117,7 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 	struct pwrkey_drv_data *pdata;
+ 	struct input_dev *input;
+ 	struct device_node *np;
++	struct clk *clk;
+ 	int error;
+ 	u32 vid;
+ 
+@@ -134,6 +141,28 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 		dev_warn(&pdev->dev, "KEY_POWER without setting in dts\n");
+ 	}
+ 
++	clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(clk)) {
++		dev_err(&pdev->dev, "Failed to get snvs clock (%pe)\n", clk);
++		return PTR_ERR(clk);
++	}
++
++	error = clk_prepare_enable(clk);
++	if (error) {
++		dev_err(&pdev->dev, "Failed to enable snvs clock (%pe)\n",
++			ERR_PTR(error));
++		return error;
++	}
++
++	error = devm_add_action_or_reset(&pdev->dev,
++					 imx_snvs_pwrkey_disable_clk, clk);
++	if (error) {
++		dev_err(&pdev->dev,
++			"Failed to register clock cleanup handler (%pe)\n",
++			ERR_PTR(error));
++		return error;
++	}
++
+ 	pdata->wakeup = of_property_read_bool(np, "wakeup-source");
+ 
+ 	pdata->irq = platform_get_irq(pdev, 0);
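
The snvs_pwrkey probe above follows the common managed-clock idiom: devm_clk_get_optional() yields NULL rather than an error when the DT describes no clock, and devm_add_action_or_reset() registers the disable as device-managed cleanup, so every later error path, and eventual removal, undoes the enable without explicit unwinding. A userspace model of the action-or-reset registration (the kernel helper's behavior, not its code):

#include <stdio.h>

static int clk_on;

static void disable_clk(void *data) { (void)data; clk_on = 0; }

typedef void (*action_t)(void *);
static action_t cleanup;
static void *cleanup_arg;

static int add_action_or_reset(action_t fn, void *arg)
{
	/* on registration failure the real helper runs fn(arg) itself */
	cleanup = fn;
	cleanup_arg = arg;
	return 0;
}

int main(void)
{
	clk_on = 1;                        /* ~clk_prepare_enable() */
	if (add_action_or_reset(disable_clk, NULL))
		return 1;                  /* clock already disabled for us */
	/* ...rest of probe; any failure just returns... */
	if (cleanup)
		cleanup(cleanup_arg);      /* device teardown runs actions */
	printf("clk_on=%d\n", clk_on);
	return 0;
}
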
+diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c
+index cb0afe8971623..7313454e403a6 100644
+--- a/drivers/isdn/capi/kcapi.c
++++ b/drivers/isdn/capi/kcapi.c
+@@ -480,6 +480,11 @@ int detach_capi_ctr(struct capi_ctr *ctr)
+ 
+ 	ctr_down(ctr, CAPI_CTR_DETACHED);
+ 
++	if (ctr->cnr < 1 || ctr->cnr - 1 >= CAPI_MAXCONTR) {
++		err = -EINVAL;
++		goto unlock_out;
++	}
++
+ 	if (capi_controller[ctr->cnr - 1] != ctr) {
+ 		err = -EINVAL;
+ 		goto unlock_out;
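
The kcapi fix above is a plain range check: ctr->cnr comes from the controller being detached and is used as a 1-based index into capi_controller[], so an out-of-range value must be rejected before the array access. The same test in standalone form:

#include <errno.h>
#include <stdio.h>

#define CAPI_MAXCONTR 32
static void *capi_controller[CAPI_MAXCONTR];

static int lookup(int cnr)
{
	if (cnr < 1 || cnr - 1 >= CAPI_MAXCONTR)  /* same test as the patch */
		return -EINVAL;
	return capi_controller[cnr - 1] ? 0 : -EINVAL;
}

int main(void)
{
	printf("%d %d\n", lookup(0), lookup(999)); /* both rejected */
	return 0;
}
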
+diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
+index 2a1ddd47a0968..a52f275f82634 100644
+--- a/drivers/isdn/hardware/mISDN/netjet.c
++++ b/drivers/isdn/hardware/mISDN/netjet.c
+@@ -949,8 +949,8 @@ nj_release(struct tiger_hw *card)
+ 		nj_disable_hwirq(card);
+ 		mode_tiger(&card->bc[0], ISDN_P_NONE);
+ 		mode_tiger(&card->bc[1], ISDN_P_NONE);
+-		card->isac.release(&card->isac);
+ 		spin_unlock_irqrestore(&card->lock, flags);
++		card->isac.release(&card->isac);
+ 		release_region(card->base, card->base_s);
+ 		card->base_s = 0;
+ 	}
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 00e4533c8bddc..8999ec9455ec2 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -846,10 +846,12 @@ static int __maybe_unused rcar_can_suspend(struct device *dev)
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+ 	u16 ctlr;
+ 
+-	if (netif_running(ndev)) {
+-		netif_stop_queue(ndev);
+-		netif_device_detach(ndev);
+-	}
++	if (!netif_running(ndev))
++		return 0;
++
++	netif_stop_queue(ndev);
++	netif_device_detach(ndev);
++
+ 	ctlr = readw(&priv->regs->ctlr);
+ 	ctlr |= RCAR_CAN_CTLR_CANM_HALT;
+ 	writew(ctlr, &priv->regs->ctlr);
+@@ -868,6 +870,9 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 	u16 ctlr;
+ 	int err;
+ 
++	if (!netif_running(ndev))
++		return 0;
++
+ 	err = clk_enable(priv->clk);
+ 	if (err) {
+ 		netdev_err(ndev, "clk_enable() failed, error %d\n", err);
+@@ -881,10 +886,9 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 	writew(ctlr, &priv->regs->ctlr);
+ 	priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ 
+-	if (netif_running(ndev)) {
+-		netif_device_attach(ndev);
+-		netif_start_queue(ndev);
+-	}
++	netif_device_attach(ndev);
++	netif_start_queue(ndev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/can/sja1000/peak_pci.c b/drivers/net/can/sja1000/peak_pci.c
+index 84eac8cb86869..15bca07dc7066 100644
+--- a/drivers/net/can/sja1000/peak_pci.c
++++ b/drivers/net/can/sja1000/peak_pci.c
+@@ -729,16 +729,15 @@ static void peak_pci_remove(struct pci_dev *pdev)
+ 		struct net_device *prev_dev = chan->prev_dev;
+ 
+ 		dev_info(&pdev->dev, "removing device %s\n", dev->name);
++		/* do that only for first channel */
++		if (!prev_dev && chan->pciec_card)
++			peak_pciec_remove(chan->pciec_card);
+ 		unregister_sja1000dev(dev);
+ 		free_sja1000dev(dev);
+ 		dev = prev_dev;
+ 
+-		if (!dev) {
+-			/* do that only for first channel */
+-			if (chan->pciec_card)
+-				peak_pciec_remove(chan->pciec_card);
++		if (!dev)
+ 			break;
+-		}
+ 		priv = netdev_priv(dev);
+ 		chan = priv->priv;
+ 	}
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index b11eabad575bb..e206959b3d06c 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -551,11 +551,10 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
+ 	} else if (sm->channel_p_w_b & PUCAN_BUS_WARNING) {
+ 		new_state = CAN_STATE_ERROR_WARNING;
+ 	} else {
+-		/* no error bit (so, no error skb, back to active state) */
+-		dev->can.state = CAN_STATE_ERROR_ACTIVE;
++		/* back to (or still in) ERROR_ACTIVE state */
++		new_state = CAN_STATE_ERROR_ACTIVE;
+ 		pdev->bec.txerr = 0;
+ 		pdev->bec.rxerr = 0;
+-		return 0;
+ 	}
+ 
+ 	/* state hasn't changed */
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 267324889dd64..1b9b7569c371b 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -230,7 +230,7 @@
+ #define GSWIP_SDMA_PCTRLp(p)		(0xBC0 + ((p) * 0x6))
+ #define  GSWIP_SDMA_PCTRL_EN		BIT(0)	/* SDMA Port Enable */
+ #define  GSWIP_SDMA_PCTRL_FCEN		BIT(1)	/* Flow Control Enable */
+-#define  GSWIP_SDMA_PCTRL_PAUFWD	BIT(1)	/* Pause Frame Forwarding */
++#define  GSWIP_SDMA_PCTRL_PAUFWD	BIT(3)	/* Pause Frame Forwarding */
+ 
+ #define GSWIP_TABLE_ACTIVE_VLAN		0x01
+ #define GSWIP_TABLE_VLAN_MAPPING	0x02
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 0cea1572f8260..5f60db08bf80b 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1031,9 +1031,6 @@ mt7530_port_enable(struct dsa_switch *ds, int port,
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
+-		return 0;
+-
+ 	mutex_lock(&priv->reg_mutex);
+ 
+ 	/* Allow the user port to get connected to the cpu port and also
+@@ -1056,9 +1053,6 @@ mt7530_port_disable(struct dsa_switch *ds, int port)
+ {
+ 	struct mt7530_priv *priv = ds->priv;
+ 
+-	if (!dsa_is_user_port(ds, port))
+-		return;
+-
+ 	mutex_lock(&priv->reg_mutex);
+ 
+ 	/* Clear up all port matrix which could be restored in the next
+@@ -3132,7 +3126,7 @@ mt7530_probe(struct mdio_device *mdiodev)
+ 		return -ENOMEM;
+ 
+ 	priv->ds->dev = &mdiodev->dev;
+-	priv->ds->num_ports = DSA_MAX_PORTS;
++	priv->ds->num_ports = MT7530_NUM_PORTS;
+ 
+ 	/* Use the mediatek,mcm property to distinguish hardware types that
+ 	 * cause slight differences in the power-on sequence.
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index ebccaf02411c9..8b618c15984db 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -157,7 +157,7 @@ static const struct {
+ 	{ ENETC_PM0_TFRM,   "MAC tx frames" },
+ 	{ ENETC_PM0_TFCS,   "MAC tx fcs errors" },
+ 	{ ENETC_PM0_TVLAN,  "MAC tx VLAN frames" },
+-	{ ENETC_PM0_TERR,   "MAC tx frames" },
++	{ ENETC_PM0_TERR,   "MAC tx frame errors" },
+ 	{ ENETC_PM0_TUCA,   "MAC tx unicast frames" },
+ 	{ ENETC_PM0_TMCA,   "MAC tx multicast frames" },
+ 	{ ENETC_PM0_TBCA,   "MAC tx broadcast frames" },
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index cf00709caea4b..3ac324509f43e 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -517,10 +517,13 @@ static void enetc_port_si_configure(struct enetc_si *si)
+ 
+ static void enetc_configure_port_mac(struct enetc_hw *hw)
+ {
++	int tc;
++
+ 	enetc_port_wr(hw, ENETC_PM0_MAXFRM,
+ 		      ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE));
+ 
+-	enetc_port_wr(hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
++	for (tc = 0; tc < 8; tc++)
++		enetc_port_wr(hw, ENETC_PTCMSDUR(tc), ENETC_MAC_MAXFRM_SIZE);
+ 
+ 	enetc_port_wr(hw, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN |
+ 		      ENETC_PM0_CMD_TXP	| ENETC_PM0_PROMISC);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index eef1b2764d34a..67b0bf310daaa 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -10,6 +10,27 @@ static LIST_HEAD(hnae3_ae_algo_list);
+ static LIST_HEAD(hnae3_client_list);
+ static LIST_HEAD(hnae3_ae_dev_list);
+ 
++void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo)
++{
++	const struct pci_device_id *pci_id;
++	struct hnae3_ae_dev *ae_dev;
++
++	if (!ae_algo)
++		return;
++
++	list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
++		if (!hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))
++			continue;
++
++		pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
++		if (!pci_id)
++			continue;
++		if (IS_ENABLED(CONFIG_PCI_IOV))
++			pci_disable_sriov(ae_dev->pdev);
++	}
++}
++EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
++
+/* we are keeping things simple and using a single lock for all the
+ * lists. This is non-critical code, so other updates, if they happen
+ * in parallel, can wait.
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 32987bd134a1d..dc5cce127d8ea 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -850,6 +850,7 @@ struct hnae3_handle {
+ int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev);
+ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev);
+ 
++void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo);
+ void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo);
+ void hnae3_register_ae_algo(struct hnae3_ae_algo *ae_algo);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 114692c4f7978..5aad7951308d9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1845,7 +1845,6 @@ void hns3_shinfo_pack(struct skb_shared_info *shinfo, __u32 *size)
+ 
+ static int hns3_skb_linearize(struct hns3_enet_ring *ring,
+ 			      struct sk_buff *skb,
+-			      u8 max_non_tso_bd_num,
+ 			      unsigned int bd_num)
+ {
+ 	/* 'bd_num == UINT_MAX' means the skb' fraglist has a
+@@ -1862,8 +1861,7 @@ static int hns3_skb_linearize(struct hns3_enet_ring *ring,
+ 	 * will not help.
+ 	 */
+ 	if (skb->len > HNS3_MAX_TSO_SIZE ||
+-	    (!skb_is_gso(skb) && skb->len >
+-	     HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
++	    (!skb_is_gso(skb) && skb->len > HNS3_MAX_NON_TSO_SIZE)) {
+ 		u64_stats_update_begin(&ring->syncp);
+ 		ring->stats.hw_limitation++;
+ 		u64_stats_update_end(&ring->syncp);
+@@ -1898,8 +1896,7 @@ static int hns3_nic_maybe_stop_tx(struct hns3_enet_ring *ring,
+ 			goto out;
+ 		}
+ 
+-		if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
+-				       bd_num))
++		if (hns3_skb_linearize(ring, skb, bd_num))
+ 			return -ENOMEM;
+ 
+ 		bd_num = hns3_tx_bd_count(skb->len);
+@@ -3266,6 +3263,7 @@ static void hns3_buffer_detach(struct hns3_enet_ring *ring, int i)
+ {
+ 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ 	ring->desc[i].addr = 0;
++	ring->desc_cb[i].refill = 0;
+ }
+ 
+ static void hns3_free_buffer_detach(struct hns3_enet_ring *ring, int i,
+@@ -3343,6 +3341,7 @@ static int hns3_alloc_and_attach_buffer(struct hns3_enet_ring *ring, int i)
+ 		return ret;
+ 
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++	ring->desc_cb[i].refill = 1;
+ 
+ 	return 0;
+ }
+@@ -3373,12 +3372,14 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
+ 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ 	ring->desc_cb[i] = *res_cb;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++	ring->desc_cb[i].refill = 1;
+ 	ring->desc[i].rx.bd_base_info = 0;
+ }
+ 
+ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+ {
+ 	ring->desc_cb[i].reuse_flag = 0;
++	ring->desc_cb[i].refill = 1;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
+ 					 ring->desc_cb[i].page_offset);
+ 	ring->desc[i].rx.bd_base_info = 0;
+@@ -3485,10 +3486,14 @@ static int hns3_desc_unused(struct hns3_enet_ring *ring)
+ 	int ntc = ring->next_to_clean;
+ 	int ntu = ring->next_to_use;
+ 
++	if (unlikely(ntc == ntu && !ring->desc_cb[ntc].refill))
++		return ring->desc_num;
++
+ 	return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
+ }
+ 
+-static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
++/* Return true if there is any allocation failure */
++static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 				      int cleand_count)
+ {
+ 	struct hns3_desc_cb *desc_cb;
+@@ -3513,7 +3518,10 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 				hns3_rl_err(ring_to_netdev(ring),
+ 					    "alloc rx buffer failed: %d\n",
+ 					    ret);
+-				break;
++
++				writel(i, ring->tqp->io_base +
++				       HNS3_RING_RX_RING_HEAD_REG);
++				return true;
+ 			}
+ 			hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);
+ 
+@@ -3526,6 +3534,7 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ 	}
+ 
+ 	writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
++	return false;
+ }
+ 
+ static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
+@@ -3824,6 +3833,7 @@ static void hns3_rx_ring_move_fw(struct hns3_enet_ring *ring)
+ {
+ 	ring->desc[ring->next_to_clean].rx.bd_base_info &=
+ 		cpu_to_le32(~BIT(HNS3_RXD_VLD_B));
++	ring->desc_cb[ring->next_to_clean].refill = 0;
+ 	ring->next_to_clean += 1;
+ 
+ 	if (unlikely(ring->next_to_clean == ring->desc_num))
+@@ -4159,6 +4169,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ {
+ #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
+ 	int unused_count = hns3_desc_unused(ring);
++	bool failure = false;
+ 	int recv_pkts = 0;
+ 	int err;
+ 
+@@ -4167,9 +4178,9 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ 	while (recv_pkts < budget) {
+ 		/* Reuse or realloc buffers */
+ 		if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
+-			hns3_nic_alloc_rx_buffers(ring, unused_count);
+-			unused_count = hns3_desc_unused(ring) -
+-					ring->pending_buf;
++			failure = failure ||
++				hns3_nic_alloc_rx_buffers(ring, unused_count);
++			unused_count = 0;
+ 		}
+ 
+ 		/* Poll one pkt */
+@@ -4188,11 +4199,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ 	}
+ 
+ out:
+-	/* Make all data has been write before submit */
+-	if (unused_count > 0)
+-		hns3_nic_alloc_rx_buffers(ring, unused_count);
+-
+-	return recv_pkts;
++	return failure ? budget : recv_pkts;
+ }
+ 
+ static void hns3_update_rx_int_coalesce(struct hns3_enet_tqp_vector *tqp_vector)
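
A note on the refill flag threaded through the hns3 hunks above: once buffer allocation can fail, next_to_clean == next_to_use no longer implies a full ring, so desc_unused() consults the per-descriptor refill mark to tell "completely empty" from "completely full". A minimal sketch of the disambiguation:

#include <stdio.h>

#define DESC_NUM 8
static int refill[DESC_NUM];

static int desc_unused(int ntc, int ntu)
{
	if (ntc == ntu && !refill[ntc])  /* nothing was ever refilled */
		return DESC_NUM;         /* the whole ring is unused  */
	return ((ntc >= ntu) ? 0 : DESC_NUM) + ntc - ntu;
}

int main(void)
{
	printf("%d\n", desc_unused(0, 0)); /* empty ring -> 8 */
	refill[0] = 1;
	printf("%d\n", desc_unused(0, 0)); /* full ring  -> 0 */
	return 0;
}
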
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 15af3d93857b6..d146f44bfacab 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -185,11 +185,9 @@ enum hns3_nic_state {
+ 
+ #define HNS3_MAX_BD_SIZE			65535
+ #define HNS3_MAX_TSO_BD_NUM			63U
+-#define HNS3_MAX_TSO_SIZE \
+-	(HNS3_MAX_BD_SIZE * HNS3_MAX_TSO_BD_NUM)
++#define HNS3_MAX_TSO_SIZE			1048576U
++#define HNS3_MAX_NON_TSO_SIZE			9728U
+ 
+-#define HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num) \
+-	(HNS3_MAX_BD_SIZE * (max_non_tso_bd_num))
+ 
+ #define HNS3_VECTOR_GL0_OFFSET			0x100
+ #define HNS3_VECTOR_GL1_OFFSET			0x200
+@@ -324,6 +322,7 @@ struct hns3_desc_cb {
+ 	u32 length;     /* length of the buffer */
+ 
+ 	u16 reuse_flag;
++	u16 refill;
+ 
+ 	/* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index c90bfde2aecff..c60d0626062cf 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -133,6 +133,15 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 			break;
+ 		case IEEE_8021QAZ_TSA_ETS:
++			/* The hardware will switch to sp mode if bandwidth is
++			 * 0, so the ETS bandwidth limit must be greater than 0.
++			 */
++			if (!ets->tc_tx_bw[i]) {
++				dev_err(&hdev->pdev->dev,
++					"tc%u ets bw cannot be 0\n", i);
++				return -EINVAL;
++			}
++
+ 			if (hdev->tm_info.tc_info[i].tc_sch_mode !=
+ 				HCLGE_SCH_MODE_DWRR)
+ 				*changed = true;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index 2eeafd61a07ee..c63b440fd6549 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -995,8 +995,11 @@ static int hclge_config_tm_hw_err_int(struct hclge_dev *hdev, bool en)
+ 
+ 	/* configure TM QCN hw errors */
+ 	hclge_cmd_setup_basic_desc(&desc, HCLGE_TM_QCN_MEM_INT_CFG, false);
+-	if (en)
++	desc.data[0] = cpu_to_le32(HCLGE_TM_QCN_ERR_INT_TYPE);
++	if (en) {
++		desc.data[0] |= cpu_to_le32(HCLGE_TM_QCN_FIFO_INT_EN);
+ 		desc.data[1] = cpu_to_le32(HCLGE_TM_QCN_MEM_ERR_INT_EN);
++	}
+ 
+ 	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+index 07987fb8332ef..d811eeefe2c05 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+@@ -50,6 +50,8 @@
+ #define HCLGE_PPP_MPF_ECC_ERR_INT3_EN	0x003F
+ #define HCLGE_PPP_MPF_ECC_ERR_INT3_EN_MASK	0x003F
+ #define HCLGE_TM_SCH_ECC_ERR_INT_EN	0x3
++#define HCLGE_TM_QCN_ERR_INT_TYPE	0x29
++#define HCLGE_TM_QCN_FIFO_INT_EN	0xFFFF00
+ #define HCLGE_TM_QCN_MEM_ERR_INT_EN	0xFFFFFF
+ #define HCLGE_NCSI_ERR_INT_EN	0x3
+ #define HCLGE_NCSI_ERR_INT_TYPE	0x9
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 9920e76b4f41c..be46b164b0e2c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -13023,6 +13023,7 @@ static int hclge_init(void)
+ 
+ static void hclge_exit(void)
+ {
++	hnae3_unregister_ae_algo_prepare(&ae_algo);
+ 	hnae3_unregister_ae_algo(&ae_algo);
+ 	destroy_workqueue(hclge_wq);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index f314dbd3ce11f..95074e91a8466 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -752,6 +752,8 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
+ 		hdev->tm_info.pg_info[i].tc_bit_map = hdev->hw_tc_map;
+ 		for (k = 0; k < hdev->tm_info.num_tc; k++)
+ 			hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT;
++		for (; k < HNAE3_MAX_TC; k++)
++			hdev->tm_info.pg_info[i].tc_dwrr[k] = 0;
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 22cf66004dfa2..b8414f684e89d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2271,9 +2271,9 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
+ 		hdev->reset_attempts = 0;
+ 
+ 		hdev->last_reset_time = jiffies;
+-		while ((hdev->reset_type =
+-			hclgevf_get_reset_level(hdev, &hdev->reset_pending))
+-		       != HNAE3_NONE_RESET)
++		hdev->reset_type =
++			hclgevf_get_reset_level(hdev, &hdev->reset_pending);
++		if (hdev->reset_type != HNAE3_NONE_RESET)
+ 			hclgevf_reset(hdev);
+ 	} else if (test_and_clear_bit(HCLGEVF_RESET_REQUESTED,
+ 				      &hdev->reset_state)) {
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index 5b2143f4b1f85..3178efd980066 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -113,7 +113,8 @@ enum e1000_boards {
+ 	board_pch2lan,
+ 	board_pch_lpt,
+ 	board_pch_spt,
+-	board_pch_cnp
++	board_pch_cnp,
++	board_pch_tgp
+ };
+ 
+ struct e1000_ps_page {
+@@ -499,6 +500,7 @@ extern const struct e1000_info e1000_pch2_info;
+ extern const struct e1000_info e1000_pch_lpt_info;
+ extern const struct e1000_info e1000_pch_spt_info;
+ extern const struct e1000_info e1000_pch_cnp_info;
++extern const struct e1000_info e1000_pch_tgp_info;
+ extern const struct e1000_info e1000_es2_info;
+ 
+ void e1000e_ptp_init(struct e1000_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index a80336c4319bb..f8b3e758a8d2e 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4804,7 +4804,7 @@ static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
+ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ {
+ 	struct e1000_mac_info *mac = &hw->mac;
+-	u32 ctrl_ext, txdctl, snoop;
++	u32 ctrl_ext, txdctl, snoop, fflt_dbg;
+ 	s32 ret_val;
+ 	u16 i;
+ 
+@@ -4863,6 +4863,15 @@ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ 		snoop = (u32)~(PCIE_NO_SNOOP_ALL);
+ 	e1000e_set_pcie_no_snoop(hw, snoop);
+ 
++	/* Enable workaround for packet loss issue on TGP PCH
++	 * Do not gate DMA clock from the modPHY block
++	 */
++	if (mac->type >= e1000_pch_tgp) {
++		fflt_dbg = er32(FFLT_DBG);
++		fflt_dbg |= E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK;
++		ew32(FFLT_DBG, fflt_dbg);
++	}
++
+ 	ctrl_ext = er32(CTRL_EXT);
+ 	ctrl_ext |= E1000_CTRL_EXT_RO_DIS;
+ 	ew32(CTRL_EXT, ctrl_ext);
+@@ -5983,3 +5992,23 @@ const struct e1000_info e1000_pch_cnp_info = {
+ 	.phy_ops		= &ich8_phy_ops,
+ 	.nvm_ops		= &spt_nvm_ops,
+ };
++
++const struct e1000_info e1000_pch_tgp_info = {
++	.mac			= e1000_pch_tgp,
++	.flags			= FLAG_IS_ICH
++				  | FLAG_HAS_WOL
++				  | FLAG_HAS_HW_TIMESTAMP
++				  | FLAG_HAS_CTRLEXT_ON_LOAD
++				  | FLAG_HAS_AMT
++				  | FLAG_HAS_FLASH
++				  | FLAG_HAS_JUMBO_FRAMES
++				  | FLAG_APME_IN_WUC,
++	.flags2			= FLAG2_HAS_PHY_STATS
++				  | FLAG2_HAS_EEE,
++	.pba			= 26,
++	.max_hw_frame_size	= 9022,
++	.get_variants		= e1000_get_variants_ich8lan,
++	.mac_ops		= &ich8_mac_ops,
++	.phy_ops		= &ich8_phy_ops,
++	.nvm_ops		= &spt_nvm_ops,
++};
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index e757896287eba..8f2a8f4ce0ee4 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -286,6 +286,9 @@
+ /* Proprietary Latency Tolerance Reporting PCI Capability */
+ #define E1000_PCI_LTR_CAP_LPT		0xA8
+ 
++/* Don't gate wake DMA clock */
++#define E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK	0x1000
++
+ void e1000e_write_protect_nvm_ich8lan(struct e1000_hw *hw);
+ void e1000e_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw,
+ 						  bool state);
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 757a54c39eefd..774f849027f09 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -51,6 +51,7 @@ static const struct e1000_info *e1000_info_tbl[] = {
+ 	[board_pch_lpt]		= &e1000_pch_lpt_info,
+ 	[board_pch_spt]		= &e1000_pch_spt_info,
+ 	[board_pch_cnp]		= &e1000_pch_cnp_info,
++	[board_pch_tgp]		= &e1000_pch_tgp_info,
+ };
+ 
+ struct e1000_reg_info {
+@@ -7844,20 +7845,20 @@ static const struct pci_device_id e1000_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V11), board_pch_cnp },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_LM12), board_pch_spt },
+ 	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V12), board_pch_spt },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_cnp },
+-	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_cnp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp },
++	{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp },
+ 
+ 	{ 0, 0, 0, 0, 0, 0, 0 }	/* terminate list */
+ };
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 2fb81e359cdfd..df5ad4de1f00e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -25,6 +25,8 @@ static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+ 	case ICE_DEV_ID_E810C_BACKPLANE:
+ 	case ICE_DEV_ID_E810C_QSFP:
+ 	case ICE_DEV_ID_E810C_SFP:
++	case ICE_DEV_ID_E810_XXV_BACKPLANE:
++	case ICE_DEV_ID_E810_XXV_QSFP:
+ 	case ICE_DEV_ID_E810_XXV_SFP:
+ 		hw->mac_type = ICE_MAC_E810;
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h b/drivers/net/ethernet/intel/ice/ice_devids.h
+index 9d8194671f6a6..ef4392e6e2444 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devids.h
++++ b/drivers/net/ethernet/intel/ice/ice_devids.h
+@@ -21,6 +21,10 @@
+ #define ICE_DEV_ID_E810C_QSFP		0x1592
+ /* Intel(R) Ethernet Controller E810-C for SFP */
+ #define ICE_DEV_ID_E810C_SFP		0x1593
++/* Intel(R) Ethernet Controller E810-XXV for backplane */
++#define ICE_DEV_ID_E810_XXV_BACKPLANE	0x1599
++/* Intel(R) Ethernet Controller E810-XXV for QSFP */
++#define ICE_DEV_ID_E810_XXV_QSFP	0x159A
+ /* Intel(R) Ethernet Controller E810-XXV for SFP */
+ #define ICE_DEV_ID_E810_XXV_SFP		0x159B
+ /* Intel(R) Ethernet Connection E823-C for backplane */
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
+index 7fe6e8ea39f0d..64bea7659cf7e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
+@@ -63,7 +63,8 @@ static int ice_info_fw_api(struct ice_pf *pf, struct ice_info_ctx *ctx)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+ 
+-	snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u", hw->api_maj_ver, hw->api_min_ver);
++	snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u.%u", hw->api_maj_ver,
++		 hw->api_min_ver, hw->api_patch);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index 06ac9badee774..1ac96dc66d0db 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -1668,7 +1668,7 @@ static u16 ice_tunnel_idx_to_entry(struct ice_hw *hw, enum ice_tunnel_type type,
+ 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+ 		if (hw->tnl.tbl[i].valid &&
+ 		    hw->tnl.tbl[i].type == type &&
+-		    idx--)
++		    idx-- == 0)
+ 			return i;
+ 
+ 	WARN_ON_ONCE(1);
+@@ -1828,7 +1828,7 @@ int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
+ 	u16 index;
+ 
+ 	tnl_type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? TNL_VXLAN : TNL_GENEVE;
+-	index = ice_tunnel_idx_to_entry(&pf->hw, idx, tnl_type);
++	index = ice_tunnel_idx_to_entry(&pf->hw, tnl_type, idx);
+ 
+ 	status = ice_create_tunnel(&pf->hw, index, tnl_type, ntohs(ti->port));
+ 	if (status) {
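
The two ice hunks above fix an inverted skip-count test (plus swapped
arguments at the call site): "idx--" is true while idx is still non-zero, so
the old loop returned the first match instead of the idx-th one, whereas
"idx-- == 0" only fires after idx earlier matches have been skipped. A toy
version with a hypothetical predicate makes the difference visible:

#include <stdio.h>

static int nth_even_buggy(const int *a, int n, int idx)
{
	for (int i = 0; i < n; i++)
		if (a[i] % 2 == 0 && idx--)	/* true until idx reaches 0 */
			return i;
	return -1;
}

static int nth_even_fixed(const int *a, int n, int idx)
{
	for (int i = 0; i < n; i++)
		if (a[i] % 2 == 0 && idx-- == 0)	/* skip idx matches */
			return i;
	return -1;
}

int main(void)
{
	int a[] = { 1, 2, 4, 7, 6 };

	/* asking for match #1 (zero-based): buggy returns index 1
	 * (the first even entry), fixed returns index 2 (the second) */
	printf("buggy=%d fixed=%d\n",
	       nth_even_buggy(a, 5, 1), nth_even_fixed(a, 5, 1));
	return 0;
}
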
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index dde9802c6c729..b718e196af2a4 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2841,6 +2841,7 @@ void ice_napi_del(struct ice_vsi *vsi)
+  */
+ int ice_vsi_release(struct ice_vsi *vsi)
+ {
++	enum ice_status err;
+ 	struct ice_pf *pf;
+ 
+ 	if (!vsi->back)
+@@ -2912,6 +2913,10 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 
+ 	ice_fltr_remove_all(vsi);
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
++	err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
++	if (err)
++		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
++			vsi->vsi_num, err);
+ 	ice_vsi_delete(vsi);
+ 	ice_vsi_free_q_vectors(vsi);
+ 
+@@ -3092,6 +3097,10 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+ 	prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
+ 
+ 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
++	ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
++	if (ret)
++		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
++			vsi->vsi_num, ret);
+ 	ice_vsi_free_q_vectors(vsi);
+ 
+ 	/* SR-IOV determines needed MSIX resources all at once instead of per
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index a8bd512d5b450..ed087fde20668 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4224,6 +4224,9 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ 	if (!pf)
+ 		return -ENOMEM;
+ 
++	/* initialize Auxiliary index to invalid value */
++	pf->aux_idx = -1;
++
+ 	/* set up for high or low DMA */
+ 	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ 	if (err)
+@@ -4615,7 +4618,8 @@ static void ice_remove(struct pci_dev *pdev)
+ 
+ 	ice_aq_cancel_waiting_tasks(pf);
+ 	ice_unplug_aux_dev(pf);
+-	ida_free(&ice_aux_ida, pf->aux_idx);
++	if (pf->aux_idx >= 0)
++		ida_free(&ice_aux_ida, pf->aux_idx);
+ 	set_bit(ICE_DOWN, pf->state);
+ 
+ 	mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
+@@ -5016,6 +5020,8 @@ static const struct pci_device_id ice_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 },
++	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 },
++	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 },
+ 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index 9f07b66417059..2d9b10277186b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -2070,6 +2070,19 @@ enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+ 	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+ }
+ 
++/**
++ * ice_rm_vsi_rdma_cfg - remove VSI and its RDMA children nodes
++ * @pi: port information structure
++ * @vsi_handle: software VSI handle
++ *
++ * This function clears the VSI and its RDMA children nodes from the scheduler tree
++ * for all TCs.
++ */
++enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle)
++{
++	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_RDMA);
++}
++
+ /**
+  * ice_get_agg_info - get the aggregator ID
+  * @hw: pointer to the hardware structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
+index 9beef8f0ec760..fdf7a5882f076 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.h
++++ b/drivers/net/ethernet/intel/ice/ice_sched.h
+@@ -89,6 +89,7 @@ enum ice_status
+ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+ 		  u8 owner, bool enable);
+ enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
++enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle);
+ 
+ /* Tx scheduler rate limiter functions */
+ enum ice_status
+diff --git a/drivers/net/ethernet/intel/igc/igc_hw.h b/drivers/net/ethernet/intel/igc/igc_hw.h
+index 4461f8b9a864b..4e0203336c6bf 100644
+--- a/drivers/net/ethernet/intel/igc/igc_hw.h
++++ b/drivers/net/ethernet/intel/igc/igc_hw.h
+@@ -22,8 +22,8 @@
+ #define IGC_DEV_ID_I220_V			0x15F7
+ #define IGC_DEV_ID_I225_K			0x3100
+ #define IGC_DEV_ID_I225_K2			0x3101
++#define IGC_DEV_ID_I226_K			0x3102
+ #define IGC_DEV_ID_I225_LMVP			0x5502
+-#define IGC_DEV_ID_I226_K			0x5504
+ #define IGC_DEV_ID_I225_IT			0x0D9F
+ #define IGC_DEV_ID_I226_LM			0x125B
+ #define IGC_DEV_ID_I226_V			0x125C
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 1e2d117082d47..603d9884b6bd1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -10,6 +10,8 @@
+ #include "en_tc.h"
+ #include "rep/tc.h"
+ #include "rep/neigh.h"
++#include "lag.h"
++#include "lag_mp.h"
+ 
+ struct mlx5e_tc_tun_route_attr {
+ 	struct net_device *out_dev;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+index 33de8f0092a66..fb5397324aa4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+@@ -141,8 +141,7 @@ static void mlx5e_ipsec_set_swp(struct sk_buff *skb,
+ 	 * Pkt: MAC  IP     ESP  IP    L4
+ 	 *
+ 	 * Transport Mode:
+-	 * SWP:      OutL3       InL4
+-	 *           InL3
++	 * SWP:      OutL3       OutL4
+ 	 * Pkt: MAC  IP     ESP  L4
+ 	 *
+ 	 * Tunnel(VXLAN TCP/UDP) over Transport Mode
+@@ -171,31 +170,35 @@ static void mlx5e_ipsec_set_swp(struct sk_buff *skb,
+ 		return;
+ 
+ 	if (!xo->inner_ipproto) {
+-		eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2;
+-		eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
+-		if (skb->protocol == htons(ETH_P_IPV6))
+-			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+-		if (xo->proto == IPPROTO_UDP)
++		switch (xo->proto) {
++		case IPPROTO_UDP:
++			eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L4_UDP;
++			fallthrough;
++		case IPPROTO_TCP:
++			/* IP | ESP | TCP */
++			eseg->swp_outer_l4_offset = skb_inner_transport_offset(skb) / 2;
++			break;
++		default:
++			break;
++		}
++	} else {
++		/* Tunnel(VXLAN TCP/UDP) over Transport Mode */
++		switch (xo->inner_ipproto) {
++		case IPPROTO_UDP:
+ 			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
+-		return;
+-	}
+-
+-	/* Tunnel(VXLAN TCP/UDP) over Transport Mode */
+-	switch (xo->inner_ipproto) {
+-	case IPPROTO_UDP:
+-		eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
+-		fallthrough;
+-	case IPPROTO_TCP:
+-		eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
+-		eseg->swp_inner_l4_offset = (skb->csum_start + skb->head - skb->data) / 2;
+-		if (skb->protocol == htons(ETH_P_IPV6))
+-			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+-		break;
+-	default:
+-		break;
++			fallthrough;
++		case IPPROTO_TCP:
++			eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
++			eseg->swp_inner_l4_offset =
++				(skb->csum_start + skb->head - skb->data) / 2;
++			if (skb->protocol == htons(ETH_P_IPV6))
++				eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
++			break;
++		default:
++			break;
++		}
+ 	}
+ 
+-	return;
+ }
+ 
+ void mlx5e_ipsec_set_iv_esn(struct sk_buff *skb, struct xfrm_state *x,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 6eba574c5a364..c757209b47ee0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -72,6 +72,8 @@
+ #include "lib/fs_chains.h"
+ #include "diag/en_tc_tracepoint.h"
+ #include <asm/div64.h>
++#include "lag.h"
++#include "lag_mp.h"
+ 
+ #define nic_chains(priv) ((priv)->fs.tc.chains)
+ #define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index c63d78eda6060..188994d091c54 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -213,19 +213,18 @@ static inline void mlx5e_insert_vlan(void *start, struct sk_buff *skb, u16 ihs)
+ 	memcpy(&vhdr->h_vlan_encapsulated_proto, skb->data + cpy1_sz, cpy2_sz);
+ }
+ 
+-/* If packet is not IP's CHECKSUM_PARTIAL (e.g. icmd packet),
+- * need to set L3 checksum flag for IPsec
+- */
+ static void
+ ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 			    struct mlx5_wqe_eth_seg *eseg)
+ {
++	struct xfrm_offload *xo = xfrm_offload(skb);
++
+ 	eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
+-	if (skb->encapsulation) {
+-		eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM;
++	if (xo->inner_ipproto) {
++		eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM | MLX5_ETH_WQE_L3_INNER_CSUM;
++	} else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
++		eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
+ 		sq->stats->csum_partial_inner++;
+-	} else {
+-		sq->stats->csum_partial++;
+ 	}
+ }
+ 
+@@ -234,6 +233,11 @@ mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 			    struct mlx5e_accel_tx_state *accel,
+ 			    struct mlx5_wqe_eth_seg *eseg)
+ {
++	if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) {
++		ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
++		return;
++	}
++
+ 	if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ 		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
+ 		if (skb->encapsulation) {
+@@ -249,8 +253,6 @@ mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
+ 		sq->stats->csum_partial++;
+ #endif
+-	} else if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) {
+-		ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
+ 	} else
+ 		sq->stats->csum_none++;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 40ef60f562b42..be6e7e10b2529 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -372,12 +372,17 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ 	bool do_bond, roce_lag;
+ 	int err;
+ 
+-	if (!mlx5_lag_is_ready(ldev))
+-		return;
++	if (!mlx5_lag_is_ready(ldev)) {
++		do_bond = false;
++	} else {
++		/* VF LAG is in multipath mode, ignore bond change requests */
++		if (mlx5_lag_is_multipath(dev0))
++			return;
+ 
+-	tracker = ldev->tracker;
++		tracker = ldev->tracker;
+ 
+-	do_bond = tracker.is_bonded && mlx5_lag_check_prereq(ldev);
++		do_bond = tracker.is_bonded && mlx5_lag_check_prereq(ldev);
++	}
+ 
+ 	if (do_bond && !__mlx5_lag_is_active(ldev)) {
+ 		roce_lag = !mlx5_sriov_is_enabled(dev0) &&
+@@ -691,11 +696,11 @@ void mlx5_lag_remove_netdev(struct mlx5_core_dev *dev,
+ 	if (!ldev)
+ 		return;
+ 
+-	if (__mlx5_lag_is_active(ldev))
+-		mlx5_disable_lag(ldev);
+-
+ 	mlx5_ldev_remove_netdev(ldev, netdev);
+ 	ldev->flags &= ~MLX5_LAG_FLAG_READY;
++
++	if (__mlx5_lag_is_active(ldev))
++		mlx5_queue_bond_work(ldev, 0);
+ }
+ 
+ /* Must be called with intf_mutex held */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+index 516bfc2bd797b..577e5d02bfdd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+@@ -9,20 +9,23 @@
+ #include "eswitch.h"
+ #include "lib/mlx5.h"
+ 
++static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev)
++{
++	return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH);
++}
++
+ static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev)
+ {
+ 	if (!mlx5_lag_is_ready(ldev))
+ 		return false;
+ 
++	if (__mlx5_lag_is_active(ldev) && !__mlx5_lag_is_multipath(ldev))
++		return false;
++
+ 	return mlx5_esw_multipath_prereq(ldev->pf[MLX5_LAG_P1].dev,
+ 					 ldev->pf[MLX5_LAG_P2].dev);
+ }
+ 
+-static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev)
+-{
+-	return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH);
+-}
+-
+ bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_lag *ldev;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
+index 729c839397a89..dea199e79beda 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.h
+@@ -24,12 +24,14 @@ struct lag_mp {
+ void mlx5_lag_mp_reset(struct mlx5_lag *ldev);
+ int mlx5_lag_mp_init(struct mlx5_lag *ldev);
+ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev);
++bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev);
+ 
+ #else /* CONFIG_MLX5_ESWITCH */
+ 
+ static inline void mlx5_lag_mp_reset(struct mlx5_lag *ldev) {};
+ static inline int mlx5_lag_mp_init(struct mlx5_lag *ldev) { return 0; }
+ static inline void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev) {}
++static inline bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev) { return false; }
+ 
+ #endif /* CONFIG_MLX5_ESWITCH */
+ #endif /* __MLX5_LAG_MP_H__ */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+index fbfda55b4c526..5e731a72cce81 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+@@ -71,6 +71,7 @@ err_remove_config_dt:
+ 
+ static const struct of_device_id dwmac_generic_match[] = {
+ 	{ .compatible = "st,spear600-gmac"},
++	{ .compatible = "snps,dwmac-3.40a"},
+ 	{ .compatible = "snps,dwmac-3.50a"},
+ 	{ .compatible = "snps,dwmac-3.610"},
+ 	{ .compatible = "snps,dwmac-3.70a"},
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 6b2a5e5769e89..14e577787415e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -736,7 +736,7 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ 			config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ 			ptp_v2 = PTP_TCR_TSVER2ENA;
+ 			snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+-			if (priv->synopsys_id != DWMAC_CORE_5_10)
++			if (priv->synopsys_id < DWMAC_CORE_4_10)
+ 				ts_event_en = PTP_TCR_TSEVNTENA;
+ 			ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
+ 			ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 62cec9bfcd337..232ac98943cd0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -508,6 +508,14 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 		plat->pmt = 1;
+ 	}
+ 
++	if (of_device_is_compatible(np, "snps,dwmac-3.40a")) {
++		plat->has_gmac = 1;
++		plat->enh_desc = 1;
++		plat->tx_coe = 1;
++		plat->bugged_jumbo = 1;
++		plat->pmt = 1;
++	}
++
+ 	if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
+ 	    of_device_is_compatible(np, "snps,dwmac-4.10a") ||
+ 	    of_device_is_compatible(np, "snps,dwmac-4.20a") ||
+diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
+index 4435a1195194d..8ea8d50f81c4b 100644
+--- a/drivers/net/hamradio/baycom_epp.c
++++ b/drivers/net/hamradio/baycom_epp.c
+@@ -623,16 +623,16 @@ static int receive(struct net_device *dev, int cnt)
+ 
+ /* --------------------------------------------------------------------- */
+ 
+-#ifdef __i386__
++#if defined(__i386__) && !defined(CONFIG_UML)
+ #include <asm/msr.h>
+ #define GETTICK(x)						\
+ ({								\
+ 	if (boot_cpu_has(X86_FEATURE_TSC))			\
+ 		x = (unsigned int)rdtsc();			\
+ })
+-#else /* __i386__ */
++#else /* __i386__  && !CONFIG_UML */
+ #define GETTICK(x)
+-#endif /* __i386__ */
++#endif /* __i386__  && !CONFIG_UML */
+ 
+ static void epp_bh(struct work_struct *work)
+ {
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 6865d9319197f..3fa9c15ec81e2 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -548,6 +548,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
++		put_device(&bus->dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
+index f87f175033731..b554054a7560a 100644
+--- a/drivers/net/usb/Kconfig
++++ b/drivers/net/usb/Kconfig
+@@ -117,6 +117,7 @@ config USB_LAN78XX
+ 	select PHYLIB
+ 	select MICROCHIP_PHY
+ 	select FIXED_PHY
++	select CRC32
+ 	help
+ 	  This option adds support for Microchip LAN78XX based USB 2
+ 	  & USB 3 10/100/1000 Ethernet adapters.
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 79832374f78db..92fca5e9ed030 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -767,6 +767,7 @@ enum rtl8152_flags {
+ 	PHY_RESET,
+ 	SCHEDULE_TASKLET,
+ 	GREEN_ETHERNET,
++	RX_EPROTO,
+ };
+ 
+ #define DEVICE_ID_THINKPAD_THUNDERBOLT3_DOCK_GEN2	0x3082
+@@ -1770,6 +1771,14 @@ static void read_bulk_callback(struct urb *urb)
+ 		rtl_set_unplug(tp);
+ 		netif_device_detach(tp->netdev);
+ 		return;
++	case -EPROTO:
++		urb->actual_length = 0;
++		spin_lock_irqsave(&tp->rx_lock, flags);
++		list_add_tail(&agg->list, &tp->rx_done);
++		spin_unlock_irqrestore(&tp->rx_lock, flags);
++		set_bit(RX_EPROTO, &tp->flags);
++		schedule_delayed_work(&tp->schedule, 1);
++		return;
+ 	case -ENOENT:
+ 		return;	/* the urb is in unlink state */
+ 	case -ETIME:
+@@ -2425,6 +2434,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
+ 	if (list_empty(&tp->rx_done))
+ 		goto out1;
+ 
++	clear_bit(RX_EPROTO, &tp->flags);
+ 	INIT_LIST_HEAD(&rx_queue);
+ 	spin_lock_irqsave(&tp->rx_lock, flags);
+ 	list_splice_init(&tp->rx_done, &rx_queue);
+@@ -2441,7 +2451,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
+ 
+ 		agg = list_entry(cursor, struct rx_agg, list);
+ 		urb = agg->urb;
+-		if (urb->actual_length < ETH_ZLEN)
++		if (urb->status != 0 || urb->actual_length < ETH_ZLEN)
+ 			goto submit;
+ 
+ 		agg_free = rtl_get_free_rx(tp, GFP_ATOMIC);
+@@ -6643,6 +6653,10 @@ static void rtl_work_func_t(struct work_struct *work)
+ 	    netif_carrier_ok(tp->netdev))
+ 		tasklet_schedule(&tp->tx_tl);
+ 
++	if (test_and_clear_bit(RX_EPROTO, &tp->flags) &&
++	    !list_empty(&tp->rx_done))
++		napi_schedule(&tp->napi);
++
+ 	mutex_unlock(&tp->control);
+ 
+ out1:
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index 014868752cd4d..dcefdb42ac463 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -62,14 +62,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
+ 	struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
+ 					     hotplug_slot);
+ 
+-	switch (zdev->state) {
+-	case ZPCI_FN_STATE_STANDBY:
+-		*value = 0;
+-		break;
+-	default:
+-		*value = 1;
+-		break;
+-	}
++	*value = zpci_is_device_configured(zdev) ? 1 : 0;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 68b3886f9f0f3..dfd8888a222a4 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1644,8 +1644,8 @@ int __maybe_unused stm32_pinctrl_resume(struct device *dev)
+ 	struct stm32_pinctrl_group *g = pctl->groups;
+ 	int i;
+ 
+-	for (i = g->pin; i < g->pin + pctl->ngroups; i++)
+-		stm32_pinctrl_restore_gpio_regs(pctl, i);
++	for (i = 0; i < pctl->ngroups; i++, g++)
++		stm32_pinctrl_restore_gpio_regs(pctl, g->pin);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index 25b98b12439f1..121037d0a9330 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -75,7 +75,7 @@ struct intel_scu_ipc_dev {
+ #define IPC_READ_BUFFER		0x90
+ 
+ /* Timeout in jiffies */
+-#define IPC_TIMEOUT		(5 * HZ)
++#define IPC_TIMEOUT		(10 * HZ)
+ 
+ static struct intel_scu_ipc_dev *ipcdev; /* Only one for now */
+ static DEFINE_MUTEX(ipclock); /* lock used to prevent multiple call to SCU */
+@@ -247,7 +247,7 @@ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ 	return -ETIMEDOUT;
+ }
+ 
+-/* Wait till ipc ioc interrupt is received or timeout in 3 HZ */
++/* Wait till ipc ioc interrupt is received or timeout after 10 seconds */
+ static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
+ {
+ 	int status;
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 4dfc52e067041..7fd02aabd79a4 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -283,15 +283,22 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
+ 	/* Create a posix clock and link it to the device. */
+ 	err = posix_clock_register(&ptp->clock, &ptp->dev);
+ 	if (err) {
++		if (ptp->pps_source)
++			pps_unregister_source(ptp->pps_source);
++
++		kfree(ptp->vclock_index);
++
++		if (ptp->kworker)
++			kthread_destroy_worker(ptp->kworker);
++
++		put_device(&ptp->dev);
++
+ 		pr_err("failed to create posix clock\n");
+-		goto no_clock;
++		return ERR_PTR(err);
+ 	}
+ 
+ 	return ptp;
+ 
+-no_clock:
+-	if (ptp->pps_source)
+-		pps_unregister_source(ptp->pps_source);
+ no_pps:
+ 	ptp_cleanup_pin_groups(ptp);
+ no_pin_groups:
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 3f6f14f0cafb3..24b72ee4246fb 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -220,7 +220,8 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 		goto fail;
+ 	}
+ 
+-	shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
++	/* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
++	shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
+ 				   shost->can_queue);
+ 
+ 	error = scsi_init_sense_cache(shost);
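
The comment added above is worth unpacking: min_t() casts both operands to
the given type before comparing, so with a short cast a can_queue above
SHRT_MAX wraps to a negative value and "wins" the comparison. A simplified
userspace sketch (the kernel macro also avoids double evaluation, and the
wrap assumes the usual two's-complement targets):

#include <stdio.h>
#include <limits.h>

#define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))

int main(void)
{
	int can_queue = SHRT_MAX + 8;	/* 32775 */
	short cmd_per_lun = 32;

	/* (short)32775 wraps to -32761, which compares below 32 */
	printf("short: %d\n", (int)min_t(short, cmd_per_lun, can_queue));
	printf("int:   %d\n", (int)min_t(int, cmd_per_lun, can_queue));
	return 0;
}
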
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 24ac7ddec7494..206c2598ade30 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -3755,7 +3755,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	shost->max_lun = -1;
+ 	shost->unique_id = mrioc->id;
+ 
+-	shost->max_channel = 1;
++	shost->max_channel = 0;
+ 	shost->max_id = 0xFFFFFFFF;
+ 
+ 	if (prot_mask >= 0)
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index d42b2ad840498..2304f54fdc935 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -415,7 +415,7 @@ done_unmap_sg:
+ 	goto done_free_fcport;
+ 
+ done_free_fcport:
+-	if (bsg_request->msgcode == FC_BSG_RPT_ELS)
++	if (bsg_request->msgcode != FC_BSG_RPT_ELS)
+ 		qla2x00_free_fcport(fcport);
+ done:
+ 	return rval;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 922e4c7bd88e4..78343d3f93857 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2930,8 +2930,6 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 			session->recovery_tmo = value;
+ 		break;
+ 	default:
+-		err = transport->set_param(conn, ev->u.set_param.param,
+-					   data, ev->u.set_param.len);
+ 		if ((conn->state == ISCSI_CONN_BOUND) ||
+ 			(conn->state == ISCSI_CONN_UP)) {
+ 			err = transport->set_param(conn, ev->u.set_param.param,
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 37506b3fe5a92..5fa1120a87f7e 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1285,11 +1285,15 @@ static void storvsc_on_channel_callback(void *context)
+ 	foreach_vmbus_pkt(desc, channel) {
+ 		struct vstor_packet *packet = hv_pkt_data(desc);
+ 		struct storvsc_cmd_request *request = NULL;
++		u32 pktlen = hv_pkt_datalen(desc);
+ 		u64 rqst_id = desc->trans_id;
++		u32 minlen = rqst_id ? sizeof(struct vstor_packet) -
++			stor_device->vmscsi_size_delta : sizeof(enum vstor_packet_operation);
+ 
+-		if (hv_pkt_datalen(desc) < sizeof(struct vstor_packet) -
+-				stor_device->vmscsi_size_delta) {
+-			dev_err(&device->device, "Invalid packet len\n");
++		if (pktlen < minlen) {
++			dev_err(&device->device,
++				"Invalid pkt: id=%llu, len=%u, minlen=%u\n",
++				rqst_id, pktlen, minlen);
+ 			continue;
+ 		}
+ 
+@@ -1302,13 +1306,23 @@ static void storvsc_on_channel_callback(void *context)
+ 			if (rqst_id == 0) {
+ 				/*
+ 				 * storvsc_on_receive() looks at the vstor_packet in the message
+-				 * from the ring buffer.  If the operation in the vstor_packet is
+-				 * COMPLETE_IO, then we call storvsc_on_io_completion(), and
+-				 * dereference the guest memory address.  Make sure we don't call
+-				 * storvsc_on_io_completion() with a guest memory address that is
+-				 * zero if Hyper-V were to construct and send such a bogus packet.
++				 * from the ring buffer.
++				 *
++				 * - If the operation in the vstor_packet is COMPLETE_IO, then
++				 *   we call storvsc_on_io_completion(), and dereference the
++				 *   guest memory address.  Make sure we don't call
++				 *   storvsc_on_io_completion() with a guest memory address
++				 *   that is zero if Hyper-V were to construct and send such
++				 *   a bogus packet.
++				 *
++				 * - If the operation in the vstor_packet is FCHBA_DATA, then
++				 *   we call cache_wwn(), and access the data payload area of
++				 *   the packet (wwn_packet); however, there is no guarantee
++				 *   that the packet is big enough to contain such an area.
++				 *   Future-proof the code by rejecting such a bogus packet.
+ 				 */
+-				if (packet->operation == VSTOR_OPERATION_COMPLETE_IO) {
++				if (packet->operation == VSTOR_OPERATION_COMPLETE_IO ||
++				    packet->operation == VSTOR_OPERATION_FCHBA_DATA) {
+ 					dev_err(&device->device, "Invalid packet with ID of 0\n");
+ 					continue;
+ 				}
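
A userspace sketch of the length check the storvsc hunks introduce: an
unsolicited packet (request id 0) only has to be large enough for the
operation field, while a solicited completion must carry a full vstor_packet.
The struct below is an illustrative stand-in, not the real Hyper-V layout:

#include <stdio.h>
#include <stdint.h>

struct vstor_packet_demo {
	uint32_t operation;
	uint32_t flags;
	uint8_t payload[48];
};

static int packet_len_ok(uint64_t rqst_id, uint32_t pktlen)
{
	uint32_t minlen = rqst_id ? sizeof(struct vstor_packet_demo)
				  : sizeof(uint32_t);	/* operation only */

	if (pktlen < minlen) {
		fprintf(stderr, "Invalid pkt: id=%llu, len=%u, minlen=%u\n",
			(unsigned long long)rqst_id, pktlen, minlen);
		return 0;
	}
	return 1;
}

int main(void)
{
	packet_len_ok(0, 4);	/* unsolicited, minimal: accepted */
	packet_len_ok(7, 16);	/* solicited but truncated: rejected */
	return 0;
}
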
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index 9708b7827ff70..f5d32ec4634e3 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -137,6 +137,13 @@ static int spi_mux_probe(struct spi_device *spi)
+ 	priv = spi_controller_get_devdata(ctlr);
+ 	priv->spi = spi;
+ 
++	/*
++	 * Increase the lockdep class, as these locks are taken while the
++	 * parent bus already holds its instance's lock.
++	 */
++	lockdep_set_subclass(&ctlr->io_mutex, 1);
++	lockdep_set_subclass(&ctlr->add_lock, 1);
++
+ 	priv->mux = devm_mux_control_get(&spi->dev, NULL);
+ 	if (IS_ERR(priv->mux)) {
+ 		ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index f95f7666cb5b7..3093e0041158c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -480,12 +480,6 @@ static LIST_HEAD(spi_controller_list);
+  */
+ static DEFINE_MUTEX(board_lock);
+ 
+-/*
+- * Prevents addition of devices with same chip select and
+- * addition of devices below an unregistering controller.
+- */
+-static DEFINE_MUTEX(spi_add_lock);
+-
+ /**
+  * spi_alloc_device - Allocate a new SPI device
+  * @ctlr: Controller to which device is connected
+@@ -638,9 +632,9 @@ int spi_add_device(struct spi_device *spi)
+ 	 * chipselect **BEFORE** we call setup(), else we'll trash
+ 	 * its configuration.  Lock against concurrent add() calls.
+ 	 */
+-	mutex_lock(&spi_add_lock);
++	mutex_lock(&ctlr->add_lock);
+ 	status = __spi_add_device(spi);
+-	mutex_unlock(&spi_add_lock);
++	mutex_unlock(&ctlr->add_lock);
+ 	return status;
+ }
+ EXPORT_SYMBOL_GPL(spi_add_device);
+@@ -660,7 +654,7 @@ static int spi_add_device_locked(struct spi_device *spi)
+ 	/* Set the bus ID string */
+ 	spi_dev_set_name(spi);
+ 
+-	WARN_ON(!mutex_is_locked(&spi_add_lock));
++	WARN_ON(!mutex_is_locked(&ctlr->add_lock));
+ 	return __spi_add_device(spi);
+ }
+ 
+@@ -2555,6 +2549,12 @@ struct spi_controller *__spi_alloc_controller(struct device *dev,
+ 		return NULL;
+ 
+ 	device_initialize(&ctlr->dev);
++	INIT_LIST_HEAD(&ctlr->queue);
++	spin_lock_init(&ctlr->queue_lock);
++	spin_lock_init(&ctlr->bus_lock_spinlock);
++	mutex_init(&ctlr->bus_lock_mutex);
++	mutex_init(&ctlr->io_mutex);
++	mutex_init(&ctlr->add_lock);
+ 	ctlr->bus_num = -1;
+ 	ctlr->num_chipselect = 1;
+ 	ctlr->slave = slave;
+@@ -2827,11 +2827,6 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 			return id;
+ 		ctlr->bus_num = id;
+ 	}
+-	INIT_LIST_HEAD(&ctlr->queue);
+-	spin_lock_init(&ctlr->queue_lock);
+-	spin_lock_init(&ctlr->bus_lock_spinlock);
+-	mutex_init(&ctlr->bus_lock_mutex);
+-	mutex_init(&ctlr->io_mutex);
+ 	ctlr->bus_lock_flag = 0;
+ 	init_completion(&ctlr->xfer_completion);
+ 	if (!ctlr->max_dma_len)
+@@ -2968,7 +2963,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 
+ 	/* Prevent addition of new devices, unregister existing ones */
+ 	if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
+-		mutex_lock(&spi_add_lock);
++		mutex_lock(&ctlr->add_lock);
+ 
+ 	device_for_each_child(&ctlr->dev, NULL, __unregister);
+ 
+@@ -2999,7 +2994,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 	mutex_unlock(&board_lock);
+ 
+ 	if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
+-		mutex_unlock(&spi_add_lock);
++		mutex_unlock(&ctlr->add_lock);
+ }
+ EXPORT_SYMBOL_GPL(spi_unregister_controller);
+ 
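
Taken together, the spi.c hunks replace the single global spi_add_lock with a
per-controller add_lock, and move all of the controller's lock initialization
to allocation time so the locks are valid before registration (which is also
what lets spi-mux raise their lockdep subclass). A rough userspace analogue
of that design, with pthreads standing in for struct mutex:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct controller {
	pthread_mutex_t add_lock;	/* was: one global spi_add_lock */
	int ndevices;
};

static struct controller *alloc_controller(void)
{
	struct controller *ctlr = calloc(1, sizeof(*ctlr));

	if (ctlr)	/* init at allocation, not at registration */
		pthread_mutex_init(&ctlr->add_lock, NULL);
	return ctlr;
}

static void add_device(struct controller *ctlr)
{
	pthread_mutex_lock(&ctlr->add_lock);
	ctlr->ndevices++;	/* serialized per controller, not globally */
	pthread_mutex_unlock(&ctlr->add_lock);
}

int main(void)
{
	struct controller *c = alloc_controller();

	if (!c)
		return 1;
	add_device(c);
	printf("devices: %d\n", c->ndevices);
	free(c);
	return 0;
}
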
+diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
+index da19d7987d005..78fd365893c13 100644
+--- a/drivers/thunderbolt/Makefile
++++ b/drivers/thunderbolt/Makefile
+@@ -7,6 +7,7 @@ thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o
+ thunderbolt-${CONFIG_ACPI} += acpi.o
+ thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
+ thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o
++CFLAGS_test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+ 
+ thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o
+ obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o
+diff --git a/fs/autofs/waitq.c b/fs/autofs/waitq.c
+index 16b5fca0626e6..54c1f8b8b0757 100644
+--- a/fs/autofs/waitq.c
++++ b/fs/autofs/waitq.c
+@@ -358,7 +358,7 @@ int autofs_wait(struct autofs_sb_info *sbi,
+ 		qstr.len = strlen(p);
+ 		offset = p - name;
+ 	}
+-	qstr.hash = full_name_hash(dentry, name, qstr.len);
++	qstr.hash = full_name_hash(dentry, qstr.name, qstr.len);
+ 
+ 	if (mutex_lock_interruptible(&sbi->wq_mutex)) {
+ 		kfree(name);
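
The autofs fix above is a pointer/length mismatch: the hash was computed over
the head of the full path (name) with the length of the last component
(qstr.len), instead of over qstr.name itself. A toy djb2 hash standing in for
full_name_hash shows the two calls disagreeing:

#include <stdio.h>
#include <string.h>

static unsigned long djb2(const char *s, size_t len)
{
	unsigned long h = 5381;

	while (len--)
		h = h * 33 + (unsigned char)*s++;
	return h;
}

int main(void)
{
	const char *name = "dir/leaf";
	const char *leaf = strrchr(name, '/') + 1;	/* qstr.name */
	size_t len = strlen(leaf);			/* qstr.len */

	/* buggy call hashes "dir/", fixed call hashes "leaf" */
	printf("buggy=%lu fixed=%lu\n", djb2(name, len), djb2(leaf, len));
	return 0;
}
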
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 17f0de5bb8733..539c5db2b22b8 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -939,9 +939,11 @@ out:
+ }
+ 
+ /*
+- * helper function to see if a given name and sequence number found
+- * in an inode back reference are already in a directory and correctly
+- * point to this inode
++ * See if a given name and sequence number found in an inode back reference are
++ * already in a directory and correctly point to this inode.
++ *
++ * Returns: < 0 on error, 0 if the directory entry does not exist, and 1 if it
++ * exists.
+  */
+ static noinline int inode_in_dir(struct btrfs_root *root,
+ 				 struct btrfs_path *path,
+@@ -950,29 +952,35 @@ static noinline int inode_in_dir(struct btrfs_root *root,
+ {
+ 	struct btrfs_dir_item *di;
+ 	struct btrfs_key location;
+-	int match = 0;
++	int ret = 0;
+ 
+ 	di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
+ 					 index, name, name_len, 0);
+-	if (di && !IS_ERR(di)) {
++	if (IS_ERR(di)) {
++		if (PTR_ERR(di) != -ENOENT)
++			ret = PTR_ERR(di);
++		goto out;
++	} else if (di) {
+ 		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+ 		if (location.objectid != objectid)
+ 			goto out;
+-	} else
++	} else {
+ 		goto out;
+-	btrfs_release_path(path);
++	}
+ 
++	btrfs_release_path(path);
+ 	di = btrfs_lookup_dir_item(NULL, root, path, dirid, name, name_len, 0);
+-	if (di && !IS_ERR(di)) {
+-		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+-		if (location.objectid != objectid)
+-			goto out;
+-	} else
++	if (IS_ERR(di)) {
++		ret = PTR_ERR(di);
+ 		goto out;
+-	match = 1;
++	} else if (di) {
++		btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
++		if (location.objectid == objectid)
++			ret = 1;
++	}
+ out:
+ 	btrfs_release_path(path);
+-	return match;
++	return ret;
+ }
+ 
+ /*
+@@ -1522,10 +1530,12 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		if (ret)
+ 			goto out;
+ 
+-		/* if we already have a perfect match, we're done */
+-		if (!inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
+-					btrfs_ino(BTRFS_I(inode)), ref_index,
+-					name, namelen)) {
++		ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
++				   btrfs_ino(BTRFS_I(inode)), ref_index,
++				   name, namelen);
++		if (ret < 0) {
++			goto out;
++		} else if (ret == 0) {
+ 			/*
+ 			 * look for a conflicting back reference in the
+ 			 * metadata. if we find one we have to unlink that name
+@@ -1585,6 +1595,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 			if (ret)
+ 				goto out;
+ 		}
++		/* Else, ret == 1, we already have a perfect match, we're done. */
+ 
+ 		ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;
+ 		kfree(name);
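
inode_in_dir() now follows the common kernel three-way convention: negative
for an error, 0 for "not found", 1 for "found" — and the caller has to test
the error case before treating the result as a boolean. A toy lookup with the
same contract (hypothetical table, -22 standing in for -EINVAL):

#include <stdio.h>

static int lookup(const int *tbl, int n, int key)
{
	if (!tbl)
		return -22;	/* error */
	for (int i = 0; i < n; i++)
		if (tbl[i] == key)
			return 1;	/* found */
	return 0;			/* not found */
}

int main(void)
{
	int tbl[] = { 3, 5, 8 };
	int ret = lookup(tbl, 3, 5);

	if (ret < 0)
		fprintf(stderr, "lookup failed: %d\n", ret);
	else if (ret == 0)
		puts("no match");
	else
		puts("match");
	return 0;
}
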
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 3296a93be907c..9290f97446c24 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2264,7 +2264,6 @@ static int unsafe_request_wait(struct inode *inode)
+ 
+ int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+-	struct ceph_file_info *fi = file->private_data;
+ 	struct inode *inode = file->f_mapping->host;
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 	u64 flush_tid;
+@@ -2299,14 +2298,9 @@ int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	if (err < 0)
+ 		ret = err;
+ 
+-	if (errseq_check(&ci->i_meta_err, READ_ONCE(fi->meta_err))) {
+-		spin_lock(&file->f_lock);
+-		err = errseq_check_and_advance(&ci->i_meta_err,
+-					       &fi->meta_err);
+-		spin_unlock(&file->f_lock);
+-		if (err < 0)
+-			ret = err;
+-	}
++	err = file_check_and_advance_wb_err(file);
++	if (err < 0)
++		ret = err;
+ out:
+ 	dout("fsync %p%s result=%d\n", inode, datasync ? " datasync" : "", ret);
+ 	return ret;
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 3daebfaec8c6d..8a50a74c80177 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -233,7 +233,6 @@ static int ceph_init_file_info(struct inode *inode, struct file *file,
+ 
+ 	spin_lock_init(&fi->rw_contexts_lock);
+ 	INIT_LIST_HEAD(&fi->rw_contexts);
+-	fi->meta_err = errseq_sample(&ci->i_meta_err);
+ 	fi->filp_gen = READ_ONCE(ceph_inode_to_client(inode)->filp_gen);
+ 
+ 	return 0;
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 1bd2cc015913f..4648a4b5d9c51 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -541,8 +541,6 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
+ 
+ 	ceph_fscache_inode_init(ci);
+ 
+-	ci->i_meta_err = 0;
+-
+ 	return &ci->vfs_inode;
+ }
+ 
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 52b3ddc5f1991..363f4f66b048f 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1479,7 +1479,6 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+ {
+ 	struct ceph_mds_request *req;
+ 	struct rb_node *p;
+-	struct ceph_inode_info *ci;
+ 
+ 	dout("cleanup_session_requests mds%d\n", session->s_mds);
+ 	mutex_lock(&mdsc->mutex);
+@@ -1488,16 +1487,10 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+ 				       struct ceph_mds_request, r_unsafe_item);
+ 		pr_warn_ratelimited(" dropping unsafe request %llu\n",
+ 				    req->r_tid);
+-		if (req->r_target_inode) {
+-			/* dropping unsafe change of inode's attributes */
+-			ci = ceph_inode(req->r_target_inode);
+-			errseq_set(&ci->i_meta_err, -EIO);
+-		}
+-		if (req->r_unsafe_dir) {
+-			/* dropping unsafe directory operation */
+-			ci = ceph_inode(req->r_unsafe_dir);
+-			errseq_set(&ci->i_meta_err, -EIO);
+-		}
++		if (req->r_target_inode)
++			mapping_set_error(req->r_target_inode->i_mapping, -EIO);
++		if (req->r_unsafe_dir)
++			mapping_set_error(req->r_unsafe_dir->i_mapping, -EIO);
+ 		__unregister_request(mdsc, req);
+ 	}
+ 	/* zero r_attempts, so kick_requests() will re-send requests */
+@@ -1664,7 +1657,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		spin_unlock(&mdsc->cap_dirty_lock);
+ 
+ 		if (dirty_dropped) {
+-			errseq_set(&ci->i_meta_err, -EIO);
++			mapping_set_error(inode->i_mapping, -EIO);
+ 
+ 			if (ci->i_wrbuffer_ref_head == 0 &&
+ 			    ci->i_wr_ref == 0 &&
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 9b1b7f4cfdd4b..fd8742bae8471 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -1002,16 +1002,16 @@ static int ceph_compare_super(struct super_block *sb, struct fs_context *fc)
+ 	struct ceph_fs_client *new = fc->s_fs_info;
+ 	struct ceph_mount_options *fsopt = new->mount_options;
+ 	struct ceph_options *opt = new->client->options;
+-	struct ceph_fs_client *other = ceph_sb_to_client(sb);
++	struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
+ 
+ 	dout("ceph_compare_super %p\n", sb);
+ 
+-	if (compare_mount_options(fsopt, opt, other)) {
++	if (compare_mount_options(fsopt, opt, fsc)) {
+ 		dout("monitor(s)/mount options don't match\n");
+ 		return 0;
+ 	}
+ 	if ((opt->flags & CEPH_OPT_FSID) &&
+-	    ceph_fsid_compare(&opt->fsid, &other->client->fsid)) {
++	    ceph_fsid_compare(&opt->fsid, &fsc->client->fsid)) {
+ 		dout("fsid doesn't match\n");
+ 		return 0;
+ 	}
+@@ -1019,6 +1019,17 @@ static int ceph_compare_super(struct super_block *sb, struct fs_context *fc)
+ 		dout("flags differ\n");
+ 		return 0;
+ 	}
++
++	if (fsc->blocklisted && !ceph_test_mount_opt(fsc, CLEANRECOVER)) {
++		dout("client is blocklisted (and CLEANRECOVER is not set)\n");
++		return 0;
++	}
++
++	if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) {
++		dout("client has been forcibly unmounted\n");
++		return 0;
++	}
++
+ 	return 1;
+ }
+ 
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 2200ed76b1230..3f96d2ff91ef2 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -430,8 +430,6 @@ struct ceph_inode_info {
+ #ifdef CONFIG_CEPH_FSCACHE
+ 	struct fscache_cookie *fscache;
+ #endif
+-	errseq_t i_meta_err;
+-
+ 	struct inode vfs_inode; /* at end */
+ };
+ 
+@@ -775,7 +773,6 @@ struct ceph_file_info {
+ 	spinlock_t rw_contexts_lock;
+ 	struct list_head rw_contexts;
+ 
+-	errseq_t meta_err;
+ 	u32 filp_gen;
+ 	atomic_t num_locks;
+ };
+diff --git a/fs/kernel_read_file.c b/fs/kernel_read_file.c
+index 87aac4c72c37d..1b07550485b96 100644
+--- a/fs/kernel_read_file.c
++++ b/fs/kernel_read_file.c
+@@ -178,7 +178,7 @@ int kernel_read_file_from_fd(int fd, loff_t offset, void **buf,
+ 	struct fd f = fdget(fd);
+ 	int ret = -EBADF;
+ 
+-	if (!f.file)
++	if (!f.file || !(f.file->f_mode & FMODE_READ))
+ 		goto out;
+ 
+ 	ret = kernel_read_file(f.file, offset, buf, buf_size, file_size, id);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 09ae1a0873d05..070e5dd03e26f 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -793,7 +793,10 @@ out_close:
+ 		svc_xprt_put(xprt);
+ 	}
+ out_err:
+-	nfsd_destroy(net);
++	if (!list_empty(&nn->nfsd_serv->sv_permsocks))
++		nn->nfsd_serv->sv_nrthreads--;
++	else
++		nfsd_destroy(net);
+ 	return err;
+ }
+ 
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index f1cc8258d34a4..5d9ae17bd443f 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -7045,7 +7045,7 @@ void ocfs2_set_inode_data_inline(struct inode *inode, struct ocfs2_dinode *di)
+ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 					 struct buffer_head *di_bh)
+ {
+-	int ret, i, has_data, num_pages = 0;
++	int ret, has_data, num_pages = 0;
+ 	int need_free = 0;
+ 	u32 bit_off, num;
+ 	handle_t *handle;
+@@ -7054,26 +7054,17 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	struct ocfs2_alloc_context *data_ac = NULL;
+-	struct page **pages = NULL;
+-	loff_t end = osb->s_clustersize;
++	struct page *page = NULL;
+ 	struct ocfs2_extent_tree et;
+ 	int did_quota = 0;
+ 
+ 	has_data = i_size_read(inode) ? 1 : 0;
+ 
+ 	if (has_data) {
+-		pages = kcalloc(ocfs2_pages_per_cluster(osb->sb),
+-				sizeof(struct page *), GFP_NOFS);
+-		if (pages == NULL) {
+-			ret = -ENOMEM;
+-			mlog_errno(ret);
+-			return ret;
+-		}
+-
+ 		ret = ocfs2_reserve_clusters(osb, 1, &data_ac);
+ 		if (ret) {
+ 			mlog_errno(ret);
+-			goto free_pages;
++			goto out;
+ 		}
+ 	}
+ 
+@@ -7093,7 +7084,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	}
+ 
+ 	if (has_data) {
+-		unsigned int page_end;
++		unsigned int page_end = min_t(unsigned, PAGE_SIZE,
++							osb->s_clustersize);
+ 		u64 phys;
+ 
+ 		ret = dquot_alloc_space_nodirty(inode,
+@@ -7117,15 +7109,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 		 */
+ 		block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off);
+ 
+-		/*
+-		 * Non sparse file systems zero on extend, so no need
+-		 * to do that now.
+-		 */
+-		if (!ocfs2_sparse_alloc(osb) &&
+-		    PAGE_SIZE < osb->s_clustersize)
+-			end = PAGE_SIZE;
+-
+-		ret = ocfs2_grab_eof_pages(inode, 0, end, pages, &num_pages);
++		ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page,
++					   &num_pages);
+ 		if (ret) {
+ 			mlog_errno(ret);
+ 			need_free = 1;
+@@ -7136,20 +7121,15 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 		 * This should populate the 1st page for us and mark
+ 		 * it up to date.
+ 		 */
+-		ret = ocfs2_read_inline_data(inode, pages[0], di_bh);
++		ret = ocfs2_read_inline_data(inode, page, di_bh);
+ 		if (ret) {
+ 			mlog_errno(ret);
+ 			need_free = 1;
+ 			goto out_unlock;
+ 		}
+ 
+-		page_end = PAGE_SIZE;
+-		if (PAGE_SIZE > osb->s_clustersize)
+-			page_end = osb->s_clustersize;
+-
+-		for (i = 0; i < num_pages; i++)
+-			ocfs2_map_and_dirty_page(inode, handle, 0, page_end,
+-						 pages[i], i > 0, &phys);
++		ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0,
++					 &phys);
+ 	}
+ 
+ 	spin_lock(&oi->ip_lock);
+@@ -7180,8 +7160,8 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
+ 	}
+ 
+ out_unlock:
+-	if (pages)
+-		ocfs2_unlock_and_free_pages(pages, num_pages);
++	if (page)
++		ocfs2_unlock_and_free_pages(&page, num_pages);
+ 
+ out_commit:
+ 	if (ret < 0 && did_quota)
+@@ -7205,8 +7185,6 @@ out_commit:
+ out:
+ 	if (data_ac)
+ 		ocfs2_free_alloc_context(data_ac);
+-free_pages:
+-	kfree(pages);
+ 	return ret;
+ }
+ 
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index c86bd4e60e207..5c914ce9b3ac9 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2167,11 +2167,17 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ 	}
+ 
+ 	if (ocfs2_clusterinfo_valid(osb)) {
++		/*
++		 * ci_stack and ci_cluster in ocfs2_cluster_info may not be null
++		 * terminated, so make sure no overflow happens here by using
++		 * memcpy. Destination strings will always be null terminated
++		 * because osb is allocated using kzalloc.
++		 */
+ 		osb->osb_stackflags =
+ 			OCFS2_RAW_SB(di)->s_cluster_info.ci_stackflags;
+-		strlcpy(osb->osb_cluster_stack,
++		memcpy(osb->osb_cluster_stack,
+ 		       OCFS2_RAW_SB(di)->s_cluster_info.ci_stack,
+-		       OCFS2_STACK_LABEL_LEN + 1);
++		       OCFS2_STACK_LABEL_LEN);
+ 		if (strlen(osb->osb_cluster_stack) != OCFS2_STACK_LABEL_LEN) {
+ 			mlog(ML_ERROR,
+ 			     "couldn't mount because of an invalid "
+@@ -2180,9 +2186,9 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ 			status = -EINVAL;
+ 			goto bail;
+ 		}
+-		strlcpy(osb->osb_cluster_name,
++		memcpy(osb->osb_cluster_name,
+ 			OCFS2_RAW_SB(di)->s_cluster_info.ci_cluster,
+-			OCFS2_CLUSTER_NAME_LEN + 1);
++			OCFS2_CLUSTER_NAME_LEN);
+ 	} else {
+ 		/* The empty string is identical with classic tools that
+ 		 * don't know about s_cluster_info. */
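
The ocfs2 change above trades strlcpy() for memcpy() because strlcpy() always
walks the source until it finds a NUL (to compute its return value), and
these on-disk labels are fixed-width fields with no terminator guarantee.
Copying exactly the field width into a zero-initialized destination reads
nothing past the field. A small sketch with an illustrative 4-byte label:

#include <stdio.h>
#include <string.h>

#define LABEL_LEN 4

int main(void)
{
	char raw[LABEL_LEN] = { 'o', '2', 'c', 'b' };	/* no NUL */
	char out[LABEL_LEN + 1] = { 0 };	/* zeroed, like kzalloc */

	memcpy(out, raw, LABEL_LEN);	/* bounded by the field width */
	printf("stack label: %s (len=%zu)\n", out, strlen(out));
	return 0;
}
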
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index c830cc4ea60fb..a87af3333739d 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1826,9 +1826,15 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
+ 	if (mode_wp && mode_dontwake)
+ 		return -EINVAL;
+ 
+-	ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
+-				  uffdio_wp.range.len, mode_wp,
+-				  &ctx->mmap_changing);
++	if (mmget_not_zero(ctx->mm)) {
++		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
++					  uffdio_wp.range.len, mode_wp,
++					  &ctx->mmap_changing);
++		mmput(ctx->mm);
++	} else {
++		return -ESRCH;
++	}
++
+ 	if (ret)
+ 		return ret;
+ 
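
The userfaultfd fix wraps the write-protect work in mmget_not_zero()/mmput()
so the ioctl never operates on an mm whose user count has already dropped to
zero. The same "take a reference only if one still exists" guard, written as
a generic C11-atomics sketch rather than the kernel API:

#include <stdatomic.h>
#include <stdio.h>

struct obj {
	atomic_int users;
};

static int get_not_zero(struct obj *o)
{
	int v = atomic_load(&o->users);

	while (v != 0)	/* never resurrect a zero refcount */
		if (atomic_compare_exchange_weak(&o->users, &v, v + 1))
			return 1;
	return 0;	/* caller reports -ESRCH */
}

static void put_ref(struct obj *o)
{
	atomic_fetch_sub(&o->users, 1);
}

int main(void)
{
	struct obj mm = { 1 };

	if (get_not_zero(&mm)) {
		puts("safe to touch mm");	/* do the real work here */
		put_ref(&mm);
	}
	return 0;
}
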
+diff --git a/include/linux/elfcore.h b/include/linux/elfcore.h
+index 2aaa15779d50b..957ebec35aad0 100644
+--- a/include/linux/elfcore.h
++++ b/include/linux/elfcore.h
+@@ -109,7 +109,7 @@ static inline int elf_core_copy_task_fpregs(struct task_struct *t, struct pt_reg
+ #endif
+ }
+ 
+-#if defined(CONFIG_UM) || defined(CONFIG_IA64)
++#if (defined(CONFIG_UML) && defined(CONFIG_X86_32)) || defined(CONFIG_IA64)
+ /*
+  * These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out
+  * extra segments containing the gate DSO contents.  Dumping its
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 25a8be58d2895..9b8add8eac0cb 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1135,7 +1135,6 @@ int mlx5_cmd_create_vport_lag(struct mlx5_core_dev *dev);
+ int mlx5_cmd_destroy_vport_lag(struct mlx5_core_dev *dev);
+ bool mlx5_lag_is_roce(struct mlx5_core_dev *dev);
+ bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev);
+-bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev);
+ bool mlx5_lag_is_active(struct mlx5_core_dev *dev);
+ struct net_device *mlx5_lag_get_roce_netdev(struct mlx5_core_dev *dev);
+ u8 mlx5_lag_get_slave_port(struct mlx5_core_dev *dev,
+diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
+index 21c3771e6a56b..988528b5da438 100644
+--- a/include/linux/secretmem.h
++++ b/include/linux/secretmem.h
+@@ -23,7 +23,7 @@ static inline bool page_is_secretmem(struct page *page)
+ 	mapping = (struct address_space *)
+ 		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
+ 
+-	if (mapping != page->mapping)
++	if (!mapping || mapping != page->mapping)
+ 		return false;
+ 
+ 	return mapping->a_ops == &secretmem_aops;
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index 97b8d12b5f2bb..5d80c6fd2a223 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -527,6 +527,9 @@ struct spi_controller {
+ 	/* I/O mutex */
+ 	struct mutex		io_mutex;
+ 
++	/* Used to avoid adding the same CS twice */
++	struct mutex		add_lock;
++
+ 	/* lock and mutex for SPI bus locking */
+ 	spinlock_t		bus_lock_spinlock;
+ 	struct mutex		bus_lock_mutex;
+diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
+index a9f9c5714e650..fe95f09225266 100644
+--- a/include/linux/trace_recursion.h
++++ b/include/linux/trace_recursion.h
+@@ -16,23 +16,8 @@
+  *  When function tracing occurs, the following steps are made:
+  *   If arch does not support a ftrace feature:
+  *    call internal function (uses INTERNAL bits) which calls...
+- *   If callback is registered to the "global" list, the list
+- *    function is called and recursion checks the GLOBAL bits.
+- *    then this function calls...
+  *   The function callback, which can use the FTRACE bits to
+  *    check for recursion.
+- *
+- * Now if the arch does not support a feature, and it calls
+- * the global list function which calls the ftrace callback
+- * all three of these steps will do a recursion protection.
+- * There's no reason to do one if the previous caller already
+- * did. The recursion that we are protecting against will
+- * go through the same steps again.
+- *
+- * To prevent the multiple recursion checks, if a recursion
+- * bit is set that is higher than the MAX bit of the current
+- * check, then we know that the check was made by the previous
+- * caller, and we can skip the current check.
+  */
+ enum {
+ 	/* Function recursion bits */
+@@ -40,12 +25,14 @@ enum {
+ 	TRACE_FTRACE_NMI_BIT,
+ 	TRACE_FTRACE_IRQ_BIT,
+ 	TRACE_FTRACE_SIRQ_BIT,
++	TRACE_FTRACE_TRANSITION_BIT,
+ 
+-	/* INTERNAL_BITs must be greater than FTRACE_BITs */
++	/* Internal use recursion bits */
+ 	TRACE_INTERNAL_BIT,
+ 	TRACE_INTERNAL_NMI_BIT,
+ 	TRACE_INTERNAL_IRQ_BIT,
+ 	TRACE_INTERNAL_SIRQ_BIT,
++	TRACE_INTERNAL_TRANSITION_BIT,
+ 
+ 	TRACE_BRANCH_BIT,
+ /*
+@@ -86,12 +73,6 @@ enum {
+ 	 */
+ 	TRACE_GRAPH_NOTRACE_BIT,
+ 
+-	/*
+-	 * When transitioning between context, the preempt_count() may
+-	 * not be correct. Allow for a single recursion to cover this case.
+-	 */
+-	TRACE_TRANSITION_BIT,
+-
+ 	/* Used to prevent recursion recording from recursing. */
+ 	TRACE_RECORD_RECURSION_BIT,
+ };
+@@ -113,12 +94,10 @@ enum {
+ #define TRACE_CONTEXT_BITS	4
+ 
+ #define TRACE_FTRACE_START	TRACE_FTRACE_BIT
+-#define TRACE_FTRACE_MAX	((1 << (TRACE_FTRACE_START + TRACE_CONTEXT_BITS)) - 1)
+ 
+ #define TRACE_LIST_START	TRACE_INTERNAL_BIT
+-#define TRACE_LIST_MAX		((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
+ 
+-#define TRACE_CONTEXT_MASK	TRACE_LIST_MAX
++#define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
+ 
+ /*
+  * Used for setting context
+@@ -132,6 +111,7 @@ enum {
+ 	TRACE_CTX_IRQ,
+ 	TRACE_CTX_SOFTIRQ,
+ 	TRACE_CTX_NORMAL,
++	TRACE_CTX_TRANSITION,
+ };
+ 
+ static __always_inline int trace_get_context_bit(void)
+@@ -160,45 +140,34 @@ extern void ftrace_record_recursion(unsigned long ip, unsigned long parent_ip);
+ #endif
+ 
+ static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
+-							int start, int max)
++							int start)
+ {
+ 	unsigned int val = READ_ONCE(current->trace_recursion);
+ 	int bit;
+ 
+-	/* A previous recursion check was made */
+-	if ((val & TRACE_CONTEXT_MASK) > max)
+-		return 0;
+-
+ 	bit = trace_get_context_bit() + start;
+ 	if (unlikely(val & (1 << bit))) {
+ 		/*
+ 		 * It could be that preempt_count has not been updated during
+ 		 * a switch between contexts. Allow for a single recursion.
+ 		 */
+-		bit = TRACE_TRANSITION_BIT;
++		bit = TRACE_CTX_TRANSITION + start;
+ 		if (val & (1 << bit)) {
+ 			do_ftrace_record_recursion(ip, pip);
+ 			return -1;
+ 		}
+-	} else {
+-		/* Normal check passed, clear the transition to allow it again */
+-		val &= ~(1 << TRACE_TRANSITION_BIT);
+ 	}
+ 
+ 	val |= 1 << bit;
+ 	current->trace_recursion = val;
+ 	barrier();
+ 
+-	return bit + 1;
++	return bit;
+ }
+ 
+ static __always_inline void trace_clear_recursion(int bit)
+ {
+-	if (!bit)
+-		return;
+-
+ 	barrier();
+-	bit--;
+ 	trace_recursion_clear(bit);
+ }
+ 
+@@ -214,7 +183,7 @@ static __always_inline void trace_clear_recursion(int bit)
+ static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
+ 							 unsigned long parent_ip)
+ {
+-	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
++	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
+ }
+ 
+ /**
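
The trace_recursion rework above folds the single global transition bit into one transition bit per bit class, so exactly one extra nesting level is tolerated per class instead of sharing a global escape hatch. A simplified userspace sketch of the test-and-set scheme (the __thread mask and names are illustrative, not the kernel's):

    enum ctx { CTX_NMI, CTX_IRQ, CTX_SOFTIRQ, CTX_NORMAL, CTX_TRANSITION };

    static __thread unsigned long recursion;    /* one bit per (start, ctx) */

    /* Claim the bit for this context; returns the bit, or -1 on recursion. */
    static int recursion_trylock(int start, enum ctx c)
    {
            int bit = start + c;

            if (recursion & (1UL << bit)) {
                    /* Context bookkeeping can lag across a switch, so one
                     * extra level goes through the class transition bit. */
                    bit = start + CTX_TRANSITION;
                    if (recursion & (1UL << bit))
                            return -1;          /* genuine recursion */
            }
            recursion |= 1UL << bit;
            return bit;
    }

    static void recursion_unlock(int bit)
    {
            recursion &= ~(1UL << bit);
    }
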
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index eb70cabe6e7f2..33a4240e6a6f1 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -127,6 +127,8 @@ static inline long get_ucounts_value(struct ucounts *ucounts, enum ucount_type t
+ 
+ long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v);
+ bool dec_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v);
++long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type);
++void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type);
+ bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max);
+ 
+ static inline void set_rlimit_ucount_max(struct user_namespace *ns,
+diff --git a/include/net/sctp/sm.h b/include/net/sctp/sm.h
+index 2eb6d7c2c9310..f37c7a558d6dd 100644
+--- a/include/net/sctp/sm.h
++++ b/include/net/sctp/sm.h
+@@ -384,11 +384,11 @@ sctp_vtag_verify(const struct sctp_chunk *chunk,
+ 	 * Verification Tag value does not match the receiver's own
+ 	 * tag value, the receiver shall silently discard the packet...
+ 	 */
+-        if (ntohl(chunk->sctp_hdr->vtag) == asoc->c.my_vtag)
+-                return 1;
++	if (ntohl(chunk->sctp_hdr->vtag) != asoc->c.my_vtag)
++		return 0;
+ 
+ 	chunk->transport->encap_port = SCTP_INPUT_CB(chunk->skb)->encap_port;
+-	return 0;
++	return 1;
+ }
+ 
+ /* Check VTAG of the packet matches the sender's own tag and the T bit is
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index 2e8d51937acdf..47d2cad21b56a 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -225,6 +225,7 @@ struct hda_codec {
+ #endif
+ 
+ 	/* misc flags */
++	unsigned int configured:1; /* codec was configured */
+ 	unsigned int in_freeing:1; /* being released */
+ 	unsigned int registered:1; /* codec was registered */
+ 	unsigned int display_power_control:1; /* needs display power */
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 8dd73a64f9216..b1cb1dbf7417f 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -657,7 +657,7 @@ static int audit_filter_rules(struct task_struct *tsk,
+ 			result = audit_comparator(audit_loginuid_set(tsk), f->op, f->val);
+ 			break;
+ 		case AUDIT_SADDR_FAM:
+-			if (ctx->sockaddr)
++			if (ctx && ctx->sockaddr)
+ 				result = audit_comparator(ctx->sockaddr->ss_family,
+ 							  f->op, f->val);
+ 			break;
+diff --git a/kernel/cred.c b/kernel/cred.c
+index f784e08c2fbd6..1ae0b4948a5a8 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -225,8 +225,6 @@ struct cred *cred_alloc_blank(void)
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	new->magic = CRED_MAGIC;
+ #endif
+-	new->ucounts = get_ucounts(&init_ucounts);
+-
+ 	if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+ 
+@@ -501,7 +499,7 @@ int commit_creds(struct cred *new)
+ 		inc_rlimit_ucounts(new->ucounts, UCOUNT_RLIMIT_NPROC, 1);
+ 	rcu_assign_pointer(task->real_cred, new);
+ 	rcu_assign_pointer(task->cred, new);
+-	if (new->user != old->user)
++	if (new->user != old->user || new->user_ns != old->user_ns)
+ 		dec_rlimit_ucounts(old->ucounts, UCOUNT_RLIMIT_NPROC, 1);
+ 	alter_cred_subscribers(old, -2);
+ 
+@@ -669,7 +667,7 @@ int set_cred_ucounts(struct cred *new)
+ {
+ 	struct task_struct *task = current;
+ 	const struct cred *old = task->real_cred;
+-	struct ucounts *old_ucounts = new->ucounts;
++	struct ucounts *new_ucounts, *old_ucounts = new->ucounts;
+ 
+ 	if (new->user == old->user && new->user_ns == old->user_ns)
+ 		return 0;
+@@ -681,9 +679,10 @@ int set_cred_ucounts(struct cred *new)
+ 	if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
+ 		return 0;
+ 
+-	if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
++	if (!(new_ucounts = alloc_ucounts(new->user_ns, new->euid)))
+ 		return -EAGAIN;
+ 
++	new->ucounts = new_ucounts;
+ 	if (old_ucounts)
+ 		put_ucounts(old_ucounts);
+ 
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 70519f67556f9..fad3c77c1da17 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -1299,6 +1299,12 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 	if (unlikely(dma_debug_disabled()))
+ 		return;
+ 
++	for_each_sg(sg, s, nents, i) {
++		check_for_stack(dev, sg_page(s), s->offset);
++		if (!PageHighMem(sg_page(s)))
++			check_for_illegal_area(dev, sg_virt(s), s->length);
++	}
++
+ 	for_each_sg(sg, s, mapped_ents, i) {
+ 		entry = dma_entry_alloc();
+ 		if (!entry)
+@@ -1314,12 +1320,6 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 		entry->sg_call_ents   = nents;
+ 		entry->sg_mapped_ents = mapped_ents;
+ 
+-		check_for_stack(dev, sg_page(s), s->offset);
+-
+-		if (!PageHighMem(sg_page(s))) {
+-			check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s));
+-		}
+-
+ 		check_sg_segment(dev, s);
+ 
+ 		add_dma_entry(entry);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 399c37c95392e..e165d28cf73be 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8495,6 +8495,7 @@ void idle_task_exit(void)
+ 		finish_arch_post_lock_switch();
+ 	}
+ 
++	scs_task_reset(current);
+ 	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index a3229add44554..13d2505a14a0e 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -425,22 +425,10 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t gfp_flags,
+ 	 */
+ 	rcu_read_lock();
+ 	ucounts = task_ucounts(t);
+-	sigpending = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
+-	switch (sigpending) {
+-	case 1:
+-		if (likely(get_ucounts(ucounts)))
+-			break;
+-		fallthrough;
+-	case LONG_MAX:
+-		/*
+-		 * we need to decrease the ucount in the userns tree on any
+-		 * failure to avoid counts leaking.
+-		 */
+-		dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
+-		rcu_read_unlock();
+-		return NULL;
+-	}
++	sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
+ 	rcu_read_unlock();
++	if (!sigpending)
++		return NULL;
+ 
+ 	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
+ 		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
+@@ -449,8 +437,7 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t gfp_flags,
+ 	}
+ 
+ 	if (unlikely(q == NULL)) {
+-		if (dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1))
+-			put_ucounts(ucounts);
++		dec_rlimit_put_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
+ 	} else {
+ 		INIT_LIST_HEAD(&q->list);
+ 		q->flags = sigqueue_flags;
+@@ -463,8 +450,8 @@ static void __sigqueue_free(struct sigqueue *q)
+ {
+ 	if (q->flags & SIGQUEUE_PREALLOC)
+ 		return;
+-	if (q->ucounts && dec_rlimit_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING, 1)) {
+-		put_ucounts(q->ucounts);
++	if (q->ucounts) {
++		dec_rlimit_put_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING);
+ 		q->ucounts = NULL;
+ 	}
+ 	kmem_cache_free(sigqueue_cachep, q);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 7b180f61e6d3c..06700d5b11717 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -6977,7 +6977,7 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+ 	struct ftrace_ops *op;
+ 	int bit;
+ 
+-	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
++	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
+ 	if (bit < 0)
+ 		return;
+ 
+@@ -7052,7 +7052,7 @@ static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
+ {
+ 	int bit;
+ 
+-	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
++	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
+ 	if (bit < 0)
+ 		return;
+ 
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index bb51849e63752..eb03f3c68375d 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -284,6 +284,55 @@ bool dec_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v)
+ 	return (new == 0);
+ }
+ 
++static void do_dec_rlimit_put_ucounts(struct ucounts *ucounts,
++				struct ucounts *last, enum ucount_type type)
++{
++	struct ucounts *iter, *next;
++	for (iter = ucounts; iter != last; iter = next) {
++		long dec = atomic_long_add_return(-1, &iter->ucount[type]);
++		WARN_ON_ONCE(dec < 0);
++		next = iter->ns->ucounts;
++		if (dec == 0)
++			put_ucounts(iter);
++	}
++}
++
++void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type)
++{
++	do_dec_rlimit_put_ucounts(ucounts, NULL, type);
++}
++
++long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type)
++{
++	/* Caller must hold a reference to ucounts */
++	struct ucounts *iter;
++	long dec, ret = 0;
++
++	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
++		long max = READ_ONCE(iter->ns->ucount_max[type]);
++		long new = atomic_long_add_return(1, &iter->ucount[type]);
++		if (new < 0 || new > max)
++			goto unwind;
++		if (iter == ucounts)
++			ret = new;
++		/*
++		 * Grab an extra ucount reference for the caller when
++		 * the rlimit count was previously 0.
++		 */
++		if (new != 1)
++			continue;
++		if (!get_ucounts(iter))
++			goto dec_unwind;
++	}
++	return ret;
++dec_unwind:
++	dec = atomic_long_add_return(-1, &iter->ucount[type]);
++	WARN_ON_ONCE(dec < 0);
++unwind:
++	do_dec_rlimit_put_ucounts(ucounts, iter, type);
++	return 0;
++}
++
+ bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max)
+ {
+ 	struct ucounts *iter;
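
inc_rlimit_get_ucounts() above charges every level of the namespace hierarchy and, on overflow, unwinds only the levels already charged before the failing one. A single-threaded sketch of that charge/unwind walk (the kernel version uses atomics and takes a reference per level):

    struct counter {
            struct counter *parent;
            long count, max;
    };

    /* Charge one unit at each level; returns the leaf's new value,
     * or 0 after rolling back exactly what was taken. */
    static long charge_hierarchy(struct counter *leaf)
    {
            struct counter *iter, *failed = NULL;
            long ret = 0;

            for (iter = leaf; iter; iter = iter->parent) {
                    if (++iter->count > iter->max) {
                            failed = iter;
                            break;
                    }
                    if (iter == leaf)
                            ret = iter->count;
            }
            if (!failed)
                    return ret;

            failed->count--;               /* undo the level that overflowed */
            for (iter = leaf; iter != failed; iter = iter->parent)
                    iter->count--;         /* undo everything below it */
            return 0;
    }
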
+diff --git a/lib/Makefile b/lib/Makefile
+index 5efd1b435a37c..a841be5244ac6 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -351,7 +351,7 @@ obj-$(CONFIG_OBJAGG) += objagg.o
+ obj-$(CONFIG_PLDMFW) += pldmfw/
+ 
+ # KUnit tests
+-CFLAGS_bitfield_kunit.o := $(call cc-option,-Wframe-larger-than=10240)
++CFLAGS_bitfield_kunit.o := $(DISABLE_STRUCTLEAK_PLUGIN)
+ obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
+ obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
+ obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
+diff --git a/lib/kunit/executor_test.c b/lib/kunit/executor_test.c
+index cdbe54b165017..e14a18af573dd 100644
+--- a/lib/kunit/executor_test.c
++++ b/lib/kunit/executor_test.c
+@@ -116,8 +116,8 @@ static void kfree_at_end(struct kunit *test, const void *to_free)
+ 	/* kfree() handles NULL already, but avoid allocating a no-op cleanup. */
+ 	if (IS_ERR_OR_NULL(to_free))
+ 		return;
+-	kunit_alloc_and_get_resource(test, NULL, kfree_res_free, GFP_KERNEL,
+-				     (void *)to_free);
++	kunit_alloc_resource(test, NULL, kfree_res_free, GFP_KERNEL,
++			     (void *)to_free);
+ }
+ 
+ static struct kunit_suite *alloc_fake_suite(struct kunit *test,
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index afff3ac870673..163c2da2a6548 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2724,12 +2724,14 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ 		if (mapping) {
+ 			int nr = thp_nr_pages(head);
+ 
+-			if (PageSwapBacked(head))
++			if (PageSwapBacked(head)) {
+ 				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+ 							-nr);
+-			else
++			} else {
+ 				__mod_lruvec_page_state(head, NR_FILE_THPS,
+ 							-nr);
++				filemap_nr_thps_dec(mapping);
++			}
+ 		}
+ 
+ 		__split_huge_page(page, list, end);
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 54f6eaff18c52..aeebe8d11ad1f 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -857,16 +857,6 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
+ 		goto out;
+ 	}
+ 
+-	if (flags & MPOL_F_NUMA_BALANCING) {
+-		if (new && new->mode == MPOL_BIND) {
+-			new->flags |= (MPOL_F_MOF | MPOL_F_MORON);
+-		} else {
+-			ret = -EINVAL;
+-			mpol_put(new);
+-			goto out;
+-		}
+-	}
+-
+ 	ret = mpol_set_nodemask(new, nodes, scratch);
+ 	if (ret) {
+ 		mpol_put(new);
+@@ -1450,7 +1440,11 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
+ 		return -EINVAL;
+ 	if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
+ 		return -EINVAL;
+-
++	if (*flags & MPOL_F_NUMA_BALANCING) {
++		if (*mode != MPOL_BIND)
++			return -EINVAL;
++		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index f77d8cd79ef7f..1d2587d9dbfda 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1629,7 +1629,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
+ }
+ 
+ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+-					   void **head, void **tail)
++					   void **head, void **tail,
++					   int *cnt)
+ {
+ 
+ 	void *object;
+@@ -1656,6 +1657,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+ 			*head = object;
+ 			if (!*tail)
+ 				*tail = object;
++		} else {
++			/*
++			 * Adjust the reconstructed freelist depth
++			 * accordingly if object's reuse is delayed.
++			 */
++			--(*cnt);
+ 		}
+ 	} while (object != old_tail);
+ 
+@@ -3166,7 +3173,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
+ 	struct kmem_cache_cpu *c;
+ 	unsigned long tid;
+ 
+-	memcg_slab_free_hook(s, &head, 1);
++	/* memcg_slab_free_hook() is already called for bulk free. */
++	if (!tail)
++		memcg_slab_free_hook(s, &head, 1);
+ redo:
+ 	/*
+ 	 * Determine the currently cpus per cpu slab.
+@@ -3210,7 +3219,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
+ 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
+ 	 * to remove objects, whose reuse must be delayed.
+ 	 */
+-	if (slab_free_freelist_hook(s, &head, &tail))
++	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
+ 		do_slab_free(s, page, head, tail, cnt, addr);
+ }
+ 
+@@ -3928,8 +3937,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+ 	if (alloc_kmem_cache_cpus(s))
+ 		return 0;
+ 
+-	free_kmem_cache_nodes(s);
+ error:
++	__kmem_cache_release(s);
+ 	return -EINVAL;
+ }
+ 
+@@ -4597,13 +4606,15 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
+ 		return 0;
+ 
+ 	err = sysfs_slab_add(s);
+-	if (err)
++	if (err) {
+ 		__kmem_cache_release(s);
++		return err;
++	}
+ 
+ 	if (s->flags & SLAB_STORE_USER)
+ 		debugfs_slab_add(s);
+ 
+-	return err;
++	return 0;
+ }
+ 
+ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
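
The slub hunks above thread a count through slab_free_freelist_hook() so the freelist length handed to do_slab_free() shrinks whenever an object's release is deferred. The underlying list-plus-count discipline, as a sketch:

    struct object { struct object *next; };

    /* Rebuild a list, withholding objects the hook defers; *cnt must
     * track the surviving length or the consumer's accounting breaks. */
    static struct object *filter_list(struct object *head, int *cnt,
                                      int (*defer)(struct object *))
    {
            struct object *out = NULL, *next;

            for (; head; head = next) {
                    next = head->next;
                    if (defer(head)) {
                            --(*cnt);      /* object withheld: count follows */
                            continue;
                    }
                    head->next = out;
                    out = head;
            }
            return out;
    }
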
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index caa16bf30fb55..a63f1cc8568bc 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -552,6 +552,12 @@ static void convert_skb_to___skb(struct sk_buff *skb, struct __sk_buff *__skb)
+ 	__skb->gso_segs = skb_shinfo(skb)->gso_segs;
+ }
+ 
++static struct proto bpf_dummy_proto = {
++	.name   = "bpf_dummy",
++	.owner  = THIS_MODULE,
++	.obj_size = sizeof(struct sock),
++};
++
+ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 			  union bpf_attr __user *uattr)
+ {
+@@ -596,20 +602,19 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 		break;
+ 	}
+ 
+-	sk = kzalloc(sizeof(struct sock), GFP_USER);
++	sk = sk_alloc(net, AF_UNSPEC, GFP_USER, &bpf_dummy_proto, 1);
+ 	if (!sk) {
+ 		kfree(data);
+ 		kfree(ctx);
+ 		return -ENOMEM;
+ 	}
+-	sock_net_set(sk, net);
+ 	sock_init_data(NULL, sk);
+ 
+ 	skb = build_skb(data, 0);
+ 	if (!skb) {
+ 		kfree(data);
+ 		kfree(ctx);
+-		kfree(sk);
++		sk_free(sk);
+ 		return -ENOMEM;
+ 	}
+ 	skb->sk = sk;
+@@ -682,8 +687,7 @@ out:
+ 	if (dev && dev != net->loopback_dev)
+ 		dev_put(dev);
+ 	kfree_skb(skb);
+-	bpf_sk_storage_free(sk);
+-	kfree(sk);
++	sk_free(sk);
+ 	kfree(ctx);
+ 	return ret;
+ }
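
The test_run fix above swaps a bare kzalloc()/kfree() of a socket for the sk_alloc()/sk_free() pair, so protocol-level setup and teardown run on every path. The general shape, with hypothetical res_create()/res_destroy() helpers:

    #include <stdlib.h>

    struct res {
            int registered;                /* subsystem-private state */
    };

    static struct res *res_create(void)
    {
            struct res *r = calloc(1, sizeof(*r));

            if (r)
                    r->registered = 1;     /* bookkeeping a bare malloc() skips */
            return r;
    }

    static void res_destroy(struct res *r)
    {
            r->registered = 0;             /* teardown a bare free() skips */
            free(r);
    }

Once a subsystem owns hidden state like this, every allocation site must use the pair; mixing in a raw free, as the old error path did, leaves that state dangling.
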
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 2b48b204205e6..7d3155283af93 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -1002,9 +1002,7 @@ static inline unsigned long br_multicast_lmqt(const struct net_bridge *br)
+ 
+ static inline unsigned long br_multicast_gmi(const struct net_bridge *br)
+ {
+-	/* use the RFC default of 2 for QRV */
+-	return 2 * br->multicast_query_interval +
+-	       br->multicast_query_response_interval;
++	return br->multicast_membership_interval;
+ }
+ #else
+ static inline int br_multicast_rcv(struct net_bridge *br,
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index caaa532ece949..df6968b28bf41 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -121,7 +121,7 @@ enum {
+ struct tpcon {
+ 	int idx;
+ 	int len;
+-	u8 state;
++	u32 state;
+ 	u8 bs;
+ 	u8 sn;
+ 	u8 ll_dl;
+@@ -848,6 +848,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ {
+ 	struct sock *sk = sock->sk;
+ 	struct isotp_sock *so = isotp_sk(sk);
++	u32 old_state = so->tx.state;
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	struct canfd_frame *cf;
+@@ -860,45 +861,55 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 		return -EADDRNOTAVAIL;
+ 
+ 	/* we do not support multiple buffers - for now */
+-	if (so->tx.state != ISOTP_IDLE || wq_has_sleeper(&so->wait)) {
+-		if (msg->msg_flags & MSG_DONTWAIT)
+-			return -EAGAIN;
++	if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE ||
++	    wq_has_sleeper(&so->wait)) {
++		if (msg->msg_flags & MSG_DONTWAIT) {
++			err = -EAGAIN;
++			goto err_out;
++		}
+ 
+ 		/* wait for complete transmission of current pdu */
+-		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++		if (err)
++			goto err_out;
+ 	}
+ 
+-	if (!size || size > MAX_MSG_LENGTH)
+-		return -EINVAL;
++	if (!size || size > MAX_MSG_LENGTH) {
++		err = -EINVAL;
++		goto err_out;
++	}
+ 
+ 	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
+ 	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;
+ 
+ 	/* does the given data fit into a single frame for SF_BROADCAST? */
+ 	if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
+-	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off))
+-		return -EINVAL;
++	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
++		err = -EINVAL;
++		goto err_out;
++	}
+ 
+ 	err = memcpy_from_msg(so->tx.buf, msg, size);
+ 	if (err < 0)
+-		return err;
++		goto err_out;
+ 
+ 	dev = dev_get_by_index(sock_net(sk), so->ifindex);
+-	if (!dev)
+-		return -ENXIO;
++	if (!dev) {
++		err = -ENXIO;
++		goto err_out;
++	}
+ 
+ 	skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
+ 				  msg->msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb) {
+ 		dev_put(dev);
+-		return err;
++		goto err_out;
+ 	}
+ 
+ 	can_skb_reserve(skb);
+ 	can_skb_prv(skb)->ifindex = dev->ifindex;
+ 	can_skb_prv(skb)->skbcnt = 0;
+ 
+-	so->tx.state = ISOTP_SENDING;
+ 	so->tx.len = size;
+ 	so->tx.idx = 0;
+ 
+@@ -954,15 +965,25 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ 	if (err) {
+ 		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
+ 			       __func__, ERR_PTR(err));
+-		return err;
++		goto err_out;
+ 	}
+ 
+ 	if (wait_tx_done) {
+ 		/* wait for complete transmission of current pdu */
+ 		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
++
++		if (sk->sk_err)
++			return -sk->sk_err;
+ 	}
+ 
+ 	return size;
++
++err_out:
++	so->tx.state = old_state;
++	if (so->tx.state == ISOTP_IDLE)
++		wake_up_interruptible(&so->wait);
++
++	return err;
+ }
+ 
+ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
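
isotp_sendmsg() above claims the TX state with cmpxchg() so two senders cannot both pass a plain "state == IDLE" test, and every failure path releases the claim via err_out. A userspace sketch of the claim-or-bail pattern in C11 atomics (send_one() and the states are illustrative):

    #include <stdatomic.h>
    #include <errno.h>

    enum { TX_IDLE, TX_SENDING };

    static atomic_int tx_state = TX_IDLE;

    static int send_one(const void *buf, int len)
    {
            int expected = TX_IDLE;
            int err;

            if (!atomic_compare_exchange_strong(&tx_state, &expected,
                                                TX_SENDING))
                    return -EAGAIN;        /* someone else owns the slot */

            if (!buf || len <= 0) {
                    err = -EINVAL;
                    goto err_out;
            }

            /* ... build and queue the frame ... */
            return len;

    err_out:
            atomic_store(&tx_state, TX_IDLE);  /* every failure frees the slot */
            return err;
    }
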
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 12369b604ce95..cea712fb2a9e0 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -326,6 +326,7 @@ int j1939_session_activate(struct j1939_session *session);
+ void j1939_tp_schedule_txtimer(struct j1939_session *session, int msec);
+ void j1939_session_timers_cancel(struct j1939_session *session);
+ 
++#define J1939_MIN_TP_PACKET_SIZE 9
+ #define J1939_MAX_TP_PACKET_SIZE (7 * 0xff)
+ #define J1939_MAX_ETP_PACKET_SIZE (7 * 0x00ffffff)
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 08c8606cfd9c7..9bc55ecb37f9f 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -249,11 +249,14 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 	struct j1939_priv *priv, *priv_new;
+ 	int ret;
+ 
+-	priv = j1939_priv_get_by_ndev(ndev);
++	spin_lock(&j1939_netdev_lock);
++	priv = j1939_priv_get_by_ndev_locked(ndev);
+ 	if (priv) {
+ 		kref_get(&priv->rx_kref);
++		spin_unlock(&j1939_netdev_lock);
+ 		return priv;
+ 	}
++	spin_unlock(&j1939_netdev_lock);
+ 
+ 	priv = j1939_priv_create(ndev);
+ 	if (!priv)
+@@ -269,10 +272,10 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 		/* Someone was faster than us, use their priv and roll
+ 		 * back our's.
+ 		 */
++		kref_get(&priv_new->rx_kref);
+ 		spin_unlock(&j1939_netdev_lock);
+ 		dev_put(ndev);
+ 		kfree(priv);
+-		kref_get(&priv_new->rx_kref);
+ 		return priv_new;
+ 	}
+ 	j1939_priv_set(ndev, priv);
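
The j1939_netdev_start() change above does the lookup and the kref_get() under j1939_netdev_lock, closing the window in which the priv could be freed between the two steps. In sketch form, with a pthread mutex standing in for the spinlock:

    #include <pthread.h>
    #include <stddef.h>

    struct priv { int refs; };

    static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct priv *registered;        /* protected by tbl_lock */

    static struct priv *priv_get(void)
    {
            struct priv *p;

            pthread_mutex_lock(&tbl_lock);
            p = registered;                /* lookup and ... */
            if (p)
                    p->refs++;             /* ... get under the same lock
                                            * that guards removal */
            pthread_mutex_unlock(&tbl_lock);
            return p;
    }
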
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index bdc95bd7a851f..e59fbbffa31ce 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1230,12 +1230,11 @@ static enum hrtimer_restart j1939_tp_rxtimer(struct hrtimer *hrtimer)
+ 		session->err = -ETIME;
+ 		j1939_session_deactivate(session);
+ 	} else {
+-		netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
+-			     __func__, session);
+-
+ 		j1939_session_list_lock(session->priv);
+ 		if (session->state >= J1939_SESSION_ACTIVE &&
+ 		    session->state < J1939_SESSION_ACTIVE_MAX) {
++			netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n",
++				     __func__, session);
+ 			j1939_session_get(session);
+ 			hrtimer_start(&session->rxtimer,
+ 				      ms_to_ktime(J1939_XTP_ABORT_TIMEOUT_MS),
+@@ -1597,6 +1596,8 @@ j1939_session *j1939_xtp_rx_rts_session_new(struct j1939_priv *priv,
+ 			abort = J1939_XTP_ABORT_FAULT;
+ 		else if (len > priv->tp_max_packet_size)
+ 			abort = J1939_XTP_ABORT_RESOURCE;
++		else if (len < J1939_MIN_TP_PACKET_SIZE)
++			abort = J1939_XTP_ABORT_FAULT;
+ 	}
+ 
+ 	if (abort != J1939_XTP_NO_ABORT) {
+@@ -1771,6 +1772,7 @@ static void j1939_xtp_rx_dpo(struct j1939_priv *priv, struct sk_buff *skb,
+ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 				 struct sk_buff *skb)
+ {
++	enum j1939_xtp_abort abort = J1939_XTP_ABORT_FAULT;
+ 	struct j1939_priv *priv = session->priv;
+ 	struct j1939_sk_buff_cb *skcb;
+ 	struct sk_buff *se_skb = NULL;
+@@ -1785,9 +1787,11 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 
+ 	skcb = j1939_skb_to_cb(skb);
+ 	dat = skb->data;
+-	if (skb->len <= 1)
++	if (skb->len != 8) {
+ 		/* makes no sense */
++		abort = J1939_XTP_ABORT_UNEXPECTED_DATA;
+ 		goto out_session_cancel;
++	}
+ 
+ 	switch (session->last_cmd) {
+ 	case 0xff:
+@@ -1885,7 +1889,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+  out_session_cancel:
+ 	kfree_skb(se_skb);
+ 	j1939_session_timers_cancel(session);
+-	j1939_session_cancel(session, J1939_XTP_ABORT_FAULT);
++	j1939_session_cancel(session, abort);
+ 	j1939_session_put(session);
+ }
+ 
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 76ed5ef0e36a8..28326ca34b523 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -1283,12 +1283,15 @@ static int dsa_switch_parse_ports_of(struct dsa_switch *ds,
+ 
+ 	for_each_available_child_of_node(ports, port) {
+ 		err = of_property_read_u32(port, "reg", &reg);
+-		if (err)
++		if (err) {
++			of_node_put(port);
+ 			goto out_put_node;
++		}
+ 
+ 		if (reg >= ds->num_ports) {
+ 			dev_err(ds->dev, "port %pOF index %u exceeds num_ports (%zu)\n",
+ 				port, reg, ds->num_ports);
++			of_node_put(port);
+ 			err = -EINVAL;
+ 			goto out_put_node;
+ 		}
+@@ -1296,8 +1299,10 @@ static int dsa_switch_parse_ports_of(struct dsa_switch *ds,
+ 		dp = dsa_to_port(ds, reg);
+ 
+ 		err = dsa_port_parse_of(dp, port);
+-		if (err)
++		if (err) {
++			of_node_put(port);
+ 			goto out_put_node;
++		}
+ 	}
+ 
+ out_put_node:
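
The dsa2 fix above adds of_node_put() on each early exit, because for_each_available_child_of_node() holds a reference on the current child that only the loop's own advance would normally drop. The same discipline in a generic sketch (node and parse_all() are illustrative):

    struct node { struct node *next; int refs; };

    static void node_put(struct node *n)
    {
            n->refs--;
    }

    /* Each step hands out a referenced node; every early exit must drop
     * that reference, not just the fall-through path. */
    static int parse_all(struct node *head, int (*parse)(struct node *))
    {
            struct node *n, *next;
            int err = 0;

            for (n = head; n; n = next) {
                    n->refs++;             /* the iterator's reference */
                    next = n->next;
                    err = parse(n);
                    if (err) {
                            node_put(n);   /* the leak fixed above */
                            break;
                    }
                    node_put(n);
            }
            return err;
    }
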
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index db07c05736b25..609d7048d04de 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1037,6 +1037,20 @@ static void tcp_v4_reqsk_destructor(struct request_sock *req)
+ DEFINE_STATIC_KEY_FALSE(tcp_md5_needed);
+ EXPORT_SYMBOL(tcp_md5_needed);
+ 
++static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new)
++{
++	if (!old)
++		return true;
++
++	/* l3index always overrides non-l3index */
++	if (old->l3index && new->l3index == 0)
++		return false;
++	if (old->l3index == 0 && new->l3index)
++		return true;
++
++	return old->prefixlen < new->prefixlen;
++}
++
+ /* Find the Key structure for an address.  */
+ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
+ 					   const union tcp_md5_addr *addr,
+@@ -1074,8 +1088,7 @@ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
+ 			match = false;
+ 		}
+ 
+-		if (match && (!best_match ||
+-			      key->prefixlen > best_match->prefixlen))
++		if (match && better_md5_match(best_match, key))
+ 			best_match = key;
+ 	}
+ 	return best_match;
+@@ -1105,7 +1118,7 @@ static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk,
+ 				 lockdep_sock_is_held(sk)) {
+ 		if (key->family != family)
+ 			continue;
+-		if (key->l3index && key->l3index != l3index)
++		if (key->l3index != l3index)
+ 			continue;
+ 		if (!memcmp(&key->addr, addr, size) &&
+ 		    key->prefixlen == prefixlen)
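
better_md5_match() above makes a key bound to an L3 domain always outrank an unbound one, with prefix length only breaking ties inside the same class. The comparator, reduced to a sketch; a caller keeps `best` and replaces it whenever better_match(best, key) holds:

    #include <stdbool.h>

    struct key { int l3index; int prefixlen; };

    static bool better_match(const struct key *old, const struct key *new)
    {
            if (!old)
                    return true;           /* anything beats nothing */
            /* Scoped (l3index != 0) always beats unscoped. */
            if ((old->l3index != 0) != (new->l3index != 0))
                    return new->l3index != 0;
            /* Same class: longer prefix wins. */
            return new->prefixlen > old->prefixlen;
    }
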
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 8e6ca9ad68121..80b1a7838cff7 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -488,13 +488,14 @@ static bool ip6_pkt_too_big(const struct sk_buff *skb, unsigned int mtu)
+ 
+ int ip6_forward(struct sk_buff *skb)
+ {
+-	struct inet6_dev *idev = __in6_dev_get_safely(skb->dev);
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct ipv6hdr *hdr = ipv6_hdr(skb);
+ 	struct inet6_skb_parm *opt = IP6CB(skb);
+ 	struct net *net = dev_net(dst->dev);
++	struct inet6_dev *idev;
+ 	u32 mtu;
+ 
++	idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+ 	if (net->ipv6.devconf_all->forwarding == 0)
+ 		goto error;
+ 
+diff --git a/net/ipv6/netfilter/ip6t_rt.c b/net/ipv6/netfilter/ip6t_rt.c
+index 733c83d38b308..4ad8b2032f1f9 100644
+--- a/net/ipv6/netfilter/ip6t_rt.c
++++ b/net/ipv6/netfilter/ip6t_rt.c
+@@ -25,12 +25,7 @@ MODULE_AUTHOR("Andras Kis-Szabo <kisza@sch.bme.hu>");
+ static inline bool
+ segsleft_match(u_int32_t min, u_int32_t max, u_int32_t id, bool invert)
+ {
+-	bool r;
+-	pr_debug("segsleft_match:%c 0x%x <= 0x%x <= 0x%x\n",
+-		 invert ? '!' : ' ', min, id, max);
+-	r = (id >= min && id <= max) ^ invert;
+-	pr_debug(" result %s\n", r ? "PASS" : "FAILED");
+-	return r;
++	return (id >= min && id <= max) ^ invert;
+ }
+ 
+ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+@@ -65,30 +60,6 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 		return false;
+ 	}
+ 
+-	pr_debug("IPv6 RT LEN %u %u ", hdrlen, rh->hdrlen);
+-	pr_debug("TYPE %04X ", rh->type);
+-	pr_debug("SGS_LEFT %u %02X\n", rh->segments_left, rh->segments_left);
+-
+-	pr_debug("IPv6 RT segsleft %02X ",
+-		 segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
+-				rh->segments_left,
+-				!!(rtinfo->invflags & IP6T_RT_INV_SGS)));
+-	pr_debug("type %02X %02X %02X ",
+-		 rtinfo->rt_type, rh->type,
+-		 (!(rtinfo->flags & IP6T_RT_TYP) ||
+-		  ((rtinfo->rt_type == rh->type) ^
+-		   !!(rtinfo->invflags & IP6T_RT_INV_TYP))));
+-	pr_debug("len %02X %04X %02X ",
+-		 rtinfo->hdrlen, hdrlen,
+-		 !(rtinfo->flags & IP6T_RT_LEN) ||
+-		  ((rtinfo->hdrlen == hdrlen) ^
+-		   !!(rtinfo->invflags & IP6T_RT_INV_LEN)));
+-	pr_debug("res %02X %02X %02X ",
+-		 rtinfo->flags & IP6T_RT_RES,
+-		 ((const struct rt0_hdr *)rh)->reserved,
+-		 !((rtinfo->flags & IP6T_RT_RES) &&
+-		   (((const struct rt0_hdr *)rh)->reserved)));
+-
+ 	ret = (segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1],
+ 			      rh->segments_left,
+ 			      !!(rtinfo->invflags & IP6T_RT_INV_SGS))) &&
+@@ -107,22 +78,22 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 						       reserved),
+ 					sizeof(_reserved),
+ 					&_reserved);
++		if (!rp) {
++			par->hotdrop = true;
++			return false;
++		}
+ 
+ 		ret = (*rp == 0);
+ 	}
+ 
+-	pr_debug("#%d ", rtinfo->addrnr);
+ 	if (!(rtinfo->flags & IP6T_RT_FST)) {
+ 		return ret;
+ 	} else if (rtinfo->flags & IP6T_RT_FST_NSTRICT) {
+-		pr_debug("Not strict ");
+ 		if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
+-			pr_debug("There isn't enough space\n");
+ 			return false;
+ 		} else {
+ 			unsigned int i = 0;
+ 
+-			pr_debug("#%d ", rtinfo->addrnr);
+ 			for (temp = 0;
+ 			     temp < (unsigned int)((hdrlen - 8) / 16);
+ 			     temp++) {
+@@ -138,26 +109,20 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 					return false;
+ 				}
+ 
+-				if (ipv6_addr_equal(ap, &rtinfo->addrs[i])) {
+-					pr_debug("i=%d temp=%d;\n", i, temp);
++				if (ipv6_addr_equal(ap, &rtinfo->addrs[i]))
+ 					i++;
+-				}
+ 				if (i == rtinfo->addrnr)
+ 					break;
+ 			}
+-			pr_debug("i=%d #%d\n", i, rtinfo->addrnr);
+ 			if (i == rtinfo->addrnr)
+ 				return ret;
+ 			else
+ 				return false;
+ 		}
+ 	} else {
+-		pr_debug("Strict ");
+ 		if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) {
+-			pr_debug("There isn't enough space\n");
+ 			return false;
+ 		} else {
+-			pr_debug("#%d ", rtinfo->addrnr);
+ 			for (temp = 0; temp < rtinfo->addrnr; temp++) {
+ 				ap = skb_header_pointer(skb,
+ 							ptr
+@@ -173,7 +138,6 @@ static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 				if (!ipv6_addr_equal(ap, &rtinfo->addrs[temp]))
+ 					break;
+ 			}
+-			pr_debug("temp=%d #%d\n", temp, rtinfo->addrnr);
+ 			if (temp == rtinfo->addrnr &&
+ 			    temp == (unsigned int)((hdrlen - 8) / 16))
+ 				return ret;
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index 54395266339d7..92a747896f808 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -109,7 +109,7 @@ config NF_CONNTRACK_MARK
+ config NF_CONNTRACK_SECMARK
+ 	bool  'Connection tracking security mark support'
+ 	depends on NETWORK_SECMARK
+-	default m if NETFILTER_ADVANCED=n
++	default y if NETFILTER_ADVANCED=n
+ 	help
+ 	  This option enables security markings to be applied to
+ 	  connections.  Typically they are copied to connections from
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index c25097092a060..29ec3ef63edc7 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -4090,6 +4090,11 @@ static int __net_init ip_vs_control_net_init_sysctl(struct netns_ipvs *ipvs)
+ 	tbl[idx++].data = &ipvs->sysctl_conn_reuse_mode;
+ 	tbl[idx++].data = &ipvs->sysctl_schedule_icmp;
+ 	tbl[idx++].data = &ipvs->sysctl_ignore_tunneled;
++#ifdef CONFIG_IP_VS_DEBUG
++	/* Global sysctls must be ro in non-init netns */
++	if (!net_eq(net, &init_net))
++		tbl[idx++].mode = 0444;
++#endif
+ 
+ 	ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
+ 	if (ipvs->sysctl_hdr == NULL) {
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index 5b02408a920bf..3ced0eb6b7c3b 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -342,12 +342,6 @@ static void nft_netdev_event(unsigned long event, struct net_device *dev,
+ 		return;
+ 	}
+ 
+-	/* UNREGISTER events are also happening on netns exit.
+-	 *
+-	 * Although nf_tables core releases all tables/chains, only this event
+-	 * handler provides guarantee that hook->ops.dev is still accessible,
+-	 * so we cannot skip exiting net namespaces.
+-	 */
+ 	__nft_release_basechain(ctx);
+ }
+ 
+@@ -366,6 +360,9 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 	    event != NETDEV_CHANGENAME)
+ 		return NOTIFY_DONE;
+ 
++	if (!check_net(ctx.net))
++		return NOTIFY_DONE;
++
+ 	nft_net = nft_pernet(ctx.net);
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index 7b2f359bfce46..2f7cf5ecebf4f 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -137,7 +137,7 @@ static int idletimer_tg_create(struct idletimer_tg_info *info)
+ {
+ 	int ret;
+ 
+-	info->timer = kmalloc(sizeof(*info->timer), GFP_KERNEL);
++	info->timer = kzalloc(sizeof(*info->timer), GFP_KERNEL);
+ 	if (!info->timer) {
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/net/nfc/nci/rsp.c b/net/nfc/nci/rsp.c
+index e9605922a3228..49cbc44e075d5 100644
+--- a/net/nfc/nci/rsp.c
++++ b/net/nfc/nci/rsp.c
+@@ -330,6 +330,8 @@ static void nci_core_conn_close_rsp_packet(struct nci_dev *ndev,
+ 							 ndev->cur_conn_id);
+ 		if (conn_info) {
+ 			list_del(&conn_info->list);
++			if (conn_info == ndev->rf_conn_info)
++				ndev->rf_conn_info = NULL;
+ 			devm_kfree(&ndev->nfc_dev->dev, conn_info);
+ 		}
+ 	}
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 1b4b3514c94f2..07f4dce7b5352 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -960,6 +960,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 	tmpl = p->tmpl;
+ 
+ 	tcf_lastuse_update(&c->tcf_tm);
++	tcf_action_update_bstats(&c->common, skb);
+ 
+ 	if (clear) {
+ 		qdisc_skb_cb(skb)->post_ct = false;
+@@ -1049,7 +1050,6 @@ out_push:
+ 
+ 	qdisc_skb_cb(skb)->post_ct = true;
+ out_clear:
+-	tcf_action_update_bstats(&c->common, skb);
+ 	if (defrag)
+ 		qdisc_skb_cb(skb)->pkt_len = skb->len;
+ 	return retval;
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index 952e46876329a..4aad284800355 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -19,6 +19,10 @@ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF)		\
+ 		+= -fplugin-arg-structleak_plugin-byref
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL)	\
+ 		+= -fplugin-arg-structleak_plugin-byref-all
++ifdef CONFIG_GCC_PLUGIN_STRUCTLEAK
++    DISABLE_STRUCTLEAK_PLUGIN += -fplugin-arg-structleak_plugin-disable
++endif
++export DISABLE_STRUCTLEAK_PLUGIN
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK)		\
+ 		+= -DSTRUCTLEAK_PLUGIN
+ 
+diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
+index e3d79a7b6db66..b5d5333ab3300 100644
+--- a/security/keys/process_keys.c
++++ b/security/keys/process_keys.c
+@@ -918,6 +918,13 @@ void key_change_session_keyring(struct callback_head *twork)
+ 		return;
+ 	}
+ 
++	/* If get_ucounts fails more bits are needed in the refcount */
++	if (unlikely(!get_ucounts(old->ucounts))) {
++		WARN_ONCE(1, "In %s get_ucounts failed\n", __func__);
++		put_cred(new);
++		return;
++	}
++
+ 	new->  uid	= old->  uid;
+ 	new-> euid	= old-> euid;
+ 	new-> suid	= old-> suid;
+@@ -927,6 +934,7 @@ void key_change_session_keyring(struct callback_head *twork)
+ 	new-> sgid	= old-> sgid;
+ 	new->fsgid	= old->fsgid;
+ 	new->user	= get_uid(old->user);
++	new->ucounts	= old->ucounts;
+ 	new->user_ns	= get_user_ns(old->user_ns);
+ 	new->group_info	= get_group_info(old->group_info);
+ 
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 062da7a7a5861..f7bd6e2db085b 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -421,8 +421,9 @@ int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ 	if (!full_reset)
+ 		goto skip_reset;
+ 
+-	/* clear STATESTS */
+-	snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
++	/* clear STATESTS if not in reset */
++	if (snd_hdac_chip_readb(bus, GCTL) & AZX_GCTL_RESET)
++		snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK);
+ 
+ 	/* reset controller */
+ 	snd_hdac_bus_enter_link_reset(bus);
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index e8dee24c309da..50a58fb5ad9c4 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -304,29 +304,31 @@ int snd_hda_codec_configure(struct hda_codec *codec)
+ {
+ 	int err;
+ 
++	if (codec->configured)
++		return 0;
++
+ 	if (is_generic_config(codec))
+ 		codec->probe_id = HDA_CODEC_ID_GENERIC;
+ 	else
+ 		codec->probe_id = 0;
+ 
+-	err = snd_hdac_device_register(&codec->core);
+-	if (err < 0)
+-		return err;
++	if (!device_is_registered(&codec->core.dev)) {
++		err = snd_hdac_device_register(&codec->core);
++		if (err < 0)
++			return err;
++	}
+ 
+ 	if (!codec->preset)
+ 		codec_bind_module(codec);
+ 	if (!codec->preset) {
+ 		err = codec_bind_generic(codec);
+ 		if (err < 0) {
+-			codec_err(codec, "Unable to bind the codec\n");
+-			goto error;
++			codec_dbg(codec, "Unable to bind the codec\n");
++			return err;
+ 		}
+ 	}
+ 
++	codec->configured = 1;
+ 	return 0;
+-
+- error:
+-	snd_hdac_device_unregister(&codec->core);
+-	return err;
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_configure);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 7a717e1511569..8afcce6478cdd 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -791,6 +791,7 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
+ 	snd_array_free(&codec->nids);
+ 	remove_conn_list(codec);
+ 	snd_hdac_regmap_exit(&codec->core);
++	codec->configured = 0;
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_codec_cleanup_for_unbind);
+ 
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index ca2f2ecd14888..5a49ee4f6ce03 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -25,6 +25,7 @@
+ #include <sound/core.h>
+ #include <sound/initval.h>
+ #include "hda_controller.h"
++#include "hda_local.h"
+ 
+ #define CREATE_TRACE_POINTS
+ #include "hda_controller_trace.h"
+@@ -1259,17 +1260,24 @@ EXPORT_SYMBOL_GPL(azx_probe_codecs);
+ int azx_codec_configure(struct azx *chip)
+ {
+ 	struct hda_codec *codec, *next;
++	int success = 0;
+ 
+-	/* use _safe version here since snd_hda_codec_configure() deregisters
+-	 * the device upon error and deletes itself from the bus list.
+-	 */
+-	list_for_each_codec_safe(codec, next, &chip->bus) {
+-		snd_hda_codec_configure(codec);
++	list_for_each_codec(codec, &chip->bus) {
++		if (!snd_hda_codec_configure(codec))
++			success++;
+ 	}
+ 
+-	if (!azx_bus(chip)->num_codecs)
+-		return -ENODEV;
+-	return 0;
++	if (success) {
++		/* unregister failed codecs if any codec has been probed */
++		list_for_each_codec_safe(codec, next, &chip->bus) {
++			if (!codec->configured) {
++				codec_err(codec, "Unable to configure, disabling\n");
++				snd_hdac_device_unregister(&codec->core);
++			}
++		}
++	}
++
++	return success ? 0 : -ENODEV;
+ }
+ EXPORT_SYMBOL_GPL(azx_codec_configure);
+ 
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index 68f9668788ea2..324cba13c7bac 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -41,7 +41,7 @@
+ /* 24 unused */
+ #define AZX_DCAPS_COUNT_LPIB_DELAY  (1 << 25)	/* Take LPIB as delay */
+ #define AZX_DCAPS_PM_RUNTIME	(1 << 26)	/* runtime PM support */
+-/* 27 unused */
++#define AZX_DCAPS_RETRY_PROBE	(1 << 27)	/* retry probe if no codec is configured */
+ #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28)	/* CORBRP clears itself after reset */
+ #define AZX_DCAPS_NO_MSI64      (1 << 29)	/* Stick to 32-bit MSIs */
+ #define AZX_DCAPS_SEPARATE_STREAM_TAG	(1 << 30) /* capture and playback use separate stream tag */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 0062c18b646af..89f135a6a1f6d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -307,7 +307,8 @@ enum {
+ /* quirks for AMD SB */
+ #define AZX_DCAPS_PRESET_AMD_SB \
+ 	(AZX_DCAPS_NO_TCSEL | AZX_DCAPS_AMD_WORKAROUND |\
+-	 AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME)
++	 AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME |\
++	 AZX_DCAPS_RETRY_PROBE)
+ 
+ /* quirks for Nvidia */
+ #define AZX_DCAPS_PRESET_NVIDIA \
+@@ -1730,7 +1731,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 
+ static void azx_probe_work(struct work_struct *work)
+ {
+-	struct hda_intel *hda = container_of(work, struct hda_intel, probe_work);
++	struct hda_intel *hda = container_of(work, struct hda_intel, probe_work.work);
+ 	azx_probe_continue(&hda->chip);
+ }
+ 
+@@ -1839,7 +1840,7 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 	}
+ 
+ 	/* continue probing in work context as may trigger request module */
+-	INIT_WORK(&hda->probe_work, azx_probe_work);
++	INIT_DELAYED_WORK(&hda->probe_work, azx_probe_work);
+ 
+ 	*rchip = chip;
+ 
+@@ -2170,7 +2171,7 @@ static int azx_probe(struct pci_dev *pci,
+ #endif
+ 
+ 	if (schedule_probe)
+-		schedule_work(&hda->probe_work);
++		schedule_delayed_work(&hda->probe_work, 0);
+ 
+ 	dev++;
+ 	if (chip->disabled)
+@@ -2256,6 +2257,11 @@ static int azx_probe_continue(struct azx *chip)
+ 	int dev = chip->dev_index;
+ 	int err;
+ 
++	if (chip->disabled || hda->init_failed)
++		return -EIO;
++	if (hda->probe_retry)
++		goto probe_retry;
++
+ 	to_hda_bus(bus)->bus_probing = 1;
+ 	hda->probe_continued = 1;
+ 
+@@ -2317,10 +2323,20 @@ static int azx_probe_continue(struct azx *chip)
+ #endif
+ 	}
+ #endif
++
++ probe_retry:
+ 	if (bus->codec_mask && !(probe_only[dev] & 1)) {
+ 		err = azx_codec_configure(chip);
+-		if (err < 0)
++		if (err) {
++			if ((chip->driver_caps & AZX_DCAPS_RETRY_PROBE) &&
++			    ++hda->probe_retry < 60) {
++				schedule_delayed_work(&hda->probe_work,
++						      msecs_to_jiffies(1000));
++				return 0; /* keep things up */
++			}
++			dev_err(chip->card->dev, "Cannot probe codecs, giving up\n");
+ 			goto out_free;
++		}
+ 	}
+ 
+ 	err = snd_card_register(chip->card);
+@@ -2350,6 +2366,7 @@ out_free:
+ 		display_power(chip, false);
+ 	complete_all(&hda->probe_wait);
+ 	to_hda_bus(bus)->bus_probing = 0;
++	hda->probe_retry = 0;
+ 	return 0;
+ }
+ 
+@@ -2375,7 +2392,7 @@ static void azx_remove(struct pci_dev *pci)
+ 		 * device during cancel_work_sync() call.
+ 		 */
+ 		device_unlock(&pci->dev);
+-		cancel_work_sync(&hda->probe_work);
++		cancel_delayed_work_sync(&hda->probe_work);
+ 		device_lock(&pci->dev);
+ 
+ 		snd_card_free(card);
+diff --git a/sound/pci/hda/hda_intel.h b/sound/pci/hda/hda_intel.h
+index 3fb119f090408..0f39418f9328b 100644
+--- a/sound/pci/hda/hda_intel.h
++++ b/sound/pci/hda/hda_intel.h
+@@ -14,7 +14,7 @@ struct hda_intel {
+ 
+ 	/* sync probing */
+ 	struct completion probe_wait;
+-	struct work_struct probe_work;
++	struct delayed_work probe_work;
+ 
+ 	/* card list (for power_save trigger) */
+ 	struct list_head list;
+@@ -30,6 +30,8 @@ struct hda_intel {
+ 	unsigned int freed:1; /* resources already released */
+ 
+ 	bool need_i915_power:1; /* the hda controller needs i915 power */
++
++	int probe_retry;	/* being probe-retry */
+ };
+ 
+ #endif
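
The hda_intel rework above converts probe_work into a delayed_work so a failed codec configure can be retried once a second, up to 60 times, instead of failing the probe outright. A kernel-style sketch of that bounded-retry shape (struct dev_state and try_configure() are illustrative; note that container_of() must now name probe_work.work, exactly as the hunk does):

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    #define MAX_PROBE_RETRIES 60

    struct dev_state {
            struct delayed_work probe_work;
            int probe_retry;
    };

    static int try_configure(struct dev_state *s)
    {
            return 1;                      /* stand-in: hardware not ready */
    }

    static void probe_fn(struct work_struct *work)
    {
            struct dev_state *s =
                    container_of(work, struct dev_state, probe_work.work);

            if (try_configure(s) && ++s->probe_retry < MAX_PROBE_RETRIES)
                    schedule_delayed_work(&s->probe_work,
                                          msecs_to_jiffies(1000));
    }

The owner arms this with INIT_DELAYED_WORK(&s->probe_work, probe_fn) plus schedule_delayed_work(&s->probe_work, 0), and tears it down with cancel_delayed_work_sync(), mirroring the azx_* changes above.
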
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8e6ff50f0f94f..b30e1843273bf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2547,6 +2547,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index db16071205ba9..dd1ae611fc2a5 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1564,6 +1564,7 @@ config SND_SOC_WCD938X
+ config SND_SOC_WCD938X_SDW
+ 	tristate "WCD9380/WCD9385 Codec - SDW"
+ 	select SND_SOC_WCD938X
++	select REGMAP_IRQ
+ 	depends on SOUNDWIRE
+ 	select REGMAP_SOUNDWIRE
+ 	help
+diff --git a/sound/soc/codecs/cs4341.c b/sound/soc/codecs/cs4341.c
+index 7d3e54d8eef36..29d05e32d3417 100644
+--- a/sound/soc/codecs/cs4341.c
++++ b/sound/soc/codecs/cs4341.c
+@@ -305,12 +305,19 @@ static int cs4341_spi_probe(struct spi_device *spi)
+ 	return cs4341_probe(&spi->dev);
+ }
+ 
++static const struct spi_device_id cs4341_spi_ids[] = {
++	{ "cs4341a" },
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, cs4341_spi_ids);
++
+ static struct spi_driver cs4341_spi_driver = {
+ 	.driver = {
+ 		.name = "cs4341-spi",
+ 		.of_match_table = of_match_ptr(cs4341_dt_ids),
+ 	},
+ 	.probe = cs4341_spi_probe,
++	.id_table = cs4341_spi_ids,
+ };
+ #endif
+ 
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index db88be48c9980..f946ef65a4c19 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -867,8 +867,8 @@ static void nau8824_jdet_work(struct work_struct *work)
+ 	struct regmap *regmap = nau8824->regmap;
+ 	int adc_value, event = 0, event_mask = 0;
+ 
+-	snd_soc_dapm_enable_pin(dapm, "MICBIAS");
+-	snd_soc_dapm_enable_pin(dapm, "SAR");
++	snd_soc_dapm_force_enable_pin(dapm, "MICBIAS");
++	snd_soc_dapm_force_enable_pin(dapm, "SAR");
+ 	snd_soc_dapm_sync(dapm);
+ 
+ 	msleep(100);
+diff --git a/sound/soc/codecs/pcm179x-spi.c b/sound/soc/codecs/pcm179x-spi.c
+index 0a542924ec5f9..ebf63ea90a1c4 100644
+--- a/sound/soc/codecs/pcm179x-spi.c
++++ b/sound/soc/codecs/pcm179x-spi.c
+@@ -36,6 +36,7 @@ static const struct of_device_id pcm179x_of_match[] = {
+ MODULE_DEVICE_TABLE(of, pcm179x_of_match);
+ 
+ static const struct spi_device_id pcm179x_spi_ids[] = {
++	{ "pcm1792a", 0 },
+ 	{ "pcm179x", 0 },
+ 	{ },
+ };
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 4dc844f3c1fc0..60dee41816dc2 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -116,6 +116,8 @@ static const struct reg_default pcm512x_reg_defaults[] = {
+ 	{ PCM512x_FS_SPEED_MODE,     0x00 },
+ 	{ PCM512x_IDAC_1,            0x01 },
+ 	{ PCM512x_IDAC_2,            0x00 },
++	{ PCM512x_I2S_1,             0x02 },
++	{ PCM512x_I2S_2,             0x00 },
+ };
+ 
+ static bool pcm512x_readable(struct device *dev, unsigned int reg)
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 9e621a254392c..499604f1e1789 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -742,9 +742,16 @@ static int wm8960_configure_clocking(struct snd_soc_component *component)
+ 	int i, j, k;
+ 	int ret;
+ 
+-	if (!(iface1 & (1<<6))) {
+-		dev_dbg(component->dev,
+-			"Codec is slave mode, no need to configure clock\n");
++	/*
++	 * For Slave mode clocking should still be configured,
++	 * so this if statement should be removed, but some platform
++	 * may not work if the sysclk is not configured, to avoid such
++	 * compatible issue, just add '!wm8960->sysclk' condition in
++	 * this if statement.
++	 */
++	if (!(iface1 & (1 << 6)) && !wm8960->sysclk) {
++		dev_warn(component->dev,
++			 "slave mode, but proceeding with no clock configuration\n");
+ 		return 0;
+ 	}
+ 
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index 477d16713e72e..a9b6c2b0c871b 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -487,8 +487,9 @@ static int fsl_xcvr_prepare(struct snd_pcm_substream *substream,
+ 		return ret;
+ 	}
+ 
+-	/* clear DPATH RESET */
++	/* set DPATH RESET */
+ 	m_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
++	v_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
+ 	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, m_ctl, v_ctl);
+ 	if (ret < 0) {
+ 		dev_err(dai->dev, "Error while setting EXT_CTRL: %d\n", ret);
+@@ -590,10 +591,6 @@ static void fsl_xcvr_shutdown(struct snd_pcm_substream *substream,
+ 		val  |= FSL_XCVR_EXT_CTRL_CMDC_RESET(tx);
+ 	}
+ 
+-	/* set DPATH RESET */
+-	mask |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
+-	val  |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
+-
+ 	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, mask, val);
+ 	if (ret < 0) {
+ 		dev_err(dai->dev, "Err setting DPATH RESET: %d\n", ret);
+@@ -643,6 +640,16 @@ static int fsl_xcvr_trigger(struct snd_pcm_substream *substream, int cmd,
+ 			dev_err(dai->dev, "Failed to enable DMA: %d\n", ret);
+ 			return ret;
+ 		}
++
++		/* clear DPATH RESET */
++		ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
++					 FSL_XCVR_EXT_CTRL_DPTH_RESET(tx),
++					 0);
++		if (ret < 0) {
++			dev_err(dai->dev, "Failed to clear DPATH RESET: %d\n", ret);
++			return ret;
++		}
++
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 91bf939d5233e..8477071141e28 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -2559,6 +2559,7 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+ 				const char *pin, int status)
+ {
+ 	struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true);
++	int ret = 0;
+ 
+ 	dapm_assert_locked(dapm);
+ 
+@@ -2571,13 +2572,14 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
+ 		dapm_mark_dirty(w, "pin configuration");
+ 		dapm_widget_invalidate_input_paths(w);
+ 		dapm_widget_invalidate_output_paths(w);
++		ret = 1;
+ 	}
+ 
+ 	w->connected = status;
+ 	if (status == 0)
+ 		w->force = 0;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+@@ -3582,14 +3584,15 @@ int snd_soc_dapm_put_pin_switch(struct snd_kcontrol *kcontrol,
+ {
+ 	struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
+ 	const char *pin = (const char *)kcontrol->private_value;
++	int ret;
+ 
+ 	if (ucontrol->value.integer.value[0])
+-		snd_soc_dapm_enable_pin(&card->dapm, pin);
++		ret = snd_soc_dapm_enable_pin(&card->dapm, pin);
+ 	else
+-		snd_soc_dapm_disable_pin(&card->dapm, pin);
++		ret = snd_soc_dapm_disable_pin(&card->dapm, pin);
+ 
+ 	snd_soc_dapm_sync(&card->dapm);
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dapm_put_pin_switch);
+ 
+@@ -4023,7 +4026,7 @@ static int snd_soc_dapm_dai_link_put(struct snd_kcontrol *kcontrol,
+ 
+ 	rtd->params_select = ucontrol->value.enumerated.item[0];
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static void
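
The soc-dapm hunks above make the pin and dai_link put handlers return 1 when the value actually changed, which is what tells the ALSA control core to emit a notification. In miniature:

    struct ctl { int value; };

    static void apply(struct ctl *c)
    {
            (void)c;                       /* push the new value to hardware */
    }

    /* ALSA-style put: 1 = changed (fires the event), 0 = unchanged,
     * negative = error. Returning 0 on a real change loses events. */
    static int pin_switch_put(struct ctl *c, int value)
    {
            if (c->value == value)
                    return 0;
            c->value = value;
            apply(c);
            return 1;
    }
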
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 147b831e1a82d..91d40b4c851c1 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -4080,6 +4080,38 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 		}
+ 	}
+ },
++{
++	/*
++	 * Sennheiser GSP670
++	 * Change order of interfaces loaded
++	 */
++	USB_DEVICE(0x1395, 0x0300),
++	.bInterfaceClass = USB_CLASS_PER_INTERFACE,
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = &(const struct snd_usb_audio_quirk[]) {
++			// Communication
++			{
++				.ifnum = 3,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			// Recording
++			{
++				.ifnum = 4,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			// Main
++			{
++				.ifnum = 1,
++				.type = QUIRK_AUDIO_STANDARD_INTERFACE
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
+ 
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
+diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c
+index c67c833991708..ce91a582f0e41 100644
+--- a/tools/lib/perf/tests/test-evlist.c
++++ b/tools/lib/perf/tests/test-evlist.c
+@@ -40,7 +40,7 @@ static int test_stat_cpu(void)
+ 		.type	= PERF_TYPE_SOFTWARE,
+ 		.config	= PERF_COUNT_SW_TASK_CLOCK,
+ 	};
+-	int err, cpu, tmp;
++	int err, idx;
+ 
+ 	cpus = perf_cpu_map__new(NULL);
+ 	__T("failed to create cpus", cpus);
+@@ -70,10 +70,10 @@ static int test_stat_cpu(void)
+ 	perf_evlist__for_each_evsel(evlist, evsel) {
+ 		cpus = perf_evsel__cpus(evsel);
+ 
+-		perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
++		for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
+ 			struct perf_counts_values counts = { .val = 0 };
+ 
+-			perf_evsel__read(evsel, cpu, 0, &counts);
++			perf_evsel__read(evsel, idx, 0, &counts);
+ 			__T("failed to read value for evsel", counts.val != 0);
+ 		}
+ 	}
+diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c
+index a184e4861627e..33ae9334861a2 100644
+--- a/tools/lib/perf/tests/test-evsel.c
++++ b/tools/lib/perf/tests/test-evsel.c
+@@ -22,7 +22,7 @@ static int test_stat_cpu(void)
+ 		.type	= PERF_TYPE_SOFTWARE,
+ 		.config	= PERF_COUNT_SW_CPU_CLOCK,
+ 	};
+-	int err, cpu, tmp;
++	int err, idx;
+ 
+ 	cpus = perf_cpu_map__new(NULL);
+ 	__T("failed to create cpus", cpus);
+@@ -33,10 +33,10 @@ static int test_stat_cpu(void)
+ 	err = perf_evsel__open(evsel, cpus, NULL);
+ 	__T("failed to open evsel", err == 0);
+ 
+-	perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
++	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
+ 		struct perf_counts_values counts = { .val = 0 };
+ 
+-		perf_evsel__read(evsel, cpu, 0, &counts);
++		perf_evsel__read(evsel, idx, 0, &counts);
+ 		__T("failed to read value for evsel", counts.val != 0);
+ 	}
+ 
+@@ -148,6 +148,7 @@ static int test_stat_user_read(int event)
+ 	__T("failed to mmap evsel", err == 0);
+ 
+ 	pc = perf_evsel__mmap_base(evsel, 0, 0);
++	__T("failed to get mmapped address", pc);
+ 
+ #if defined(__i386__) || defined(__x86_64__)
+ 	__T("userspace counter access not supported", pc->cap_user_rdpmc);
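
The two libperf test changes above move from iterating raw CPU numbers to iterating CPU-map indices, which is what perf_evsel__read() expects. A hedged sketch of the index-based read loop (the helper name is hypothetical):

    #include <perf/cpumap.h>
    #include <perf/evsel.h>

    /* Sketch: read one counter value per index in the evsel's CPU map. */
    static void read_all_cpus(struct perf_evsel *evsel, struct perf_cpu_map *cpus)
    {
    	int idx;

    	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
    		struct perf_counts_values counts = { .val = 0 };

    		/* The second argument is a map index, not a CPU number. */
    		perf_evsel__read(evsel, idx, 0, &counts);
    	}
    }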
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 8676c75987281..a9c2bebd7576e 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -509,6 +509,7 @@ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+ 	list_add_tail(&reloc->list, &sec->reloc->reloc_list);
+ 	elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
+ 
++	sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize;
+ 	sec->reloc->changed = true;
+ 
+ 	return 0;
+@@ -979,63 +980,63 @@ static struct section *elf_create_reloc_section(struct elf *elf,
+ 	}
+ }
+ 
+-static int elf_rebuild_rel_reloc_section(struct section *sec, int nr)
++static int elf_rebuild_rel_reloc_section(struct section *sec)
+ {
+ 	struct reloc *reloc;
+-	int idx = 0, size;
++	int idx = 0;
+ 	void *buf;
+ 
+ 	/* Allocate a buffer for relocations */
+-	size = nr * sizeof(GElf_Rel);
+-	buf = malloc(size);
++	buf = malloc(sec->sh.sh_size);
+ 	if (!buf) {
+ 		perror("malloc");
+ 		return -1;
+ 	}
+ 
+ 	sec->data->d_buf = buf;
+-	sec->data->d_size = size;
++	sec->data->d_size = sec->sh.sh_size;
+ 	sec->data->d_type = ELF_T_REL;
+ 
+-	sec->sh.sh_size = size;
+-
+ 	idx = 0;
+ 	list_for_each_entry(reloc, &sec->reloc_list, list) {
+ 		reloc->rel.r_offset = reloc->offset;
+ 		reloc->rel.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
+-		gelf_update_rel(sec->data, idx, &reloc->rel);
++		if (!gelf_update_rel(sec->data, idx, &reloc->rel)) {
++			WARN_ELF("gelf_update_rel");
++			return -1;
++		}
+ 		idx++;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int elf_rebuild_rela_reloc_section(struct section *sec, int nr)
++static int elf_rebuild_rela_reloc_section(struct section *sec)
+ {
+ 	struct reloc *reloc;
+-	int idx = 0, size;
++	int idx = 0;
+ 	void *buf;
+ 
+ 	/* Allocate a buffer for relocations with addends */
+-	size = nr * sizeof(GElf_Rela);
+-	buf = malloc(size);
++	buf = malloc(sec->sh.sh_size);
+ 	if (!buf) {
+ 		perror("malloc");
+ 		return -1;
+ 	}
+ 
+ 	sec->data->d_buf = buf;
+-	sec->data->d_size = size;
++	sec->data->d_size = sec->sh.sh_size;
+ 	sec->data->d_type = ELF_T_RELA;
+ 
+-	sec->sh.sh_size = size;
+-
+ 	idx = 0;
+ 	list_for_each_entry(reloc, &sec->reloc_list, list) {
+ 		reloc->rela.r_offset = reloc->offset;
+ 		reloc->rela.r_addend = reloc->addend;
+ 		reloc->rela.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
+-		gelf_update_rela(sec->data, idx, &reloc->rela);
++		if (!gelf_update_rela(sec->data, idx, &reloc->rela)) {
++			WARN_ELF("gelf_update_rela");
++			return -1;
++		}
+ 		idx++;
+ 	}
+ 
+@@ -1044,16 +1045,9 @@ static int elf_rebuild_rela_reloc_section(struct section *sec, int nr)
+ 
+ static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec)
+ {
+-	struct reloc *reloc;
+-	int nr;
+-
+-	nr = 0;
+-	list_for_each_entry(reloc, &sec->reloc_list, list)
+-		nr++;
+-
+ 	switch (sec->sh.sh_type) {
+-	case SHT_REL:  return elf_rebuild_rel_reloc_section(sec, nr);
+-	case SHT_RELA: return elf_rebuild_rela_reloc_section(sec, nr);
++	case SHT_REL:  return elf_rebuild_rel_reloc_section(sec);
++	case SHT_RELA: return elf_rebuild_rela_reloc_section(sec);
+ 	default:       return -1;
+ 	}
+ }
+@@ -1113,12 +1107,6 @@ int elf_write(struct elf *elf)
+ 	/* Update changed relocation sections and section headers: */
+ 	list_for_each_entry(sec, &elf->sections, list) {
+ 		if (sec->changed) {
+-			if (sec->base &&
+-			    elf_rebuild_reloc_section(elf, sec)) {
+-				WARN("elf_rebuild_reloc_section");
+-				return -1;
+-			}
+-
+ 			s = elf_getscn(elf->elf, sec->idx);
+ 			if (!s) {
+ 				WARN_ELF("elf_getscn");
+@@ -1129,6 +1117,12 @@ int elf_write(struct elf *elf)
+ 				return -1;
+ 			}
+ 
++			if (sec->base &&
++			    elf_rebuild_reloc_section(elf, sec)) {
++				WARN("elf_rebuild_reloc_section");
++				return -1;
++			}
++
+ 			sec->changed = false;
+ 			elf->changed = true;
+ 		}
+diff --git a/tools/testing/selftests/net/forwarding/Makefile b/tools/testing/selftests/net/forwarding/Makefile
+index d97bd6889446d..72ee644d47bfa 100644
+--- a/tools/testing/selftests/net/forwarding/Makefile
++++ b/tools/testing/selftests/net/forwarding/Makefile
+@@ -9,6 +9,7 @@ TEST_PROGS = bridge_igmp.sh \
+ 	gre_inner_v4_multipath.sh \
+ 	gre_inner_v6_multipath.sh \
+ 	gre_multipath.sh \
++	ip6_forward_instats_vrf.sh \
+ 	ip6gre_inner_v4_multipath.sh \
+ 	ip6gre_inner_v6_multipath.sh \
+ 	ipip_flat_gre_key.sh \
+diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+index b802c14d29509..e5e2fbeca22ec 100644
+--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+@@ -39,3 +39,5 @@ NETIF_CREATE=yes
+ # Timeout (in seconds) before ping exits regardless of how many packets have
+ # been sent or received
+ PING_TIMEOUT=5
++# IPv6 traceroute utility name.
++TROUTE6=traceroute6
+diff --git a/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh b/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh
+new file mode 100755
+index 0000000000000..9f5b3e2e5e954
+--- /dev/null
++++ b/tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh
+@@ -0,0 +1,172 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++# Test IPv6 stats on the incoming interface when forwarding with VRF
++
++ALL_TESTS="
++	ipv6_ping
++	ipv6_in_too_big_err
++	ipv6_in_hdr_err
++	ipv6_in_addr_err
++	ipv6_in_discard
++"
++
++NUM_NETIFS=4
++source lib.sh
++
++h1_create()
++{
++	simple_if_init $h1 2001:1:1::2/64
++	ip -6 route add vrf v$h1 2001:1:2::/64 via 2001:1:1::1
++}
++
++h1_destroy()
++{
++	ip -6 route del vrf v$h1 2001:1:2::/64 via 2001:1:1::1
++	simple_if_fini $h1 2001:1:1::2/64
++}
++
++router_create()
++{
++	vrf_create router
++	__simple_if_init $rtr1 router 2001:1:1::1/64
++	__simple_if_init $rtr2 router 2001:1:2::1/64
++	mtu_set $rtr2 1280
++}
++
++router_destroy()
++{
++	mtu_restore $rtr2
++	__simple_if_fini $rtr2 2001:1:2::1/64
++	__simple_if_fini $rtr1 2001:1:1::1/64
++	vrf_destroy router
++}
++
++h2_create()
++{
++	simple_if_init $h2 2001:1:2::2/64
++	ip -6 route add vrf v$h2 2001:1:1::/64 via 2001:1:2::1
++	mtu_set $h2 1280
++}
++
++h2_destroy()
++{
++	mtu_restore $h2
++	ip -6 route del vrf v$h2 2001:1:1::/64 via 2001:1:2::1
++	simple_if_fini $h2 2001:1:2::2/64
++}
++
++setup_prepare()
++{
++	h1=${NETIFS[p1]}
++	rtr1=${NETIFS[p2]}
++
++	rtr2=${NETIFS[p3]}
++	h2=${NETIFS[p4]}
++
++	vrf_prepare
++	h1_create
++	router_create
++	h2_create
++
++	forwarding_enable
++}
++
++cleanup()
++{
++	pre_cleanup
++
++	forwarding_restore
++
++	h2_destroy
++	router_destroy
++	h1_destroy
++	vrf_cleanup
++}
++
++ipv6_in_too_big_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Send a packet that is too big for the egress MTU
++	ip vrf exec $vrf_name \
++		$PING6 -s 1300 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InTooBigErrors"
++}
++
++ipv6_in_hdr_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Send packets with hop limit 1; traceroute6 is the easiest way, as
++	# some ping6 implementations don't allow the hop limit to be specified
++	ip vrf exec $vrf_name \
++		$TROUTE6 2001:1:2::2 &> /dev/null
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InHdrErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InHdrErrors"
++}
++
++ipv6_in_addr_err()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
++	local vrf_name=$(master_name_get $h1)
++
++	# Disable forwarding temporarily while sending the packet
++	sysctl -qw net.ipv6.conf.all.forwarding=0
++	ip vrf exec $vrf_name \
++		$PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++	sysctl -qw net.ipv6.conf.all.forwarding=1
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InAddrErrors)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InAddrErrors"
++}
++
++ipv6_in_discard()
++{
++	RET=0
++
++	local t0=$(ipv6_stats_get $rtr1 Ip6InDiscards)
++	local vrf_name=$(master_name_get $h1)
++
++	# Add a policy to discard
++	ip xfrm policy add dst 2001:1:2::2/128 dir fwd action block
++	ip vrf exec $vrf_name \
++		$PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null
++	ip xfrm policy del dst 2001:1:2::2/128 dir fwd
++
++	local t1=$(ipv6_stats_get $rtr1 Ip6InDiscards)
++	test "$((t1 - t0))" -ne 0
++	check_err $?
++	log_test "Ip6InDiscards"
++}
++
++ipv6_ping()
++{
++	RET=0
++
++	ping6_test $h1 2001:1:2::2
++}
++
++trap cleanup EXIT
++
++setup_prepare
++setup_wait
++tests_run
++
++exit $EXIT_STATUS
+diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
+index 42e28c983d41b..5140980f54758 100644
+--- a/tools/testing/selftests/net/forwarding/lib.sh
++++ b/tools/testing/selftests/net/forwarding/lib.sh
+@@ -748,6 +748,14 @@ qdisc_parent_stats_get()
+ 	    | jq '.[] | select(.parent == "'"$parent"'") | '"$selector"
+ }
+ 
++ipv6_stats_get()
++{
++	local dev=$1; shift
++	local stat=$1; shift
++
++	cat /proc/net/dev_snmp6/$dev | grep "^$stat" | cut -f2
++}
++
+ humanize()
+ {
+ 	local speed=$1; shift
+diff --git a/tools/testing/selftests/netfilter/nft_flowtable.sh b/tools/testing/selftests/netfilter/nft_flowtable.sh
+index 427d94816f2d6..d4ffebb989f88 100755
+--- a/tools/testing/selftests/netfilter/nft_flowtable.sh
++++ b/tools/testing/selftests/netfilter/nft_flowtable.sh
+@@ -199,7 +199,6 @@ fi
+ # test basic connectivity
+ if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then
+   echo "ERROR: ns1 cannot reach ns2" 1>&2
+-  bash
+   exit 1
+ fi
+ 
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index 2ea438e6b8b1f..a7e512a81d82c 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -414,9 +414,6 @@ static void uffd_test_ctx_init_ext(uint64_t *features)
+ 	uffd_test_ops->allocate_area((void **)&area_src);
+ 	uffd_test_ops->allocate_area((void **)&area_dst);
+ 
+-	uffd_test_ops->release_pages(area_src);
+-	uffd_test_ops->release_pages(area_dst);
+-
+ 	userfaultfd_open(features);
+ 
+ 	count_verify = malloc(nr_pages * sizeof(unsigned long long));
+@@ -437,6 +434,26 @@ static void uffd_test_ctx_init_ext(uint64_t *features)
+ 		*(area_count(area_src, nr) + 1) = 1;
+ 	}
+ 
++	/*
++	 * After initialization of area_src, we must explicitly release pages
++	 * for area_dst to make sure it's fully empty.  Otherwise we could have
++	 * some area_dst pages erroneously initialized with zero pages,
++	 * hence we could hit memory corruption later in the test.
++	 *
++	 * One example is when THP is globally enabled: the allocate_area()
++	 * calls above could have the two areas merged into a single VMA (as they
++	 * will have the same VMA flags so they're mergeable).  When we
++	 * initialize the area_src above, it's possible that some part of
++	 * area_dst could have been faulted in via one huge THP that will be
++	 * shared between area_src and area_dst.  That could cause parts of
++	 * area_dst not to be trapped by missing userfaults.
++	 *
++	 * This release_pages() guarantees that even if that happened, we'll
++	 * proactively split the THP and drop any accidentally initialized
++	 * pages within area_dst.
++	 */
++	uffd_test_ops->release_pages(area_dst);
++
+ 	pipefd = malloc(sizeof(int) * nr_cpus * 2);
+ 	if (!pipefd)
+ 		err("pipefd");
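
For context on the release_pages() call discussed in the comment above: for anonymous mappings the selftest implements it essentially as madvise(MADV_DONTNEED), which drops the pages and splits any THP spanning the range. A hedged userspace sketch of the idea (not the selftest's exact code):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Sketch: drop every page backing [area, area + len) so that later
     * accesses fault in fresh (and, for userfaultfd, trappable) pages. */
    static int release_area(void *area, size_t len)
    {
    	if (madvise(area, len, MADV_DONTNEED)) {
    		perror("madvise(MADV_DONTNEED)");
    		return -1;
    	}
    	return 0;
    }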


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-02 19:30 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-02 19:30 UTC (permalink / raw
  To: gentoo-commits

commit:     727abb72d5c6700b0a552aacec38e628eaa69fdf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov  2 19:30:02 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov  2 19:30:02 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=727abb72

Linux patch 5.14.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-5.14.16.patch | 4422 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4426 insertions(+)

diff --git a/0000_README b/0000_README
index ea788cb..8bcce4c 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1014_linux-5.14.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.15
 
+Patch:  1015_linux-5.14.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.14.16.patch b/1015_linux-5.14.16.patch
new file mode 100644
index 0000000..4584296
--- /dev/null
+++ b/1015_linux-5.14.16.patch
@@ -0,0 +1,4422 @@
+diff --git a/Makefile b/Makefile
+index e66341fba8a4e..02b6dab373ddb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c
+index aa075d8372ea2..74255e8198314 100644
+--- a/arch/arm/boot/compressed/decompress.c
++++ b/arch/arm/boot/compressed/decompress.c
+@@ -47,7 +47,10 @@ extern char * strchrnul(const char *, int);
+ #endif
+ 
+ #ifdef CONFIG_KERNEL_XZ
++/* Prevent KASAN override of string helpers in decompressor */
++#undef memmove
+ #define memmove memmove
++#undef memcpy
+ #define memcpy memcpy
+ #include "../../../../lib/decompress_unxz.c"
+ #endif
+diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
+index a13d902064722..d9db752c51fe2 100644
+--- a/arch/arm/include/asm/uaccess.h
++++ b/arch/arm/include/asm/uaccess.h
+@@ -200,6 +200,7 @@ extern int __get_user_64t_4(void *);
+ 		register unsigned long __l asm("r1") = __limit;		\
+ 		register int __e asm("r0");				\
+ 		unsigned int __ua_flags = uaccess_save_and_enable();	\
++		int __tmp_e;						\
+ 		switch (sizeof(*(__p))) {				\
+ 		case 1:							\
+ 			if (sizeof((x)) >= 8)				\
+@@ -227,9 +228,10 @@ extern int __get_user_64t_4(void *);
+ 			break;						\
+ 		default: __e = __get_user_bad(); break;			\
+ 		}							\
++		__tmp_e = __e;						\
+ 		uaccess_restore(__ua_flags);				\
+ 		x = (typeof(*(p))) __r2;				\
+-		__e;							\
++		__tmp_e;						\
+ 	})
+ 
+ #define get_user(x, p)							\
+diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
+index 29070eb8df7d9..3fc7f9750ce4b 100644
+--- a/arch/arm/kernel/head.S
++++ b/arch/arm/kernel/head.S
+@@ -253,7 +253,7 @@ __create_page_tables:
+ 	add	r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ORDER)
+ 	ldr	r6, =(_end - 1)
+ 	adr_l	r5, kernel_sec_start		@ _pa(kernel_sec_start)
+-#ifdef CONFIG_CPU_ENDIAN_BE8
++#if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
+ 	str	r8, [r5, #4]			@ Save physical start of kernel (BE)
+ #else
+ 	str	r8, [r5]			@ Save physical start of kernel (LE)
+@@ -266,7 +266,7 @@ __create_page_tables:
+ 	bls	1b
+ 	eor	r3, r3, r7			@ Remove the MMU flags
+ 	adr_l	r5, kernel_sec_end		@ _pa(kernel_sec_end)
+-#ifdef CONFIG_CPU_ENDIAN_BE8
++#if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
+ 	str	r3, [r5, #4]			@ Save physical end of kernel (BE)
+ #else
+ 	str	r3, [r5]			@ Save physical end of kernel (LE)
+diff --git a/arch/arm/kernel/vmlinux-xip.lds.S b/arch/arm/kernel/vmlinux-xip.lds.S
+index 50136828f5b54..f14c2360ea0b1 100644
+--- a/arch/arm/kernel/vmlinux-xip.lds.S
++++ b/arch/arm/kernel/vmlinux-xip.lds.S
+@@ -40,6 +40,10 @@ SECTIONS
+ 		ARM_DISCARD
+ 		*(.alt.smp.init)
+ 		*(.pv_table)
++#ifndef CONFIG_ARM_UNWIND
++		*(.ARM.exidx) *(.ARM.exidx.*)
++		*(.ARM.extab) *(.ARM.extab.*)
++#endif
+ 	}
+ 
+ 	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
+@@ -172,7 +176,7 @@ ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
+ ASSERT((_end - __bss_start) >= 12288, ".bss too small for CONFIG_XIP_DEFLATED_DATA")
+ #endif
+ 
+-#ifdef CONFIG_ARM_MPU
++#if defined(CONFIG_ARM_MPU) && !defined(CONFIG_COMPILE_TEST)
+ /*
+  * Due to PMSAv7 restriction on base address and size we have to
+  * enforce minimal alignment restrictions. It was seen that weaker
+diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
+index e2c743aa2eb2b..d9f7dfe2a7ed3 100644
+--- a/arch/arm/mm/proc-macros.S
++++ b/arch/arm/mm/proc-macros.S
+@@ -340,6 +340,7 @@ ENTRY(\name\()_cache_fns)
+ 
+ .macro define_tlb_functions name:req, flags_up:req, flags_smp
+ 	.type	\name\()_tlb_fns, #object
++	.align 2
+ ENTRY(\name\()_tlb_fns)
+ 	.long	\name\()_flush_user_tlb_range
+ 	.long	\name\()_flush_kern_tlb_range
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index 27e0af78e88b0..9d8634e2f12f7 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -439,7 +439,7 @@ static struct undef_hook kprobes_arm_break_hook = {
+ 
+ #endif /* !CONFIG_THUMB2_KERNEL */
+ 
+-int __init arch_init_kprobes()
++int __init arch_init_kprobes(void)
+ {
+ 	arm_probes_decode_init();
+ #ifdef CONFIG_THUMB2_KERNEL
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts b/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
+index 02f8e72f0cad1..05486cccee1c4 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5-nanopi-neo2.dts
+@@ -75,7 +75,7 @@
+ 	pinctrl-0 = <&emac_rgmii_pins>;
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	phy-handle = <&ext_rgmii_phy>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
+index d17abb5158351..e99e7644ff392 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-s.dts
+@@ -70,7 +70,9 @@
+ 		regulator-name = "rst-usb-eth2";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_usb_eth2>;
+-		gpio = <&gpio3 2 GPIO_ACTIVE_LOW>;
++		gpio = <&gpio3 2 GPIO_ACTIVE_HIGH>;
++		enable-active-high;
++		regulator-always-on;
+ 	};
+ 
+ 	reg_vdd_5v: regulator-5v {
+@@ -95,7 +97,7 @@
+ 		clocks = <&osc_can>;
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <28 IRQ_TYPE_EDGE_FALLING>;
+-		spi-max-frequency = <100000>;
++		spi-max-frequency = <10000000>;
+ 		vdd-supply = <&reg_vdd_3v3>;
+ 		xceiver-supply = <&reg_vdd_5v>;
+ 	};
+@@ -111,7 +113,7 @@
+ &fec1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+-	phy-connection-type = "rgmii";
++	phy-connection-type = "rgmii-rxid";
+ 	phy-handle = <&ethphy>;
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+index 9db9b90bf2bc9..42bbbb3f532bc 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-n801x-som.dtsi
+@@ -91,10 +91,12 @@
+ 			reg_vdd_soc: BUCK1 {
+ 				regulator-name = "buck1";
+ 				regulator-min-microvolt = <800000>;
+-				regulator-max-microvolt = <900000>;
++				regulator-max-microvolt = <850000>;
+ 				regulator-boot-on;
+ 				regulator-always-on;
+ 				regulator-ramp-delay = <3125>;
++				nxp,dvs-run-voltage = <850000>;
++				nxp,dvs-standby-voltage = <800000>;
+ 			};
+ 
+ 			reg_vdd_arm: BUCK2 {
+@@ -111,7 +113,7 @@
+ 			reg_vdd_dram: BUCK3 {
+ 				regulator-name = "buck3";
+ 				regulator-min-microvolt = <850000>;
+-				regulator-max-microvolt = <900000>;
++				regulator-max-microvolt = <950000>;
+ 				regulator-boot-on;
+ 				regulator-always-on;
+ 			};
+@@ -150,7 +152,7 @@
+ 
+ 			reg_vdd_snvs: LDO2 {
+ 				regulator-name = "ldo2";
+-				regulator-min-microvolt = <850000>;
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <900000>;
+ 				regulator-boot-on;
+ 				regulator-always-on;
+diff --git a/arch/nds32/kernel/ftrace.c b/arch/nds32/kernel/ftrace.c
+index 0e23e3a8df6b5..d55b73b18149e 100644
+--- a/arch/nds32/kernel/ftrace.c
++++ b/arch/nds32/kernel/ftrace.c
+@@ -6,7 +6,7 @@
+ 
+ #ifndef CONFIG_DYNAMIC_FTRACE
+ extern void (*ftrace_trace_function)(unsigned long, unsigned long,
+-				     struct ftrace_ops*, struct pt_regs*);
++				     struct ftrace_ops*, struct ftrace_regs*);
+ extern void ftrace_graph_caller(void);
+ 
+ noinline void __naked ftrace_stub(unsigned long ip, unsigned long parent_ip,
+diff --git a/arch/nios2/platform/Kconfig.platform b/arch/nios2/platform/Kconfig.platform
+index 9e32fb7f3d4ce..e849daff6fd16 100644
+--- a/arch/nios2/platform/Kconfig.platform
++++ b/arch/nios2/platform/Kconfig.platform
+@@ -37,6 +37,7 @@ config NIOS2_DTB_PHYS_ADDR
+ 
+ config NIOS2_DTB_SOURCE_BOOL
+ 	bool "Compile and link device tree into kernel image"
++	depends on !COMPILE_TEST
+ 	help
+ 	  This allows you to specify a dts (device tree source) file
+ 	  which will be compiled and linked into the kernel image.
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 4f7b70ae7c319..cd295e30b7acc 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -157,6 +157,12 @@ config PAGE_OFFSET
+ 	default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB
+ 	default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB
+ 
++config KASAN_SHADOW_OFFSET
++	hex
++	depends on KASAN_GENERIC
++	default 0xdfffffc800000000 if 64BIT
++	default 0xffffffff if 32BIT
++
+ config ARCH_FLATMEM_ENABLE
+ 	def_bool !NUMA
+ 
+diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
+index a2b3d9cdbc868..b00f503ec1248 100644
+--- a/arch/riscv/include/asm/kasan.h
++++ b/arch/riscv/include/asm/kasan.h
+@@ -30,8 +30,7 @@
+ #define KASAN_SHADOW_SIZE	(UL(1) << ((CONFIG_VA_BITS - 1) - KASAN_SHADOW_SCALE_SHIFT))
+ #define KASAN_SHADOW_START	KERN_VIRT_START
+ #define KASAN_SHADOW_END	(KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+-#define KASAN_SHADOW_OFFSET	(KASAN_SHADOW_END - (1ULL << \
+-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
++#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+ 
+ void kasan_init(void);
+ asmlinkage void kasan_early_init(void);
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index fce5184b22c34..52c5ff9804c55 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -193,6 +193,7 @@ setup_trap_vector:
+ 	csrw CSR_SCRATCH, zero
+ 	ret
+ 
++.align 2
+ .Lsecondary_park:
+ 	/* We lack SMP support or have too many harts, so park this hart */
+ 	wfi
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index d7189c8714a95..54294f83513d1 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -17,6 +17,9 @@ asmlinkage void __init kasan_early_init(void)
+ 	uintptr_t i;
+ 	pgd_t *pgd = early_pg_dir + pgd_index(KASAN_SHADOW_START);
+ 
++	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
++		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
++
+ 	for (i = 0; i < PTRS_PER_PTE; ++i)
+ 		set_pte(kasan_early_shadow_pte + i,
+ 			mk_pte(virt_to_page(kasan_early_shadow_page),
+@@ -172,21 +175,10 @@ void __init kasan_init(void)
+ 	phys_addr_t p_start, p_end;
+ 	u64 i;
+ 
+-	/*
+-	 * Populate all kernel virtual address space with kasan_early_shadow_page
+-	 * except for the linear mapping and the modules/kernel/BPF mapping.
+-	 */
+-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
+-				    (void *)kasan_mem_to_shadow((void *)
+-								VMEMMAP_END));
+ 	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+ 		kasan_shallow_populate(
+ 			(void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+ 			(void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+-	else
+-		kasan_populate_early_shadow(
+-			(void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+-			(void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+ 
+ 	/* Populate the linear mapping */
+ 	for_each_mem_range(i, &p_start, &p_end) {
+diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
+index fed86f42dfbe5..5d247198c30d3 100644
+--- a/arch/riscv/net/bpf_jit_core.c
++++ b/arch/riscv/net/bpf_jit_core.c
+@@ -125,7 +125,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 
+ 	if (i == NR_JIT_ITERATIONS) {
+ 		pr_err("bpf-jit: image did not converge in <%d passes!\n", i);
+-		bpf_jit_binary_free(jit_data->header);
++		if (jit_data->header)
++			bpf_jit_binary_free(jit_data->header);
+ 		prog = orig_prog;
+ 		goto out_offset;
+ 	}
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 16256e17a544a..ee9d052476b50 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -3053,13 +3053,14 @@ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+ 	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
+ 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+ 	struct kvm_vcpu *vcpu;
++	u8 vcpu_isc_mask;
+ 
+ 	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
+ 		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+ 		if (psw_ioint_disabled(vcpu))
+ 			continue;
+-		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+-		if (deliverable_mask) {
++		vcpu_isc_mask = (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
++		if (deliverable_mask & vcpu_isc_mask) {
+ 			/* lately kicked but not yet running */
+ 			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
+ 				return;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 8580543c5bc33..46ad1bdd53a27 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -3341,6 +3341,7 @@ out_free_sie_block:
+ 
+ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+ {
++	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);
+ 	return kvm_s390_vcpu_has_irq(vcpu, 0);
+ }
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 471b35d0b121e..41f7ee07271e1 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1084,7 +1084,7 @@ struct kvm_arch {
+ 	u64 cur_tsc_generation;
+ 	int nr_vcpus_matched_tsc;
+ 
+-	spinlock_t pvclock_gtod_sync_lock;
++	raw_spinlock_t pvclock_gtod_sync_lock;
+ 	bool use_master_clock;
+ 	u64 master_kernel_ns;
+ 	u64 master_cycle_now;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 9959888cb10c8..675a9bbf322e0 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2592,11 +2592,20 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ 
+ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ {
+-	if (!setup_vmgexit_scratch(svm, in, svm->vmcb->control.exit_info_2))
++	int count;
++	int bytes;
++
++	if (svm->vmcb->control.exit_info_2 > INT_MAX)
++		return -EINVAL;
++
++	count = svm->vmcb->control.exit_info_2;
++	if (unlikely(check_mul_overflow(count, size, &bytes)))
++		return -EINVAL;
++
++	if (!setup_vmgexit_scratch(svm, in, bytes))
+ 		return -EINVAL;
+ 
+-	return kvm_sev_es_string_io(&svm->vcpu, size, port,
+-				    svm->ghcb_sa, svm->ghcb_sa_len / size, in);
++	return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->ghcb_sa, count, in);
+ }
+ 
+ void sev_es_init_vmcb(struct vcpu_svm *svm)
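
The sev_es_string_io() fix above leans on the kernel's overflow-checked arithmetic helpers instead of an unchecked multiply of guest-controlled values. A minimal sketch of the pattern (function and parameter names here are hypothetical):

    #include <linux/kernel.h>
    #include <linux/overflow.h>

    /* Sketch: derive a byte count from an untrusted element count. */
    static int bytes_from_untrusted(u64 raw_count, int elem_size, int *bytes_out)
    {
    	int count, bytes;

    	if (raw_count > INT_MAX)	/* value cannot fit in an int */
    		return -EINVAL;

    	count = raw_count;
    	if (check_mul_overflow(count, elem_size, &bytes))
    		return -EINVAL;		/* count * elem_size overflowed */

    	*bytes_out = bytes;
    	return 0;
    }

check_mul_overflow() (and its sibling check_add_overflow(), used by the qib fix later in this patch) returns true on overflow, so the error path rejects the request before any buffer is sized from the bogus value.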
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8e9df0e00f3dd..6aea38dfb0bb0 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2537,7 +2537,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
+ 	kvm_vcpu_write_tsc_offset(vcpu, offset);
+ 	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
+ 
+-	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
++	raw_spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
+ 	if (!matched) {
+ 		kvm->arch.nr_vcpus_matched_tsc = 0;
+ 	} else if (!already_matched) {
+@@ -2545,7 +2545,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
+ 	}
+ 
+ 	kvm_track_tsc_matching(vcpu);
+-	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
++	raw_spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+ }
+ 
+ static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
+@@ -2775,9 +2775,9 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
+ 	kvm_make_mclock_inprogress_request(kvm);
+ 
+ 	/* no guest entries from this point */
+-	spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
+ 	pvclock_update_vm_gtod_copy(kvm);
+-	spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+ 
+ 	kvm_for_each_vcpu(i, vcpu, kvm)
+ 		kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
+@@ -2795,15 +2795,15 @@ u64 get_kvmclock_ns(struct kvm *kvm)
+ 	unsigned long flags;
+ 	u64 ret;
+ 
+-	spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
+ 	if (!ka->use_master_clock) {
+-		spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
++		raw_spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+ 		return get_kvmclock_base_ns() + ka->kvmclock_offset;
+ 	}
+ 
+ 	hv_clock.tsc_timestamp = ka->master_cycle_now;
+ 	hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
+-	spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+ 
+ 	/* both __this_cpu_read() and rdtsc() should be on the same cpu */
+ 	get_cpu();
+@@ -2897,13 +2897,13 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
+ 	 * If the host uses TSC clock, then passthrough TSC as stable
+ 	 * to the guest.
+ 	 */
+-	spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
+ 	use_master_clock = ka->use_master_clock;
+ 	if (use_master_clock) {
+ 		host_tsc = ka->master_cycle_now;
+ 		kernel_ns = ka->master_kernel_ns;
+ 	}
+-	spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
++	raw_spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+ 
+ 	/* Keep irq disabled to prevent changes to the clock */
+ 	local_irq_save(flags);
+@@ -6101,13 +6101,13 @@ set_pit2_out:
+ 		 * is slightly ahead) here we risk going negative on unsigned
+ 		 * 'system_time' when 'user_ns.clock' is very small.
+ 		 */
+-		spin_lock_irq(&ka->pvclock_gtod_sync_lock);
++		raw_spin_lock_irq(&ka->pvclock_gtod_sync_lock);
+ 		if (kvm->arch.use_master_clock)
+ 			now_ns = ka->master_kernel_ns;
+ 		else
+ 			now_ns = get_kvmclock_base_ns();
+ 		ka->kvmclock_offset = user_ns.clock - now_ns;
+-		spin_unlock_irq(&ka->pvclock_gtod_sync_lock);
++		raw_spin_unlock_irq(&ka->pvclock_gtod_sync_lock);
+ 
+ 		kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE);
+ 		break;
+@@ -8157,9 +8157,9 @@ static void kvm_hyperv_tsc_notifier(void)
+ 	list_for_each_entry(kvm, &vm_list, vm_list) {
+ 		struct kvm_arch *ka = &kvm->arch;
+ 
+-		spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
++		raw_spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
+ 		pvclock_update_vm_gtod_copy(kvm);
+-		spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
++		raw_spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+ 
+ 		kvm_for_each_vcpu(cpu, vcpu, kvm)
+ 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
+@@ -8799,9 +8799,17 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
+ 
+ 	kvm_run->cr8 = kvm_get_cr8(vcpu);
+ 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
++
++	/*
++	 * The call to kvm_ready_for_interrupt_injection() may end up in
++	 * kvm_xen_has_interrupt() which may require the srcu lock to be
++	 * held, to protect against changes in the vcpu_info address.
++	 */
++	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+ 	kvm_run->ready_for_interrupt_injection =
+ 		pic_in_kernel(vcpu->kvm) ||
+ 		kvm_vcpu_ready_for_interrupt_injection(vcpu);
++	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
+ 
+ 	if (is_smm(vcpu))
+ 		kvm_run->flags |= KVM_RUN_X86_SMM;
+@@ -11148,7 +11156,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ 
+ 	raw_spin_lock_init(&kvm->arch.tsc_write_lock);
+ 	mutex_init(&kvm->arch.apic_map_lock);
+-	spin_lock_init(&kvm->arch.pvclock_gtod_sync_lock);
++	raw_spin_lock_init(&kvm->arch.pvclock_gtod_sync_lock);
+ 
+ 	kvm->arch.kvmclock_offset = -get_kvmclock_base_ns();
+ 	pvclock_update_vm_gtod_copy(kvm);
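
The pvclock_gtod_sync_lock conversion above switches to raw_spinlock_t because this lock is taken in contexts that must never sleep; on PREEMPT_RT an ordinary spinlock_t becomes a sleeping lock, while a raw spinlock keeps true spinning semantics. A hedged sketch of the pattern, with hypothetical names:

    #include <linux/spinlock.h>

    struct clock_state {
    	raw_spinlock_t lock;	/* stays a real spinlock even on PREEMPT_RT */
    	u64 master_ns;
    };

    static void clock_state_init(struct clock_state *cs)
    {
    	raw_spin_lock_init(&cs->lock);
    }

    static void clock_state_update(struct clock_state *cs, u64 ns)
    {
    	unsigned long flags;

    	raw_spin_lock_irqsave(&cs->lock, flags);
    	cs->master_ns = ns;	/* keep the critical section short */
    	raw_spin_unlock_irqrestore(&cs->lock, flags);
    }

Because raw spinlocks disable preemption unconditionally, the trade-off is that critical sections under them must stay short.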
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index ae17250e1efe0..283a4744a9e78 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -191,6 +191,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
+ 
+ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
+ {
++	int err;
+ 	u8 rc = 0;
+ 
+ 	/*
+@@ -217,13 +218,29 @@ int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
+ 	if (likely(slots->generation == ghc->generation &&
+ 		   !kvm_is_error_hva(ghc->hva) && ghc->memslot)) {
+ 		/* Fast path */
+-		__get_user(rc, (u8 __user *)ghc->hva + offset);
+-	} else {
+-		/* Slow path */
+-		kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
+-					     sizeof(rc));
++		pagefault_disable();
++		err = __get_user(rc, (u8 __user *)ghc->hva + offset);
++		pagefault_enable();
++		if (!err)
++			return rc;
+ 	}
+ 
++	/* Slow path */
++
++	/*
++	 * This function gets called from kvm_vcpu_block() after setting the
++	 * task to TASK_INTERRUPTIBLE, to see if it needs to wake immediately
++	 * from a HLT. So we really mustn't sleep. If the page ended up absent
++	 * at that point, just return 1 in order to trigger an immediate wake,
++	 * and we'll end up getting called again from a context where we *can*
++	 * fault in the page and wait for it.
++	 */
++	if (in_atomic() || !task_is_running(current))
++		return 1;
++
++	kvm_read_guest_offset_cached(v->kvm, ghc, &rc, offset,
++				     sizeof(rc));
++
+ 	return rc;
+ }
+ 
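
The __kvm_xen_has_interrupt() change above shows a common two-tier access pattern: try a non-sleeping user access first, and only fall back to a faulting path when the caller is allowed to sleep. A minimal sketch of the fast path (helper name hypothetical):

    #include <linux/uaccess.h>

    /* Sketch: read one byte of user memory without ever sleeping. */
    static int peek_user_byte_atomic(const u8 __user *uaddr, u8 *val)
    {
    	int err;

    	pagefault_disable();	/* a fault now fails instead of sleeping */
    	err = __get_user(*val, uaddr);
    	pagefault_enable();

    	return err;	/* 0 on success, -EFAULT if the page was absent */
    }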
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 902c40d671202..4c7c0c17cb0a3 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -842,6 +842,24 @@ bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
+ }
+ EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
+ 
++static bool disk_has_partitions(struct gendisk *disk)
++{
++	unsigned long idx;
++	struct block_device *part;
++	bool ret = false;
++
++	rcu_read_lock();
++	xa_for_each(&disk->part_tbl, idx, part) {
++		if (bdev_is_partition(part)) {
++			ret = true;
++			break;
++		}
++	}
++	rcu_read_unlock();
++
++	return ret;
++}
++
+ /**
+  * blk_queue_set_zoned - configure a disk queue zoned model.
+  * @disk:	the gendisk of the queue to configure
+@@ -876,7 +894,7 @@ void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+ 		 * we do nothing special as far as the block layer is concerned.
+ 		 */
+ 		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) ||
+-		    !xa_empty(&disk->part_tbl))
++		    disk_has_partitions(disk))
+ 			model = BLK_ZONED_NONE;
+ 		break;
+ 	case BLK_ZONED_NONE:
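
disk_has_partitions() above demonstrates the standard way to scan an xarray: take rcu_read_lock(), walk with xa_for_each(), and stop at the first match. A hedged generic sketch (the predicate and names are illustrative only):

    #include <linux/rcupdate.h>
    #include <linux/xarray.h>

    /* Sketch: return true if any xarray entry satisfies the predicate. */
    static bool xa_any_match(struct xarray *xa, bool (*match)(void *entry))
    {
    	unsigned long idx;
    	void *entry;
    	bool ret = false;

    	rcu_read_lock();	/* xa_for_each() may be walked under RCU */
    	xa_for_each(xa, idx, entry) {
    		if (match(entry)) {
    			ret = true;
    			break;
    		}
    	}
    	rcu_read_unlock();

    	return ret;
    }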
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index 9d86203e1e7a1..c53633d47bfb3 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -3896,8 +3896,8 @@ static int mv_chip_id(struct ata_host *host, unsigned int board_idx)
+ 		break;
+ 
+ 	default:
+-		dev_err(host->dev, "BUG: invalid board index %u\n", board_idx);
+-		return 1;
++		dev_alert(host->dev, "BUG: invalid board index %u\n", board_idx);
++		return -EINVAL;
+ 	}
+ 
+ 	hpriv->hp_flags = hp_flags;
+diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c
+index cfa29dc89bbff..fabf87058d80b 100644
+--- a/drivers/base/regmap/regcache-rbtree.c
++++ b/drivers/base/regmap/regcache-rbtree.c
+@@ -281,14 +281,14 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	if (!blk)
+ 		return -ENOMEM;
+ 
++	rbnode->block = blk;
++
+ 	if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
+ 		present = krealloc(rbnode->cache_present,
+ 				   BITS_TO_LONGS(blklen) * sizeof(*present),
+ 				   GFP_KERNEL);
+-		if (!present) {
+-			kfree(blk);
++		if (!present)
+ 			return -ENOMEM;
+-		}
+ 
+ 		memset(present + BITS_TO_LONGS(rbnode->blklen), 0,
+ 		       (BITS_TO_LONGS(blklen) - BITS_TO_LONGS(rbnode->blklen))
+@@ -305,7 +305,6 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	}
+ 
+ 	/* update the rbnode block, its size and the base register */
+-	rbnode->block = blk;
+ 	rbnode->blklen = blklen;
+ 	rbnode->base_reg = base_reg;
+ 	rbnode->cache_present = present;
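
The regcache-rbtree change is an ownership-ordering fix: by storing the reallocated block in the node before the next allocation, a failure of that second allocation no longer leaks the block, since it is now freed along with the node. A hedged sketch of the ordering, with hypothetical names:

    #include <linux/slab.h>

    struct cache_node {
    	void *block;
    	void *present;
    };

    /* Sketch: transfer ownership of 'blk' to the node *before* the next
     * allocation, so an error return cannot leak it. */
    static int cache_node_grow(struct cache_node *n, size_t blklen, size_t maplen)
    {
    	void *blk, *map;

    	blk = krealloc(n->block, blklen, GFP_KERNEL);
    	if (!blk)
    		return -ENOMEM;

    	n->block = blk;	/* the node now owns blk; node teardown frees it */

    	map = krealloc(n->present, maplen, GFP_KERNEL);
    	if (!map)
    		return -ENOMEM;	/* blk is not leaked: it lives in n->block */

    	n->present = map;
    	return 0;
    }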
+diff --git a/drivers/gpio/gpio-xgs-iproc.c b/drivers/gpio/gpio-xgs-iproc.c
+index ad5489a65d542..dd40277b9d06d 100644
+--- a/drivers/gpio/gpio-xgs-iproc.c
++++ b/drivers/gpio/gpio-xgs-iproc.c
+@@ -224,7 +224,7 @@ static int iproc_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	chip->gc.label = dev_name(dev);
+-	if (of_property_read_u32(dn, "ngpios", &num_gpios))
++	if (!of_property_read_u32(dn, "ngpios", &num_gpios))
+ 		chip->gc.ngpio = num_gpios;
+ 
+ 	irq = platform_get_irq(pdev, 0);
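
The gpio-xgs-iproc fix above hinges on of_property_read_u32() returning 0 on success, so the result must be negated when used as a "property present" test. A small hedged sketch of the correct check:

    #include <linux/of.h>

    /* Sketch: read an optional DT property, keeping a default on absence. */
    static u32 read_ngpios_or_default(struct device_node *np, u32 def)
    {
    	u32 val;

    	/* Returns 0 on success, a negative errno when absent/invalid. */
    	if (!of_property_read_u32(np, "ngpios", &val))
    		return val;	/* property present and parsed */

    	return def;
    }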
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 94d029dbf30da..fbcee5d7d6a11 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -1237,7 +1237,7 @@ static int nv_common_early_init(void *handle)
+ 			AMD_PG_SUPPORT_VCN_DPG |
+ 			AMD_PG_SUPPORT_JPEG;
+ 		if (adev->pdev->device == 0x1681)
+-			adev->external_rev_id = adev->rev_id + 0x19;
++			adev->external_rev_id = 0x20;
+ 		else
+ 			adev->external_rev_id = adev->rev_id + 0x01;
+ 		break;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 1d15a9af99560..4c8edfdc3cac8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -263,7 +263,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	if (!wr_buf)
+ 		return -ENOSPC;
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   (long *)param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+@@ -487,7 +487,7 @@ static ssize_t dp_phy_settings_write(struct file *f, const char __user *buf,
+ 	if (!wr_buf)
+ 		return -ENOSPC;
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   (long *)param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+@@ -639,7 +639,7 @@ static ssize_t dp_phy_test_pattern_debugfs_write(struct file *f, const char __us
+ 	if (!wr_buf)
+ 		return -ENOSPC;
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   (long *)param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+@@ -914,7 +914,7 @@ static ssize_t dp_dsc_passthrough_set(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   &param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+@@ -1211,7 +1211,7 @@ static ssize_t trigger_hotplug(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 						(long *)param, buf,
+ 						max_param_num,
+ 						&param_nums)) {
+@@ -1396,7 +1396,7 @@ static ssize_t dp_dsc_clock_en_write(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					    (long *)param, buf,
+ 					    max_param_num,
+ 					    &param_nums)) {
+@@ -1581,7 +1581,7 @@ static ssize_t dp_dsc_slice_width_write(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					    (long *)param, buf,
+ 					    max_param_num,
+ 					    &param_nums)) {
+@@ -1766,7 +1766,7 @@ static ssize_t dp_dsc_slice_height_write(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					    (long *)param, buf,
+ 					    max_param_num,
+ 					    &param_nums)) {
+@@ -1944,7 +1944,7 @@ static ssize_t dp_dsc_bits_per_pixel_write(struct file *f, const char __user *bu
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					    (long *)param, buf,
+ 					    max_param_num,
+ 					    &param_nums)) {
+@@ -2382,7 +2382,7 @@ static ssize_t dp_max_bpc_write(struct file *f, const char __user *buf,
+ 		return -ENOSPC;
+ 	}
+ 
+-	if (parse_write_buffer_into_params(wr_buf, size,
++	if (parse_write_buffer_into_params(wr_buf, wr_buf_size,
+ 					   (long *)param, buf,
+ 					   max_param_num,
+ 					   &param_nums)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index 4a4894e9d9c9a..377c4e53a2b37 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -366,32 +366,32 @@ static struct wm_table lpddr5_wm_table = {
+ 			.wm_inst = WM_A,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 5.32,
+-			.sr_enter_plus_exit_time_us = 6.38,
++			.sr_exit_time_us = 11.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_B,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.82,
+-			.sr_enter_plus_exit_time_us = 11.196,
++			.sr_exit_time_us = 11.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_C,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.89,
+-			.sr_enter_plus_exit_time_us = 11.24,
++			.sr_exit_time_us = 11.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 		{
+ 			.wm_inst = WM_D,
+ 			.wm_type = WM_TYPE_PSTATE_CHG,
+ 			.pstate_latency_us = 11.65333,
+-			.sr_exit_time_us = 9.748,
+-			.sr_enter_plus_exit_time_us = 11.102,
++			.sr_exit_time_us = 11.5,
++			.sr_enter_plus_exit_time_us = 14.5,
+ 			.valid = true,
+ 		},
+ 	}
+@@ -518,14 +518,21 @@ static unsigned int find_clk_for_voltage(
+ 		unsigned int voltage)
+ {
+ 	int i;
++	int max_voltage = 0;
++	int clock = 0;
+ 
+ 	for (i = 0; i < NUM_SOC_VOLTAGE_LEVELS; i++) {
+-		if (clock_table->SocVoltage[i] == voltage)
++		if (clock_table->SocVoltage[i] == voltage) {
+ 			return clocks[i];
++		} else if (clock_table->SocVoltage[i] >= max_voltage &&
++				clock_table->SocVoltage[i] < voltage) {
++			max_voltage = clock_table->SocVoltage[i];
++			clock = clocks[i];
++		}
+ 	}
+ 
+-	ASSERT(0);
+-	return 0;
++	ASSERT(clock);
++	return clock;
+ }
+ 
+ void dcn31_clk_mgr_helper_populate_bw_params(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+index 8189606537c5a..f215f671210f8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+@@ -76,10 +76,6 @@ void dcn31_init_hw(struct dc *dc)
+ 	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ 		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+ 
+-	// Initialize the dccg
+-	if (res_pool->dccg->funcs->dccg_init)
+-		res_pool->dccg->funcs->dccg_init(res_pool->dccg);
+-
+ 	if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
+ 
+ 		REG_WRITE(REFCLK_CNTL, 0);
+@@ -106,6 +102,9 @@ void dcn31_init_hw(struct dc *dc)
+ 		hws->funcs.bios_golden_init(dc);
+ 		hws->funcs.disable_vga(dc->hwseq);
+ 	}
++	// Initialize the dccg
++	if (res_pool->dccg->funcs->dccg_init)
++		res_pool->dccg->funcs->dccg_init(res_pool->dccg);
+ 
+ 	if (dc->debug.enable_mem_low_power.bits.dmcu) {
+ 		// Force ERAM to shutdown if DMCU is not enabled
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index 7ea362a864c43..4c9eed3f0713f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -217,8 +217,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_1_soc = {
+ 	.num_states = 5,
+ 	.sr_exit_time_us = 9.0,
+ 	.sr_enter_plus_exit_time_us = 11.0,
+-	.sr_exit_z8_time_us = 402.0,
+-	.sr_enter_plus_exit_z8_time_us = 520.0,
++	.sr_exit_z8_time_us = 442.0,
++	.sr_enter_plus_exit_z8_time_us = 560.0,
+ 	.writeback_latency_us = 12.0,
+ 	.dram_channel_width_bytes = 4,
+ 	.round_trip_ping_latency_dcfclk_cycles = 106,
+@@ -928,7 +928,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.disable_dcc = DCC_ENABLE,
+ 	.vsr_support = true,
+ 	.performance_trace = false,
+-	.max_downscale_src_width = 3840,/*upto 4K*/
++	.max_downscale_src_width = 4096,/*up to true 4K*/
+ 	.disable_pplib_wm_range = false,
+ 	.scl_reset_length10 = true,
+ 	.sanity_checks = false,
+@@ -1591,6 +1591,13 @@ static int dcn31_populate_dml_pipes_from_context(
+ 		pipe = &res_ctx->pipe_ctx[i];
+ 		timing = &pipe->stream->timing;
+ 
++		/*
++		 * Immediate flip can be set dynamically after enabling the plane.
++		 * We need to require support for immediate flip or underflow can be
++		 * intermittently experienced depending on peak b/w requirements.
++		 */
++		pipes[pipe_cnt].pipe.src.immediate_flip = true;
++
+ 		pipes[pipe_cnt].pipe.src.unbounded_req_mode = false;
+ 		pipes[pipe_cnt].pipe.src.gpuvm = true;
+ 		pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_luma = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+index a9667068c6901..ab52dd7b330d4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+@@ -5399,9 +5399,9 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 
+ 					v->MaximumReadBandwidthWithPrefetch =
+ 							v->MaximumReadBandwidthWithPrefetch
+-									+ dml_max4(
+-											v->VActivePixelBandwidth[i][j][k],
+-											v->VActiveCursorBandwidth[i][j][k]
++									+ dml_max3(
++											v->VActivePixelBandwidth[i][j][k]
++													+ v->VActiveCursorBandwidth[i][j][k]
+ 													+ v->NoOfDPP[i][j][k]
+ 															* (v->meta_row_bandwidth[i][j][k]
+ 																	+ v->dpte_row_bandwidth[i][j][k]),
+diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+index 5adc471bef57f..3d2f0817e40a1 100644
+--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
++++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+@@ -227,7 +227,7 @@ enum {
+ #define FAMILY_YELLOW_CARP                     146
+ 
+ #define YELLOW_CARP_A0 0x01
+-#define YELLOW_CARP_B0 0x1A
++#define YELLOW_CARP_B0 0x20
+ #define YELLOW_CARP_UNKNOWN 0xFF
+ 
+ #ifndef ASICREV_IS_YELLOW_CARP
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index 1b02056bc8bde..422759f9078d9 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -105,6 +105,7 @@ static enum mod_hdcp_status mod_hdcp_remove_display_from_topology_v3(
+ 	dtm_cmd->dtm_status = TA_DTM_STATUS__GENERIC_FAILURE;
+ 
+ 	psp_dtm_invoke(psp, dtm_cmd->cmd_id);
++	mutex_unlock(&psp->dtm_context.mutex);
+ 
+ 	if (dtm_cmd->dtm_status != TA_DTM_STATUS__SUCCESS) {
+ 		status = mod_hdcp_remove_display_from_topology_v2(hdcp, index);
+@@ -115,8 +116,6 @@ static enum mod_hdcp_status mod_hdcp_remove_display_from_topology_v3(
+ 		HDCP_TOP_REMOVE_DISPLAY_TRACE(hdcp, display->index);
+ 	}
+ 
+-	mutex_unlock(&psp->dtm_context.mutex);
+-
+ 	return status;
+ }
+ 
+@@ -218,6 +217,7 @@ static enum mod_hdcp_status mod_hdcp_add_display_to_topology_v3(
+ 	dtm_cmd->dtm_in_message.topology_update_v3.link_hdcp_cap = link->hdcp_supported_informational;
+ 
+ 	psp_dtm_invoke(psp, dtm_cmd->cmd_id);
++	mutex_unlock(&psp->dtm_context.mutex);
+ 
+ 	if (dtm_cmd->dtm_status != TA_DTM_STATUS__SUCCESS) {
+ 		status = mod_hdcp_add_display_to_topology_v2(hdcp, display);
+@@ -227,8 +227,6 @@ static enum mod_hdcp_status mod_hdcp_add_display_to_topology_v3(
+ 		HDCP_TOP_ADD_DISPLAY_TRACE(hdcp, display->index);
+ 	}
+ 
+-	mutex_unlock(&psp->dtm_context.mutex);
+-
+ 	return status;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index d511e578ba79d..a1a150935264e 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1924,6 +1924,9 @@ void intel_dp_sync_state(struct intel_encoder *encoder,
+ {
+ 	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 
++	if (!crtc_state)
++		return;
++
+ 	/*
+ 	 * Don't clobber DPCD if it's been already read out during output
+ 	 * setup (eDP) or detect.
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index 1257f4f11e66f..438bbc7b81474 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -64,7 +64,7 @@ intel_timeline_pin_map(struct intel_timeline *timeline)
+ 
+ 	timeline->hwsp_map = vaddr;
+ 	timeline->hwsp_seqno = memset(vaddr + ofs, 0, TIMELINE_SEQNO_BYTES);
+-	clflush(vaddr + ofs);
++	drm_clflush_virt_range(vaddr + ofs, TIMELINE_SEQNO_BYTES);
+ 
+ 	return 0;
+ }
+@@ -225,7 +225,7 @@ void intel_timeline_reset_seqno(const struct intel_timeline *tl)
+ 
+ 	memset(hwsp_seqno + 1, 0, TIMELINE_SEQNO_BYTES - sizeof(*hwsp_seqno));
+ 	WRITE_ONCE(*hwsp_seqno, tl->seqno);
+-	clflush(hwsp_seqno);
++	drm_clflush_virt_range(hwsp_seqno, TIMELINE_SEQNO_BYTES);
+ }
+ 
+ void intel_timeline_enter(struct intel_timeline *tl)
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index 1c5ffe2935af5..abf2d7a4fdf1a 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -190,6 +190,7 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
+ 	struct ttm_transfer_obj *fbo;
+ 
+ 	fbo = container_of(bo, struct ttm_transfer_obj, base);
++	dma_resv_fini(&fbo->base.base._resv);
+ 	ttm_bo_put(fbo->bo);
+ 	kfree(fbo);
+ }
+diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
+index b61576f702b86..31fcf8e3bd728 100644
+--- a/drivers/infiniband/core/sa_query.c
++++ b/drivers/infiniband/core/sa_query.c
+@@ -760,8 +760,9 @@ static void ib_nl_set_path_rec_attrs(struct sk_buff *skb,
+ 
+ 	/* Construct the family header first */
+ 	header = skb_put(skb, NLMSG_ALIGN(sizeof(*header)));
+-	memcpy(header->device_name, dev_name(&query->port->agent->device->dev),
+-	       LS_DEVICE_NAME_MAX);
++	strscpy_pad(header->device_name,
++		    dev_name(&query->port->agent->device->dev),
++		    LS_DEVICE_NAME_MAX);
+ 	header->port_num = query->port->port_num;
+ 
+ 	if ((comp_mask & IB_SA_PATH_REC_REVERSIBLE) &&
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index e276522104c6a..b00f687e1a7c7 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -920,6 +920,7 @@ void sc_disable(struct send_context *sc)
+ {
+ 	u64 reg;
+ 	struct pio_buf *pbuf;
++	LIST_HEAD(wake_list);
+ 
+ 	if (!sc)
+ 		return;
+@@ -954,19 +955,21 @@ void sc_disable(struct send_context *sc)
+ 	spin_unlock(&sc->release_lock);
+ 
+ 	write_seqlock(&sc->waitlock);
+-	while (!list_empty(&sc->piowait)) {
++	if (!list_empty(&sc->piowait))
++		list_move(&sc->piowait, &wake_list);
++	write_sequnlock(&sc->waitlock);
++	while (!list_empty(&wake_list)) {
+ 		struct iowait *wait;
+ 		struct rvt_qp *qp;
+ 		struct hfi1_qp_priv *priv;
+ 
+-		wait = list_first_entry(&sc->piowait, struct iowait, list);
++		wait = list_first_entry(&wake_list, struct iowait, list);
+ 		qp = iowait_to_qp(wait);
+ 		priv = qp->priv;
+ 		list_del_init(&priv->s_iowait.list);
+ 		priv->s_iowait.lock = NULL;
+ 		hfi1_qp_wakeup(qp, RVT_S_WAIT_PIO | HFI1_S_WAIT_PIO_DRAIN);
+ 	}
+-	write_sequnlock(&sc->waitlock);
+ 
+ 	spin_unlock_irq(&sc->alloc_lock);
+ }
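
The sc_disable() fix above avoids issuing wakeups while holding the waitlock: it first moves all waiters to a stack-local list under the lock, then processes them after the lock is dropped. A hedged sketch of that detach-then-process idiom (types and the wakeup step are illustrative):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct waiter {
    	struct list_head list;
    };

    static void wake_all_waiters(struct list_head *waitq, spinlock_t *lock)
    {
    	LIST_HEAD(wake_list);	/* private list on the stack */
    	struct waiter *w, *tmp;

    	spin_lock(lock);
    	list_splice_init(waitq, &wake_list);	/* detach all waiters at once */
    	spin_unlock(lock);

    	/* Wakeups run without the lock held, so they may take other locks. */
    	list_for_each_entry_safe(w, tmp, &wake_list, list) {
    		list_del_init(&w->list);
    		/* ... wake up 'w' here ... */
    	}
    }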
+diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
+index 5fb92de1f015a..9b544a3b12886 100644
+--- a/drivers/infiniband/hw/irdma/uk.c
++++ b/drivers/infiniband/hw/irdma/uk.c
+@@ -1092,12 +1092,12 @@ irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, struct irdma_cq_poll_info *info)
+ 		if (cq->avoid_mem_cflct) {
+ 			ext_cqe = (__le64 *)((u8 *)cqe + 32);
+ 			get_64bit_val(ext_cqe, 24, &qword7);
+-			polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3);
++			polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword7);
+ 		} else {
+ 			peek_head = (cq->cq_ring.head + 1) % cq->cq_ring.size;
+ 			ext_cqe = cq->cq_base[peek_head].buf;
+ 			get_64bit_val(ext_cqe, 24, &qword7);
+-			polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3);
++			polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword7);
+ 			if (!peek_head)
+ 				polarity ^= 1;
+ 		}
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index fa393c5ea3973..4261705fa19d5 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -3405,9 +3405,13 @@ static void irdma_process_cqe(struct ib_wc *entry,
+ 		}
+ 
+ 		if (cq_poll_info->ud_vlan_valid) {
+-			entry->vlan_id = cq_poll_info->ud_vlan & VLAN_VID_MASK;
+-			entry->wc_flags |= IB_WC_WITH_VLAN;
++			u16 vlan = cq_poll_info->ud_vlan & VLAN_VID_MASK;
++
+ 			entry->sl = cq_poll_info->ud_vlan >> VLAN_PRIO_SHIFT;
++			if (vlan) {
++				entry->vlan_id = vlan;
++				entry->wc_flags |= IB_WC_WITH_VLAN;
++			}
+ 		} else {
+ 			entry->sl = 0;
+ 		}
+diff --git a/drivers/infiniband/hw/irdma/ws.c b/drivers/infiniband/hw/irdma/ws.c
+index b68c575eb78e7..b0d6ee0739f53 100644
+--- a/drivers/infiniband/hw/irdma/ws.c
++++ b/drivers/infiniband/hw/irdma/ws.c
+@@ -330,8 +330,10 @@ enum irdma_status_code irdma_ws_add(struct irdma_sc_vsi *vsi, u8 user_pri)
+ 
+ 		tc_node->enable = true;
+ 		ret = irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_MODIFY_NODE);
+-		if (ret)
++		if (ret) {
++			vsi->unregister_qset(vsi, tc_node);
+ 			goto reg_err;
++		}
+ 	}
+ 	ibdev_dbg(to_ibdev(vsi->dev),
+ 		  "WS: Using node %d which represents VSI %d TC %d\n",
+@@ -350,6 +352,10 @@ enum irdma_status_code irdma_ws_add(struct irdma_sc_vsi *vsi, u8 user_pri)
+ 	}
+ 	goto exit;
+ 
++reg_err:
++	irdma_ws_cqp_cmd(vsi, tc_node, IRDMA_OP_WS_DELETE_NODE);
++	list_del(&tc_node->siblings);
++	irdma_free_node(vsi, tc_node);
+ leaf_add_err:
+ 	if (list_empty(&vsi_node->child_list_head)) {
+ 		if (irdma_ws_cqp_cmd(vsi, vsi_node, IRDMA_OP_WS_DELETE_NODE))
+@@ -369,11 +375,6 @@ vsi_add_err:
+ exit:
+ 	mutex_unlock(&vsi->dev->ws_mutex);
+ 	return ret;
+-
+-reg_err:
+-	mutex_unlock(&vsi->dev->ws_mutex);
+-	irdma_ws_remove(vsi, user_pri);
+-	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 061dbee55cac1..4b598ec72372d 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1338,7 +1338,6 @@ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem,
+ 		goto err_2;
+ 	}
+ 	mr->mmkey.type = MLX5_MKEY_MR;
+-	mr->desc_size = sizeof(struct mlx5_mtt);
+ 	mr->umem = umem;
+ 	set_mr_fields(dev, mr, umem->length, access_flags);
+ 	kvfree(in);
+@@ -1532,6 +1531,7 @@ static struct ib_mr *create_user_odp_mr(struct ib_pd *pd, u64 start, u64 length,
+ 		ib_umem_release(&odp->umem);
+ 		return ERR_CAST(mr);
+ 	}
++	xa_init(&mr->implicit_children);
+ 
+ 	odp->private = mr;
+ 	err = mlx5r_store_odp_mkey(dev, &mr->mmkey);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index fd88b9ae96fe8..80d989edb7dd8 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -4309,6 +4309,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		MLX5_SET(dctc, dctc, mtu, attr->path_mtu);
+ 		MLX5_SET(dctc, dctc, my_addr_index, attr->ah_attr.grh.sgid_index);
+ 		MLX5_SET(dctc, dctc, hop_limit, attr->ah_attr.grh.hop_limit);
++		if (attr->ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
++			MLX5_SET(dctc, dctc, eth_prio, attr->ah_attr.sl & 0x7);
+ 
+ 		err = mlx5_core_create_dct(dev, &qp->dct.mdct, qp->dct.in,
+ 					   MLX5_ST_SZ_BYTES(create_dct_in), out,
+diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
+index a67599b5a550a..ac11943a5ddb0 100644
+--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
++++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
+@@ -602,7 +602,7 @@ done:
+ /*
+  * How many pages in this iovec element?
+  */
+-static int qib_user_sdma_num_pages(const struct iovec *iov)
++static size_t qib_user_sdma_num_pages(const struct iovec *iov)
+ {
+ 	const unsigned long addr  = (unsigned long) iov->iov_base;
+ 	const unsigned long  len  = iov->iov_len;
+@@ -658,7 +658,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
+ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
+ 				   struct qib_user_sdma_queue *pq,
+ 				   struct qib_user_sdma_pkt *pkt,
+-				   unsigned long addr, int tlen, int npages)
++				   unsigned long addr, int tlen, size_t npages)
+ {
+ 	struct page *pages[8];
+ 	int i, j;
+@@ -722,7 +722,7 @@ static int qib_user_sdma_pin_pkt(const struct qib_devdata *dd,
+ 	unsigned long idx;
+ 
+ 	for (idx = 0; idx < niov; idx++) {
+-		const int npages = qib_user_sdma_num_pages(iov + idx);
++		const size_t npages = qib_user_sdma_num_pages(iov + idx);
+ 		const unsigned long addr = (unsigned long) iov[idx].iov_base;
+ 
+ 		ret = qib_user_sdma_pin_pages(dd, pq, pkt, addr,
+@@ -824,8 +824,8 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 		unsigned pktnw;
+ 		unsigned pktnwc;
+ 		int nfrags = 0;
+-		int npages = 0;
+-		int bytes_togo = 0;
++		size_t npages = 0;
++		size_t bytes_togo = 0;
+ 		int tiddma = 0;
+ 		int cfur;
+ 
+@@ -885,7 +885,11 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 
+ 			npages += qib_user_sdma_num_pages(&iov[idx]);
+ 
+-			bytes_togo += slen;
++			if (check_add_overflow(bytes_togo, slen, &bytes_togo) ||
++			    bytes_togo > type_max(typeof(pkt->bytes_togo))) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
+ 			pktnwc += slen >> 2;
+ 			idx++;
+ 			nfrags++;
+@@ -904,8 +908,7 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 		}
+ 
+ 		if (frag_size) {
+-			int tidsmsize, n;
+-			size_t pktsize;
++			size_t tidsmsize, n, pktsize, sz, addrlimit;
+ 
+ 			n = npages*((2*PAGE_SIZE/frag_size)+1);
+ 			pktsize = struct_size(pkt, addr, n);
+@@ -923,14 +926,24 @@ static int qib_user_sdma_queue_pkts(const struct qib_devdata *dd,
+ 			else
+ 				tidsmsize = 0;
+ 
+-			pkt = kmalloc(pktsize+tidsmsize, GFP_KERNEL);
++			if (check_add_overflow(pktsize, tidsmsize, &sz)) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
++			pkt = kmalloc(sz, GFP_KERNEL);
+ 			if (!pkt) {
+ 				ret = -ENOMEM;
+ 				goto free_pbc;
+ 			}
+ 			pkt->largepkt = 1;
+ 			pkt->frag_size = frag_size;
+-			pkt->addrlimit = n + ARRAY_SIZE(pkt->addr);
++			if (check_add_overflow(n, ARRAY_SIZE(pkt->addr),
++					       &addrlimit) ||
++			    addrlimit > type_max(typeof(pkt->addrlimit))) {
++				ret = -EINVAL;
++				goto free_pbc;
++			}
++			pkt->addrlimit = addrlimit;
+ 
+ 			if (tiddma) {
+ 				char *tidsm = (char *)pkt + pktsize;
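
The qib changes widen the counters to size_t and guard every accumulation with check_add_overflow()/type_max() so a crafted iovec cannot wrap bytes_togo or addrlimit into a small value. Outside the kernel the same guard can be written with the GCC/Clang builtin that check_add_overflow() wraps (the u16 field width below is illustrative):

#include <stdint.h>
#include <stdio.h>

/* Accumulate lengths into a total bounded by a u16-sized field. */
static int add_len_checked(size_t *total, size_t len, uint16_t field_max)
{
	size_t sum;

	if (__builtin_add_overflow(*total, len, &sum) || sum > field_max)
		return -1;	/* would wrap or overflow the field: reject */

	*total = sum;
	return 0;
}

int main(void)
{
	size_t bytes_togo = 0;

	if (add_len_checked(&bytes_togo, 40000, UINT16_MAX) ||
	    add_len_checked(&bytes_togo, 40000, UINT16_MAX))
		puts("rejected oversized request");	/* 80000 > 65535 */
	else
		printf("bytes_togo=%zu\n", bytes_togo);
	return 0;
}
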
+diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
+index 38559a956330b..31f841231609a 100644
+--- a/drivers/mmc/host/cqhci-core.c
++++ b/drivers/mmc/host/cqhci-core.c
+@@ -282,6 +282,9 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
+ 
+ 	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
+ 
++	if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
++		cqhci_writel(cq_host, 0, CQHCI_CTL);
++
+ 	mmc->cqe_on = true;
+ 
+ 	if (cq_host->ops->enable)
+diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
+index 0c75810812a0a..1f8a3c0ddfe11 100644
+--- a/drivers/mmc/host/dw_mmc-exynos.c
++++ b/drivers/mmc/host/dw_mmc-exynos.c
+@@ -464,6 +464,18 @@ static s8 dw_mci_exynos_get_best_clksmpl(u8 candiates)
+ 		}
+ 	}
+ 
++	/*
++	 * If there is no candidate value, return -EIO. If there are
++	 * candidate values but the best clock sample value was not found,
++	 * use the first candidate clock sample value.
++	 */
++	for (i = 0; i < iter; i++) {
++		__c = ror8(candiates, i);
++		if ((__c & 0x1) == 0x1) {
++			loc = i;
++			goto out;
++		}
++	}
+ out:
+ 	return loc;
+ }
+@@ -494,6 +506,8 @@ static int dw_mci_exynos_execute_tuning(struct dw_mci_slot *slot, u32 opcode)
+ 		priv->tuned_sample = found;
+ 	} else {
+ 		ret = -EIO;
++		dev_warn(&mmc->class_dev,
++			"There are no candidate values for clksmpl!\n");
+ 	}
+ 
+ 	return ret;
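
The added fallback rotates the candidate bitmap right one bit at a time and settles for the first set bit when no best clock sample was found, returning -EIO only when the bitmap is empty. A standalone sketch of that scan (ror8() re-implemented since linux/bitops.h is kernel-only):

#include <stdint.h>
#include <stdio.h>

static uint8_t ror8(uint8_t v, unsigned int n)
{
	n &= 7;
	return (uint8_t)((v >> n) | (v << (8 - n)));
}

/* Return the first set bit position, or -1 if no candidate exists. */
static int first_candidate(uint8_t candidates)
{
	for (unsigned int i = 0; i < 8; i++)
		if (ror8(candidates, i) & 0x1)
			return (int)i;
	return -1;	/* caller maps this to -EIO */
}

int main(void)
{
	printf("%d\n", first_candidate(0x28));	/* bits 3 and 5 set -> 3 */
	return 0;
}
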
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 4dfc246c5f95d..b06b4dcb7c782 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2577,6 +2577,25 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 		host->dma_mask = DMA_BIT_MASK(32);
+ 	mmc_dev(mmc)->dma_mask = &host->dma_mask;
+ 
++	host->timeout_clks = 3 * 1048576;
++	host->dma.gpd = dma_alloc_coherent(&pdev->dev,
++				2 * sizeof(struct mt_gpdma_desc),
++				&host->dma.gpd_addr, GFP_KERNEL);
++	host->dma.bd = dma_alloc_coherent(&pdev->dev,
++				MAX_BD_NUM * sizeof(struct mt_bdma_desc),
++				&host->dma.bd_addr, GFP_KERNEL);
++	if (!host->dma.gpd || !host->dma.bd) {
++		ret = -ENOMEM;
++		goto release_mem;
++	}
++	msdc_init_gpd_bd(host, &host->dma);
++	INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
++	spin_lock_init(&host->lock);
++
++	platform_set_drvdata(pdev, mmc);
++	msdc_ungate_clock(host);
++	msdc_init_hw(host);
++
+ 	if (mmc->caps2 & MMC_CAP2_CQE) {
+ 		host->cq_host = devm_kzalloc(mmc->parent,
+ 					     sizeof(*host->cq_host),
+@@ -2597,25 +2616,6 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 		mmc->max_seg_size = 64 * 1024;
+ 	}
+ 
+-	host->timeout_clks = 3 * 1048576;
+-	host->dma.gpd = dma_alloc_coherent(&pdev->dev,
+-				2 * sizeof(struct mt_gpdma_desc),
+-				&host->dma.gpd_addr, GFP_KERNEL);
+-	host->dma.bd = dma_alloc_coherent(&pdev->dev,
+-				MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+-				&host->dma.bd_addr, GFP_KERNEL);
+-	if (!host->dma.gpd || !host->dma.bd) {
+-		ret = -ENOMEM;
+-		goto release_mem;
+-	}
+-	msdc_init_gpd_bd(host, &host->dma);
+-	INIT_DELAYED_WORK(&host->req_timeout, msdc_request_timeout);
+-	spin_lock_init(&host->lock);
+-
+-	platform_set_drvdata(pdev, mmc);
+-	msdc_ungate_clock(host);
+-	msdc_init_hw(host);
+-
+ 	ret = devm_request_irq(&pdev->dev, host->irq, msdc_irq,
+ 			       IRQF_TRIGGER_NONE, pdev->name, host);
+ 	if (ret)
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 72c0bf0c18875..812c1f42a5eaf 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -1133,6 +1133,7 @@ static void esdhc_reset_tuning(struct sdhci_host *host)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
+ 	u32 ctrl;
++	int ret;
+ 
+ 	/* Reset the tuning circuit */
+ 	if (esdhc_is_usdhc(imx_data)) {
+@@ -1145,7 +1146,22 @@ static void esdhc_reset_tuning(struct sdhci_host *host)
+ 		} else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
+ 			ctrl = readl(host->ioaddr + SDHCI_AUTO_CMD_STATUS);
+ 			ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL;
++			ctrl &= ~ESDHC_MIX_CTRL_EXE_TUNE;
+ 			writel(ctrl, host->ioaddr + SDHCI_AUTO_CMD_STATUS);
++			/* Make sure ESDHC_MIX_CTRL_EXE_TUNE is cleared */
++			ret = readl_poll_timeout(host->ioaddr + SDHCI_AUTO_CMD_STATUS,
++				ctrl, !(ctrl & ESDHC_MIX_CTRL_EXE_TUNE), 1, 50);
++			if (ret == -ETIMEDOUT)
++				dev_warn(mmc_dev(host->mmc),
++				 "Warning! Failed to clear the execute tuning bit\n");
++			/*
++			 * SDHCI_INT_DATA_AVAIL is a W1C bit; setting it clears the
++			 * usdhc IP's internal execute_tuning_with_clr_buf logic flag,
++			 * which finally makes sure the normal data transfer logic is correct.
++			 */
++			ctrl = readl(host->ioaddr + SDHCI_INT_STATUS);
++			ctrl |= SDHCI_INT_DATA_AVAIL;
++			writel(ctrl, host->ioaddr + SDHCI_INT_STATUS);
+ 		}
+ 	}
+ }
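
readl_poll_timeout() above bounds the wait for ESDHC_MIX_CTRL_EXE_TUNE to clear (1 us polls, 50 us budget) instead of assuming the write lands instantly. A userspace approximation of that poll-with-deadline pattern, with the register read and the hardware's delayed clear simulated:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint32_t fake_reg = 0x4;	/* pretend bit 2 is the busy flag */

static uint32_t read_reg(void)
{
	static int reads;

	if (++reads > 3)	/* hardware clears the bit after a few reads */
		fake_reg &= ~0x4u;
	return fake_reg;
}

static int64_t now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

/* Poll until (reg & mask) == 0 or timeout_us elapses; -1 on timeout. */
static int poll_bit_clear(uint32_t mask, int64_t timeout_us)
{
	int64_t deadline = now_us() + timeout_us;

	for (;;) {
		if (!(read_reg() & mask))
			return 0;
		if (now_us() > deadline)
			return -1;	/* the kernel returns -ETIMEDOUT here */
	}
}

int main(void)
{
	printf("%s\n", poll_bit_clear(0x4, 50) ? "timed out" : "cleared");
	return 0;
}
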
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index be19785227fe4..d0f2edfe296c8 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -616,16 +616,12 @@ static int intel_select_drive_strength(struct mmc_card *card,
+ 	return intel_host->drv_strength;
+ }
+ 
+-static int bxt_get_cd(struct mmc_host *mmc)
++static int sdhci_get_cd_nogpio(struct mmc_host *mmc)
+ {
+-	int gpio_cd = mmc_gpio_get_cd(mmc);
+ 	struct sdhci_host *host = mmc_priv(mmc);
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+-	if (!gpio_cd)
+-		return 0;
+-
+ 	spin_lock_irqsave(&host->lock, flags);
+ 
+ 	if (host->flags & SDHCI_DEVICE_DEAD)
+@@ -638,6 +634,21 @@ out:
+ 	return ret;
+ }
+ 
++static int bxt_get_cd(struct mmc_host *mmc)
++{
++	int gpio_cd = mmc_gpio_get_cd(mmc);
++
++	if (!gpio_cd)
++		return 0;
++
++	return sdhci_get_cd_nogpio(mmc);
++}
++
++static int mrfld_get_cd(struct mmc_host *mmc)
++{
++	return sdhci_get_cd_nogpio(mmc);
++}
++
+ #define SDHCI_INTEL_PWR_TIMEOUT_CNT	20
+ #define SDHCI_INTEL_PWR_TIMEOUT_UDELAY	100
+ 
+@@ -1341,6 +1352,14 @@ static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
+ 					 MMC_CAP_1_8V_DDR;
+ 		break;
+ 	case INTEL_MRFLD_SD:
++		slot->cd_idx = 0;
++		slot->cd_override_level = true;
++		/*
++		 * There are two PCB designs of SD card slot with the opposite
++		 * card detection sense. Quirk this out by ignoring GPIO state
++		 * completely in the custom ->get_cd() callback.
++		 */
++		slot->host->mmc_host_ops.get_cd = mrfld_get_cd;
+ 		slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
+ 		break;
+ 	case INTEL_MRFLD_SDIO:
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index fff6c39a343e9..c5287be9bbed4 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2042,6 +2042,12 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
+ 			break;
+ 		case MMC_VDD_32_33:
+ 		case MMC_VDD_33_34:
++		/*
++		 * 3.4 ~ 3.6V are valid only for those platforms where it's
++		 * known that the voltage range is supported by hardware.
++		 */
++		case MMC_VDD_34_35:
++		case MMC_VDD_35_36:
+ 			pwr = SDHCI_POWER_330;
+ 			break;
+ 		default:
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index 7dfc26f48c18f..e2affa52ef469 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -195,6 +195,10 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host)
+ 	sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
+ 	host->sdcard_irq_mask = host->sdcard_irq_mask_all;
+ 
++	if (host->native_hotplug)
++		tmio_mmc_enable_mmc_irqs(host,
++				TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
++
+ 	tmio_mmc_set_bus_width(host, host->mmc->ios.bus_width);
+ 
+ 	if (host->pdata->flags & TMIO_MMC_SDIO_IRQ) {
+@@ -956,8 +960,15 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	case MMC_POWER_OFF:
+ 		tmio_mmc_power_off(host);
+ 		/* For R-Car Gen2+, we need to reset SDHI specific SCC */
+-		if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++		if (host->pdata->flags & TMIO_MMC_MIN_RCAR2) {
+ 			host->reset(host);
++
++			if (host->native_hotplug)
++				tmio_mmc_enable_mmc_irqs(host,
++						TMIO_STAT_CARD_REMOVE |
++						TMIO_STAT_CARD_INSERT);
++		}
++
+ 		host->set_clock(host, 0);
+ 		break;
+ 	case MMC_POWER_UP:
+@@ -1185,10 +1196,6 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
+ 	_host->set_clock(_host, 0);
+ 	tmio_mmc_reset(_host);
+ 
+-	if (_host->native_hotplug)
+-		tmio_mmc_enable_mmc_irqs(_host,
+-				TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
+-
+ 	spin_lock_init(&_host->lock);
+ 	mutex_init(&_host->ios_lock);
+ 
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 4950d10d3a191..97beece62fec4 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -576,7 +576,7 @@ static void check_vub300_port_status(struct vub300_mmc_host *vub300)
+ 				GET_SYSTEM_PORT_STATUS,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+-				sizeof(vub300->system_port_status), HZ);
++				sizeof(vub300->system_port_status), 1000);
+ 	if (sizeof(vub300->system_port_status) == retval)
+ 		new_system_port_status(vub300);
+ }
+@@ -1241,7 +1241,7 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
+ 						SET_INTERRUPT_PSEUDOCODE,
+ 						USB_DIR_OUT | USB_TYPE_VENDOR |
+ 						USB_RECIP_DEVICE, 0x0000, 0x0000,
+-						xfer_buffer, xfer_length, HZ);
++						xfer_buffer, xfer_length, 1000);
+ 			kfree(xfer_buffer);
+ 			if (retval < 0)
+ 				goto copy_error_message;
+@@ -1284,7 +1284,7 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
+ 						SET_TRANSFER_PSEUDOCODE,
+ 						USB_DIR_OUT | USB_TYPE_VENDOR |
+ 						USB_RECIP_DEVICE, 0x0000, 0x0000,
+-						xfer_buffer, xfer_length, HZ);
++						xfer_buffer, xfer_length, 1000);
+ 			kfree(xfer_buffer);
+ 			if (retval < 0)
+ 				goto copy_error_message;
+@@ -1991,7 +1991,7 @@ static void __set_clock_speed(struct vub300_mmc_host *vub300, u8 buf[8],
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_CLOCK_SPEED,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x00, 0x00, buf, buf_array_size, HZ);
++				0x00, 0x00, buf, buf_array_size, 1000);
+ 	if (retval != 8) {
+ 		dev_err(&vub300->udev->dev, "SET_CLOCK_SPEED"
+ 			" %dkHz failed with retval=%d\n", kHzClock, retval);
+@@ -2013,14 +2013,14 @@ static void vub300_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_SD_POWER,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x0000, 0x0000, NULL, 0, HZ);
++				0x0000, 0x0000, NULL, 0, 1000);
+ 		/* must wait for the VUB300 u-proc to boot up */
+ 		msleep(600);
+ 	} else if ((ios->power_mode == MMC_POWER_UP) && !vub300->card_powered) {
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_SD_POWER,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				0x0001, 0x0000, NULL, 0, HZ);
++				0x0001, 0x0000, NULL, 0, 1000);
+ 		msleep(600);
+ 		vub300->card_powered = 1;
+ 	} else if (ios->power_mode == MMC_POWER_ON) {
+@@ -2275,14 +2275,14 @@ static int vub300_probe(struct usb_interface *interface,
+ 				GET_HC_INF0,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->hc_info,
+-				sizeof(vub300->hc_info), HZ);
++				sizeof(vub300->hc_info), 1000);
+ 	if (retval < 0)
+ 		goto error5;
+ 	retval =
+ 		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_ROM_WAIT_STATES,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
++				firmware_rom_wait_states, 0x0000, NULL, 0, 1000);
+ 	if (retval < 0)
+ 		goto error5;
+ 	dev_info(&vub300->udev->dev,
+@@ -2297,7 +2297,7 @@ static int vub300_probe(struct usb_interface *interface,
+ 				GET_SYSTEM_PORT_STATUS,
+ 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+-				sizeof(vub300->system_port_status), HZ);
++				sizeof(vub300->system_port_status), 1000);
+ 	if (retval < 0) {
+ 		goto error4;
+ 	} else if (sizeof(vub300->system_port_status) == retval) {
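
Every usb_control_msg() call above swaps HZ for 1000 because the final argument is a timeout in milliseconds, not jiffies: on a CONFIG_HZ=100 kernel the old code waited only 100 ms. Encoding the unit in a named constant makes the mix-up hard to repeat (USB_CTRL_TIMEOUT_MS is a made-up name):

#include <stdio.h>

#define HZ 100	/* jiffies per second on a CONFIG_HZ=100 kernel */
#define USB_CTRL_TIMEOUT_MS 1000	/* hypothetical named constant */

int main(void)
{
	/* Passing HZ where milliseconds are expected shrinks the wait. */
	printf("old timeout: %d ms (intended 1000)\n", HZ);
	printf("new timeout: %d ms\n", USB_CTRL_TIMEOUT_MS);
	return 0;
}
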
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index dc5cce127d8ea..89b04703aacac 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -568,6 +568,7 @@ struct hnae3_ae_ops {
+ 			       u32 *auto_neg, u32 *rx_en, u32 *tx_en);
+ 	int (*set_pauseparam)(struct hnae3_handle *handle,
+ 			      u32 auto_neg, u32 rx_en, u32 tx_en);
++	int (*restore_pauseparam)(struct hnae3_handle *handle);
+ 
+ 	int (*set_autoneg)(struct hnae3_handle *handle, bool enable);
+ 	int (*get_autoneg)(struct hnae3_handle *handle);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index 80461ab0ce9e7..b22b8baec54c0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -138,7 +138,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
+ 		.name = "uc",
+ 		.cmd = HNAE3_DBG_CMD_MAC_UC,
+ 		.dentry = HNS3_DBG_DENTRY_MAC,
+-		.buf_len = HNS3_DBG_READ_LEN,
++		.buf_len = HNS3_DBG_READ_LEN_128KB,
+ 		.init = hns3_dbg_common_file_init,
+ 	},
+ 	{
+@@ -257,7 +257,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
+ 		.name = "tqp",
+ 		.cmd = HNAE3_DBG_CMD_REG_TQP,
+ 		.dentry = HNS3_DBG_DENTRY_REG,
+-		.buf_len = HNS3_DBG_READ_LEN,
++		.buf_len = HNS3_DBG_READ_LEN_128KB,
+ 		.init = hns3_dbg_common_file_init,
+ 	},
+ 	{
+@@ -299,7 +299,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
+ 		.name = "fd_tcam",
+ 		.cmd = HNAE3_DBG_CMD_FD_TCAM,
+ 		.dentry = HNS3_DBG_DENTRY_FD,
+-		.buf_len = HNS3_DBG_READ_LEN,
++		.buf_len = HNS3_DBG_READ_LEN_1MB,
+ 		.init = hns3_dbg_common_file_init,
+ 	},
+ 	{
+@@ -463,7 +463,7 @@ static const struct hns3_dbg_item rx_queue_info_items[] = {
+ 	{ "TAIL", 2 },
+ 	{ "HEAD", 2 },
+ 	{ "FBDNUM", 2 },
+-	{ "PKTNUM", 2 },
++	{ "PKTNUM", 5 },
+ 	{ "COPYBREAK", 2 },
+ 	{ "RING_EN", 2 },
+ 	{ "RX_RING_EN", 2 },
+@@ -566,7 +566,7 @@ static const struct hns3_dbg_item tx_queue_info_items[] = {
+ 	{ "HEAD", 2 },
+ 	{ "FBDNUM", 2 },
+ 	{ "OFFSET", 2 },
+-	{ "PKTNUM", 2 },
++	{ "PKTNUM", 5 },
+ 	{ "RING_EN", 2 },
+ 	{ "TX_RING_EN", 2 },
+ 	{ "BASE_ADDR", 10 },
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 83ee0f41322c7..4e0cec9025e85 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -838,6 +838,26 @@ static int hns3_check_ksettings_param(const struct net_device *netdev,
+ 	return 0;
+ }
+ 
++static int hns3_set_phy_link_ksettings(struct net_device *netdev,
++				       const struct ethtool_link_ksettings *cmd)
++{
++	struct hnae3_handle *handle = hns3_get_handle(netdev);
++	const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
++	int ret;
++
++	if (cmd->base.speed == SPEED_1000 &&
++	    cmd->base.autoneg == AUTONEG_DISABLE)
++		return -EINVAL;
++
++	if (cmd->base.autoneg == AUTONEG_DISABLE && ops->restore_pauseparam) {
++		ret = ops->restore_pauseparam(handle);
++		if (ret)
++			return ret;
++	}
++
++	return phy_ethtool_ksettings_set(netdev->phydev, cmd);
++}
++
+ static int hns3_set_link_ksettings(struct net_device *netdev,
+ 				   const struct ethtool_link_ksettings *cmd)
+ {
+@@ -856,16 +876,11 @@ static int hns3_set_link_ksettings(struct net_device *netdev,
+ 		  cmd->base.autoneg, cmd->base.speed, cmd->base.duplex);
+ 
+ 	/* Only support ksettings_set for netdev with phy attached for now */
+-	if (netdev->phydev) {
+-		if (cmd->base.speed == SPEED_1000 &&
+-		    cmd->base.autoneg == AUTONEG_DISABLE)
+-			return -EINVAL;
+-
+-		return phy_ethtool_ksettings_set(netdev->phydev, cmd);
+-	} else if (test_bit(HNAE3_DEV_SUPPORT_PHY_IMP_B, ae_dev->caps) &&
+-		   ops->set_phy_link_ksettings) {
++	if (netdev->phydev)
++		return hns3_set_phy_link_ksettings(netdev, cmd);
++	else if (test_bit(HNAE3_DEV_SUPPORT_PHY_IMP_B, ae_dev->caps) &&
++		 ops->set_phy_link_ksettings)
+ 		return ops->set_phy_link_ksettings(handle, cmd);
+-	}
+ 
+ 	if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2)
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+index e6e617aba2a4c..04e7c8d469696 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+@@ -391,7 +391,7 @@ static int hclge_dbg_dump_mac(struct hclge_dev *hdev, char *buf, int len)
+ static int hclge_dbg_dump_dcb_qset(struct hclge_dev *hdev, char *buf, int len,
+ 				   int *pos)
+ {
+-	struct hclge_dbg_bitmap_cmd *bitmap;
++	struct hclge_dbg_bitmap_cmd req;
+ 	struct hclge_desc desc;
+ 	u16 qset_id, qset_num;
+ 	int ret;
+@@ -408,12 +408,12 @@ static int hclge_dbg_dump_dcb_qset(struct hclge_dev *hdev, char *buf, int len,
+ 		if (ret)
+ 			return ret;
+ 
+-		bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
++		req.bitmap = (u8)le32_to_cpu(desc.data[1]);
+ 
+ 		*pos += scnprintf(buf + *pos, len - *pos,
+ 				  "%04u           %#x            %#x             %#x               %#x\n",
+-				  qset_id, bitmap->bit0, bitmap->bit1,
+-				  bitmap->bit2, bitmap->bit3);
++				  qset_id, req.bit0, req.bit1, req.bit2,
++				  req.bit3);
+ 	}
+ 
+ 	return 0;
+@@ -422,7 +422,7 @@ static int hclge_dbg_dump_dcb_qset(struct hclge_dev *hdev, char *buf, int len,
+ static int hclge_dbg_dump_dcb_pri(struct hclge_dev *hdev, char *buf, int len,
+ 				  int *pos)
+ {
+-	struct hclge_dbg_bitmap_cmd *bitmap;
++	struct hclge_dbg_bitmap_cmd req;
+ 	struct hclge_desc desc;
+ 	u8 pri_id, pri_num;
+ 	int ret;
+@@ -439,12 +439,11 @@ static int hclge_dbg_dump_dcb_pri(struct hclge_dev *hdev, char *buf, int len,
+ 		if (ret)
+ 			return ret;
+ 
+-		bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
++		req.bitmap = (u8)le32_to_cpu(desc.data[1]);
+ 
+ 		*pos += scnprintf(buf + *pos, len - *pos,
+ 				  "%03u       %#x           %#x                %#x\n",
+-				  pri_id, bitmap->bit0, bitmap->bit1,
+-				  bitmap->bit2);
++				  pri_id, req.bit0, req.bit1, req.bit2);
+ 	}
+ 
+ 	return 0;
+@@ -453,7 +452,7 @@ static int hclge_dbg_dump_dcb_pri(struct hclge_dev *hdev, char *buf, int len,
+ static int hclge_dbg_dump_dcb_pg(struct hclge_dev *hdev, char *buf, int len,
+ 				 int *pos)
+ {
+-	struct hclge_dbg_bitmap_cmd *bitmap;
++	struct hclge_dbg_bitmap_cmd req;
+ 	struct hclge_desc desc;
+ 	u8 pg_id;
+ 	int ret;
+@@ -466,12 +465,11 @@ static int hclge_dbg_dump_dcb_pg(struct hclge_dev *hdev, char *buf, int len,
+ 		if (ret)
+ 			return ret;
+ 
+-		bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
++		req.bitmap = (u8)le32_to_cpu(desc.data[1]);
+ 
+ 		*pos += scnprintf(buf + *pos, len - *pos,
+ 				  "%03u      %#x           %#x               %#x\n",
+-				  pg_id, bitmap->bit0, bitmap->bit1,
+-				  bitmap->bit2);
++				  pg_id, req.bit0, req.bit1, req.bit2);
+ 	}
+ 
+ 	return 0;
+@@ -511,7 +509,7 @@ static int hclge_dbg_dump_dcb_queue(struct hclge_dev *hdev, char *buf, int len,
+ static int hclge_dbg_dump_dcb_port(struct hclge_dev *hdev, char *buf, int len,
+ 				   int *pos)
+ {
+-	struct hclge_dbg_bitmap_cmd *bitmap;
++	struct hclge_dbg_bitmap_cmd req;
+ 	struct hclge_desc desc;
+ 	u8 port_id = 0;
+ 	int ret;
+@@ -521,12 +519,12 @@ static int hclge_dbg_dump_dcb_port(struct hclge_dev *hdev, char *buf, int len,
+ 	if (ret)
+ 		return ret;
+ 
+-	bitmap = (struct hclge_dbg_bitmap_cmd *)&desc.data[1];
++	req.bitmap = (u8)le32_to_cpu(desc.data[1]);
+ 
+ 	*pos += scnprintf(buf + *pos, len - *pos, "port_mask: %#x\n",
+-			 bitmap->bit0);
++			 req.bit0);
+ 	*pos += scnprintf(buf + *pos, len - *pos, "port_shaping_pass: %#x\n",
+-			 bitmap->bit1);
++			 req.bit1);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index be46b164b0e2c..721eb4e92f618 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -11014,6 +11014,35 @@ static int hclge_set_pauseparam(struct hnae3_handle *handle, u32 auto_neg,
+ 	return -EOPNOTSUPP;
+ }
+ 
++static int hclge_restore_pauseparam(struct hnae3_handle *handle)
++{
++	struct hclge_vport *vport = hclge_get_vport(handle);
++	struct hclge_dev *hdev = vport->back;
++	u32 auto_neg, rx_pause, tx_pause;
++	int ret;
++
++	hclge_get_pauseparam(handle, &auto_neg, &rx_pause, &tx_pause);
++	/* When autoneg is disabled, the PHY's pause setting has no effect
++	 * unless the link goes down.
++	 */
++	ret = phy_suspend(hdev->hw.mac.phydev);
++	if (ret)
++		return ret;
++
++	phy_set_asym_pause(hdev->hw.mac.phydev, rx_pause, tx_pause);
++
++	ret = phy_resume(hdev->hw.mac.phydev);
++	if (ret)
++		return ret;
++
++	ret = hclge_mac_pause_setup_hw(hdev);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"restore pauseparam error, ret = %d.\n", ret);
++
++	return ret;
++}
++
+ static void hclge_get_ksettings_an_result(struct hnae3_handle *handle,
+ 					  u8 *auto_neg, u32 *speed, u8 *duplex)
+ {
+@@ -12943,6 +12972,7 @@ static const struct hnae3_ae_ops hclge_ops = {
+ 	.halt_autoneg = hclge_halt_autoneg,
+ 	.get_pauseparam = hclge_get_pauseparam,
+ 	.set_pauseparam = hclge_set_pauseparam,
++	.restore_pauseparam = hclge_restore_pauseparam,
+ 	.set_mtu = hclge_set_mtu,
+ 	.reset_queue = hclge_reset_tqp,
+ 	.get_stats = hclge_get_stats,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 95074e91a8466..124791e4bfeed 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1435,7 +1435,7 @@ static int hclge_bp_setup_hw(struct hclge_dev *hdev, u8 tc)
+ 	return 0;
+ }
+ 
+-static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
++int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
+ {
+ 	bool tx_en, rx_en;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+index 2ee9b795f71dc..4b2c3a7889800 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+@@ -244,6 +244,7 @@ int hclge_tm_get_pri_weight(struct hclge_dev *hdev, u8 pri_id, u8 *weight);
+ int hclge_tm_get_pri_shaper(struct hclge_dev *hdev, u8 pri_id,
+ 			    enum hclge_opcode_type cmd,
+ 			    struct hclge_tm_shaper_para *para);
++int hclge_mac_pause_setup_hw(struct hclge_dev *hdev);
+ int hclge_tm_get_q_to_qs_map(struct hclge_dev *hdev, u16 q_id, u16 *qset_id);
+ int hclge_tm_get_q_to_tc(struct hclge_dev *hdev, u16 q_id, u8 *tc_id);
+ int hclge_tm_get_pg_to_pri_map(struct hclge_dev *hdev, u8 pg_id,
+diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
+index 37c18c66b5c72..e375ac849aecd 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lag.c
++++ b/drivers/net/ethernet/intel/ice/ice_lag.c
+@@ -100,9 +100,9 @@ static void ice_display_lag_info(struct ice_lag *lag)
+  */
+ static void ice_lag_info_event(struct ice_lag *lag, void *ptr)
+ {
+-	struct net_device *event_netdev, *netdev_tmp;
+ 	struct netdev_notifier_bonding_info *info;
+ 	struct netdev_bonding_info *bonding_info;
++	struct net_device *event_netdev;
+ 	const char *lag_netdev_name;
+ 
+ 	event_netdev = netdev_notifier_info_to_dev(ptr);
+@@ -123,19 +123,6 @@ static void ice_lag_info_event(struct ice_lag *lag, void *ptr)
+ 		goto lag_out;
+ 	}
+ 
+-	rcu_read_lock();
+-	for_each_netdev_in_bond_rcu(lag->upper_netdev, netdev_tmp) {
+-		if (!netif_is_ice(netdev_tmp))
+-			continue;
+-
+-		if (netdev_tmp && netdev_tmp != lag->netdev &&
+-		    lag->peer_netdev != netdev_tmp) {
+-			dev_hold(netdev_tmp);
+-			lag->peer_netdev = netdev_tmp;
+-		}
+-	}
+-	rcu_read_unlock();
+-
+ 	if (bonding_info->slave.state)
+ 		ice_lag_set_backup(lag);
+ 	else
+@@ -319,6 +306,9 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event,
+ 	case NETDEV_BONDING_INFO:
+ 		ice_lag_info_event(lag, ptr);
+ 		break;
++	case NETDEV_UNREGISTER:
++		ice_lag_unlink(lag, ptr);
++		break;
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index c2465b9d80567..545813657c939 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1582,6 +1582,9 @@ err_kworker:
+  */
+ void ice_ptp_release(struct ice_pf *pf)
+ {
++	if (!test_bit(ICE_FLAG_PTP, pf->flags))
++		return;
++
+ 	/* Disable timestamping for both Tx and Rx */
+ 	ice_ptp_cfg_timestamp(pf, false);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index 9b2dfbf90e510..a606de56678d4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -229,18 +229,85 @@ static const struct file_operations rvu_dbg_##name##_fops = { \
+ 
+ static void print_nix_qsize(struct seq_file *filp, struct rvu_pfvf *pfvf);
+ 
++static void get_lf_str_list(struct rvu_block block, int pcifunc,
++			    char *lfs)
++{
++	int lf = 0, seq = 0, len = 0, prev_lf = block.lf.max;
++
++	for_each_set_bit(lf, block.lf.bmap, block.lf.max) {
++		if (lf >= block.lf.max)
++			break;
++
++		if (block.fn_map[lf] != pcifunc)
++			continue;
++
++		if (lf == prev_lf + 1) {
++			prev_lf = lf;
++			seq = 1;
++			continue;
++		}
++
++		if (seq)
++			len += sprintf(lfs + len, "-%d,%d", prev_lf, lf);
++		else
++			len += (len ? sprintf(lfs + len, ",%d", lf) :
++				      sprintf(lfs + len, "%d", lf));
++
++		prev_lf = lf;
++		seq = 0;
++	}
++
++	if (seq)
++		len += sprintf(lfs + len, "-%d", prev_lf);
++
++	lfs[len] = '\0';
++}
++
++static int get_max_column_width(struct rvu *rvu)
++{
++	int index, pf, vf, lf_str_size = 12, buf_size = 256;
++	struct rvu_block block;
++	u16 pcifunc;
++	char *buf;
++
++	buf = kzalloc(buf_size, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
++		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
++			pcifunc = pf << 10 | vf;
++			if (!pcifunc)
++				continue;
++
++			for (index = 0; index < BLK_COUNT; index++) {
++				block = rvu->hw->block[index];
++				if (!strlen(block.name))
++					continue;
++
++				get_lf_str_list(block, pcifunc, buf);
++				if (lf_str_size <= strlen(buf))
++					lf_str_size = strlen(buf) + 1;
++			}
++		}
++	}
++
++	kfree(buf);
++	return lf_str_size;
++}
++
+ /* Dumps current provisioning status of all RVU block LFs */
+ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 					  char __user *buffer,
+ 					  size_t count, loff_t *ppos)
+ {
+-	int index, off = 0, flag = 0, go_back = 0, len = 0;
++	int index, off = 0, flag = 0, len = 0, i = 0;
+ 	struct rvu *rvu = filp->private_data;
+-	int lf, pf, vf, pcifunc;
++	int bytes_not_copied = 0;
+ 	struct rvu_block block;
+-	int bytes_not_copied;
+-	int lf_str_size = 12;
++	int pf, vf, pcifunc;
+ 	int buf_size = 2048;
++	int lf_str_size;
+ 	char *lfs;
+ 	char *buf;
+ 
+@@ -252,6 +319,9 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 	if (!buf)
+ 		return -ENOSPC;
+ 
++	/* Get the maximum width of a column */
++	lf_str_size = get_max_column_width(rvu);
++
+ 	lfs = kzalloc(lf_str_size, GFP_KERNEL);
+ 	if (!lfs) {
+ 		kfree(buf);
+@@ -265,65 +335,69 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 					 "%-*s", lf_str_size,
+ 					 rvu->hw->block[index].name);
+ 		}
++
+ 	off += scnprintf(&buf[off], buf_size - 1 - off, "\n");
++	bytes_not_copied = copy_to_user(buffer + (i * off), buf, off);
++	if (bytes_not_copied)
++		goto out;
++
++	i++;
++	*ppos += off;
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ 		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
++			off = 0;
++			flag = 0;
+ 			pcifunc = pf << 10 | vf;
+ 			if (!pcifunc)
+ 				continue;
+ 
+ 			if (vf) {
+ 				sprintf(lfs, "PF%d:VF%d", pf, vf - 1);
+-				go_back = scnprintf(&buf[off],
+-						    buf_size - 1 - off,
+-						    "%-*s", lf_str_size, lfs);
++				off = scnprintf(&buf[off],
++						buf_size - 1 - off,
++						"%-*s", lf_str_size, lfs);
+ 			} else {
+ 				sprintf(lfs, "PF%d", pf);
+-				go_back = scnprintf(&buf[off],
+-						    buf_size - 1 - off,
+-						    "%-*s", lf_str_size, lfs);
++				off = scnprintf(&buf[off],
++						buf_size - 1 - off,
++						"%-*s", lf_str_size, lfs);
+ 			}
+ 
+-			off += go_back;
+-			for (index = 0; index < BLKTYPE_MAX; index++) {
++			for (index = 0; index < BLK_COUNT; index++) {
+ 				block = rvu->hw->block[index];
+ 				if (!strlen(block.name))
+ 					continue;
+ 				len = 0;
+ 				lfs[len] = '\0';
+-				for (lf = 0; lf < block.lf.max; lf++) {
+-					if (block.fn_map[lf] != pcifunc)
+-						continue;
++				get_lf_str_list(block, pcifunc, lfs);
++				if (strlen(lfs))
+ 					flag = 1;
+-					len += sprintf(&lfs[len], "%d,", lf);
+-				}
+ 
+-				if (flag)
+-					len--;
+-				lfs[len] = '\0';
+ 				off += scnprintf(&buf[off], buf_size - 1 - off,
+ 						 "%-*s", lf_str_size, lfs);
+-				if (!strlen(lfs))
+-					go_back += lf_str_size;
+ 			}
+-			if (!flag)
+-				off -= go_back;
+-			else
+-				flag = 0;
+-			off--;
+-			off +=	scnprintf(&buf[off], buf_size - 1 - off, "\n");
++			if (flag) {
++				off +=	scnprintf(&buf[off],
++						  buf_size - 1 - off, "\n");
++				bytes_not_copied = copy_to_user(buffer +
++								(i * off),
++								buf, off);
++				if (bytes_not_copied)
++					goto out;
++
++				i++;
++				*ppos += off;
++			}
+ 		}
+ 	}
+ 
+-	bytes_not_copied = copy_to_user(buffer, buf, off);
++out:
+ 	kfree(lfs);
+ 	kfree(buf);
+-
+ 	if (bytes_not_copied)
+ 		return -EFAULT;
+ 
+-	*ppos = off;
+-	return off;
++	return *ppos;
+ }
+ 
+ RVU_DEBUG_FOPS(rsrc_status, rsrc_attach_status, NULL);
+@@ -507,7 +581,7 @@ static ssize_t rvu_dbg_qsize_write(struct file *filp,
+ 	if (cmd_buf)
+ 		ret = -EINVAL;
+ 
+-	if (!strncmp(subtoken, "help", 4) || ret < 0) {
++	if (ret < 0 || !strncmp(subtoken, "help", 4)) {
+ 		dev_info(rvu->dev, "Use echo <%s-lf > qsize\n", blk_string);
+ 		goto qsize_write_done;
+ 	}
+@@ -1722,6 +1796,10 @@ static int rvu_dbg_nix_band_prof_ctx_display(struct seq_file *m, void *unused)
+ 	u16 pcifunc;
+ 	char *str;
+ 
++	/* Ingress policers do not exist on all platforms */
++	if (!nix_hw->ipolicer)
++		return 0;
++
+ 	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
+ 		if (layer == BAND_PROF_INVAL_LAYER)
+ 			continue;
+@@ -1771,6 +1849,10 @@ static int rvu_dbg_nix_band_prof_rsrc_display(struct seq_file *m, void *unused)
+ 	int layer;
+ 	char *str;
+ 
++	/* Ingress policers do not exist on all platforms */
++	if (!nix_hw->ipolicer)
++		return 0;
++
+ 	seq_puts(m, "\nBandwidth profile resource free count\n");
+ 	seq_puts(m, "=====================================\n");
+ 	for (layer = 0; layer < BAND_PROF_NUM_LAYERS; layer++) {
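
get_lf_str_list() renders the LFs owned by a PCI function as a compressed range list ("0-2,5" for bits 0, 1, 2 and 5), and get_max_column_width() sizes the table columns from the longest such string. An equivalent standalone compressor over a plain 64-bit bitmap (snprintf-based so the buffer bound stays explicit):

#include <stdint.h>
#include <stdio.h>

/* Render the set bits of 'bmap' as "0-2,5"-style ranges into buf. */
static void bitmap_to_ranges(uint64_t bmap, char *buf, size_t size)
{
	size_t len = 0;
	int bit = 0;

	buf[0] = '\0';
	while (bit < 64) {
		int start, end;

		if (!(bmap >> bit & 1)) {
			bit++;
			continue;
		}
		start = bit;
		while (bit < 64 && (bmap >> bit & 1))
			bit++;
		end = bit - 1;

		if (start == end)
			len += snprintf(buf + len, size - len,
					"%s%d", len ? "," : "", start);
		else
			len += snprintf(buf + len, size - len,
					"%s%d-%d", len ? "," : "", start, end);
		if (len >= size)	/* truncated; stop appending */
			break;
	}
}

int main(void)
{
	char buf[64];

	bitmap_to_ranges(0x27, buf, sizeof(buf));	/* bits 0,1,2,5 */
	printf("%s\n", buf);	/* prints 0-2,5 */
	return 0;
}
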
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 87af164951eae..05b4149f79a5c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -2146,6 +2146,9 @@ static void nix_free_tx_vtag_entries(struct rvu *rvu, u16 pcifunc)
+ 		return;
+ 
+ 	nix_hw = get_nix_hw(rvu->hw, blkaddr);
++	if (!nix_hw)
++		return;
++
+ 	vlan = &nix_hw->txvlan;
+ 
+ 	mutex_lock(&vlan->rsrc_lock);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+index 13b0259f7ea69..fcace73eae40f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
+@@ -353,13 +353,10 @@ static int mlxsw_pci_rdq_skb_alloc(struct mlxsw_pci *mlxsw_pci,
+ 	struct sk_buff *skb;
+ 	int err;
+ 
+-	elem_info->u.rdq.skb = NULL;
+ 	skb = netdev_alloc_skb_ip_align(NULL, buf_len);
+ 	if (!skb)
+ 		return -ENOMEM;
+ 
+-	/* Assume that wqe was previously zeroed. */
+-
+ 	err = mlxsw_pci_wqe_frag_map(mlxsw_pci, wqe, 0, skb->data,
+ 				     buf_len, DMA_FROM_DEVICE);
+ 	if (err)
+@@ -597,21 +594,26 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+ 	struct pci_dev *pdev = mlxsw_pci->pdev;
+ 	struct mlxsw_pci_queue_elem_info *elem_info;
+ 	struct mlxsw_rx_info rx_info = {};
+-	char *wqe;
++	char wqe[MLXSW_PCI_WQE_SIZE];
+ 	struct sk_buff *skb;
+ 	u16 byte_count;
+ 	int err;
+ 
+ 	elem_info = mlxsw_pci_queue_elem_info_consumer_get(q);
+-	skb = elem_info->u.sdq.skb;
+-	if (!skb)
+-		return;
+-	wqe = elem_info->elem;
+-	mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
++	skb = elem_info->u.rdq.skb;
++	memcpy(wqe, elem_info->elem, MLXSW_PCI_WQE_SIZE);
+ 
+ 	if (q->consumer_counter++ != consumer_counter_limit)
+ 		dev_dbg_ratelimited(&pdev->dev, "Consumer counter does not match limit in RDQ\n");
+ 
++	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
++	if (err) {
++		dev_err_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
++		goto out;
++	}
++
++	mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, 0, DMA_FROM_DEVICE);
++
+ 	if (mlxsw_pci_cqe_lag_get(cqe_v, cqe)) {
+ 		rx_info.is_lag = true;
+ 		rx_info.u.lag_id = mlxsw_pci_cqe_lag_id_get(cqe_v, cqe);
+@@ -647,10 +649,7 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
+ 	skb_put(skb, byte_count);
+ 	mlxsw_core_skb_receive(mlxsw_pci->core, skb, &rx_info);
+ 
+-	memset(wqe, 0, q->elem_size);
+-	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+-	if (err)
+-		dev_dbg_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
++out:
+ 	/* Everything is set up, ring doorbell to pass elem to HW */
+ 	q->producer_counter++;
+ 	mlxsw_pci_queue_doorbell_producer_ring(mlxsw_pci, q);
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index dae10328c6cf7..d1c19ad4229c1 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1743,6 +1743,16 @@ static int lan743x_tx_ring_init(struct lan743x_tx *tx)
+ 		ret = -EINVAL;
+ 		goto cleanup;
+ 	}
++	if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
++				      DMA_BIT_MASK(64))) {
++		if (dma_set_mask_and_coherent(&tx->adapter->pdev->dev,
++					      DMA_BIT_MASK(32))) {
++			dev_warn(&tx->adapter->pdev->dev,
++				 "lan743x_: No suitable DMA available\n");
++			ret = -ENOMEM;
++			goto cleanup;
++		}
++	}
+ 	ring_allocation_size = ALIGN(tx->ring_size *
+ 				     sizeof(struct lan743x_tx_descriptor),
+ 				     PAGE_SIZE);
+@@ -1934,7 +1944,8 @@ static void lan743x_rx_update_tail(struct lan743x_rx *rx, int index)
+ 				  index);
+ }
+ 
+-static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index)
++static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
++					gfp_t gfp)
+ {
+ 	struct net_device *netdev = rx->adapter->netdev;
+ 	struct device *dev = &rx->adapter->pdev->dev;
+@@ -1948,7 +1959,7 @@ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index)
+ 
+ 	descriptor = &rx->ring_cpu_ptr[index];
+ 	buffer_info = &rx->buffer_info[index];
+-	skb = __netdev_alloc_skb(netdev, buffer_length, GFP_ATOMIC | GFP_DMA);
++	skb = __netdev_alloc_skb(netdev, buffer_length, gfp);
+ 	if (!skb)
+ 		return -ENOMEM;
+ 	dma_ptr = dma_map_single(dev, skb->data, buffer_length, DMA_FROM_DEVICE);
+@@ -2110,7 +2121,8 @@ static int lan743x_rx_process_buffer(struct lan743x_rx *rx)
+ 
+ 	/* save existing skb, allocate new skb and map to dma */
+ 	skb = buffer_info->skb;
+-	if (lan743x_rx_init_ring_element(rx, rx->last_head)) {
++	if (lan743x_rx_init_ring_element(rx, rx->last_head,
++					 GFP_ATOMIC | GFP_DMA)) {
+ 		/* failed to allocate next skb.
+ 		 * Memory is very low.
+ 		 * Drop this packet and reuse buffer.
+@@ -2276,6 +2288,16 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 		ret = -EINVAL;
+ 		goto cleanup;
+ 	}
++	if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
++				      DMA_BIT_MASK(64))) {
++		if (dma_set_mask_and_coherent(&rx->adapter->pdev->dev,
++					      DMA_BIT_MASK(32))) {
++			dev_warn(&rx->adapter->pdev->dev,
++				 "lan743x_: No suitable DMA available\n");
++			ret = -ENOMEM;
++			goto cleanup;
++		}
++	}
+ 	ring_allocation_size = ALIGN(rx->ring_size *
+ 				     sizeof(struct lan743x_rx_descriptor),
+ 				     PAGE_SIZE);
+@@ -2315,13 +2337,16 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 
+ 	rx->last_head = 0;
+ 	for (index = 0; index < rx->ring_size; index++) {
+-		ret = lan743x_rx_init_ring_element(rx, index);
++		ret = lan743x_rx_init_ring_element(rx, index, GFP_KERNEL);
+ 		if (ret)
+ 			goto cleanup;
+ 	}
+ 	return 0;
+ 
+ cleanup:
++	netif_warn(rx->adapter, ifup, rx->adapter->netdev,
++		   "Error allocating memory for LAN743x\n");
++
+ 	lan743x_rx_ring_cleanup(rx);
+ 	return ret;
+ }
+@@ -3019,6 +3044,8 @@ static int lan743x_pm_resume(struct device *dev)
+ 	if (ret) {
+ 		netif_err(adapter, probe, adapter->netdev,
+ 			  "lan743x_hardware_init returned %d\n", ret);
++		lan743x_pci_cleanup(adapter);
++		return ret;
+ 	}
+ 
+ 	/* open netdev when netdev is at running state while resume.
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index 64c6842bd452c..6d8406cfd38a1 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -1015,9 +1015,6 @@ static int lpc_eth_close(struct net_device *ndev)
+ 	napi_disable(&pldat->napi);
+ 	netif_stop_queue(ndev);
+ 
+-	if (ndev->phydev)
+-		phy_stop(ndev->phydev);
+-
+ 	spin_lock_irqsave(&pldat->lock, flags);
+ 	__lpc_eth_reset(pldat);
+ 	netif_carrier_off(ndev);
+@@ -1025,6 +1022,8 @@ static int lpc_eth_close(struct net_device *ndev)
+ 	writel(0, LPC_ENET_MAC2(pldat->net_base));
+ 	spin_unlock_irqrestore(&pldat->lock, flags);
+ 
++	if (ndev->phydev)
++		phy_stop(ndev->phydev);
+ 	clk_disable_unprepare(pldat->clk);
+ 
+ 	return 0;
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 3fa9c15ec81e2..6865d9319197f 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -548,7 +548,6 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
+ 	err = device_register(&bus->dev);
+ 	if (err) {
+ 		pr_err("mii_bus %s failed to register\n", bus->id);
+-		put_device(&bus->dev);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 8eeb26d8aeb7d..cbf344c5db610 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -243,62 +243,10 @@ static void phy_sanitize_settings(struct phy_device *phydev)
+ 	}
+ }
+ 
+-int phy_ethtool_ksettings_set(struct phy_device *phydev,
+-			      const struct ethtool_link_ksettings *cmd)
+-{
+-	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+-	u8 autoneg = cmd->base.autoneg;
+-	u8 duplex = cmd->base.duplex;
+-	u32 speed = cmd->base.speed;
+-
+-	if (cmd->base.phy_address != phydev->mdio.addr)
+-		return -EINVAL;
+-
+-	linkmode_copy(advertising, cmd->link_modes.advertising);
+-
+-	/* We make sure that we don't pass unsupported values in to the PHY */
+-	linkmode_and(advertising, advertising, phydev->supported);
+-
+-	/* Verify the settings we care about. */
+-	if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
+-		return -EINVAL;
+-
+-	if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
+-		return -EINVAL;
+-
+-	if (autoneg == AUTONEG_DISABLE &&
+-	    ((speed != SPEED_1000 &&
+-	      speed != SPEED_100 &&
+-	      speed != SPEED_10) ||
+-	     (duplex != DUPLEX_HALF &&
+-	      duplex != DUPLEX_FULL)))
+-		return -EINVAL;
+-
+-	phydev->autoneg = autoneg;
+-
+-	if (autoneg == AUTONEG_DISABLE) {
+-		phydev->speed = speed;
+-		phydev->duplex = duplex;
+-	}
+-
+-	linkmode_copy(phydev->advertising, advertising);
+-
+-	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+-			 phydev->advertising, autoneg == AUTONEG_ENABLE);
+-
+-	phydev->master_slave_set = cmd->base.master_slave_cfg;
+-	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+-
+-	/* Restart the PHY */
+-	phy_start_aneg(phydev);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(phy_ethtool_ksettings_set);
+-
+ void phy_ethtool_ksettings_get(struct phy_device *phydev,
+ 			       struct ethtool_link_ksettings *cmd)
+ {
++	mutex_lock(&phydev->lock);
+ 	linkmode_copy(cmd->link_modes.supported, phydev->supported);
+ 	linkmode_copy(cmd->link_modes.advertising, phydev->advertising);
+ 	linkmode_copy(cmd->link_modes.lp_advertising, phydev->lp_advertising);
+@@ -317,6 +265,7 @@ void phy_ethtool_ksettings_get(struct phy_device *phydev,
+ 	cmd->base.autoneg = phydev->autoneg;
+ 	cmd->base.eth_tp_mdix_ctrl = phydev->mdix_ctrl;
+ 	cmd->base.eth_tp_mdix = phydev->mdix;
++	mutex_unlock(&phydev->lock);
+ }
+ EXPORT_SYMBOL(phy_ethtool_ksettings_get);
+ 
+@@ -751,7 +700,7 @@ static int phy_check_link_status(struct phy_device *phydev)
+ }
+ 
+ /**
+- * phy_start_aneg - start auto-negotiation for this PHY device
++ * _phy_start_aneg - start auto-negotiation for this PHY device
+  * @phydev: the phy_device struct
+  *
+  * Description: Sanitizes the settings (if we're not autonegotiating
+@@ -759,25 +708,43 @@ static int phy_check_link_status(struct phy_device *phydev)
+  *   If the PHYCONTROL Layer is operating, we change the state to
+  *   reflect the beginning of Auto-negotiation or forcing.
+  */
+-int phy_start_aneg(struct phy_device *phydev)
++static int _phy_start_aneg(struct phy_device *phydev)
+ {
+ 	int err;
+ 
++	lockdep_assert_held(&phydev->lock);
++
+ 	if (!phydev->drv)
+ 		return -EIO;
+ 
+-	mutex_lock(&phydev->lock);
+-
+ 	if (AUTONEG_DISABLE == phydev->autoneg)
+ 		phy_sanitize_settings(phydev);
+ 
+ 	err = phy_config_aneg(phydev);
+ 	if (err < 0)
+-		goto out_unlock;
++		return err;
+ 
+ 	if (phy_is_started(phydev))
+ 		err = phy_check_link_status(phydev);
+-out_unlock:
++
++	return err;
++}
++
++/**
++ * phy_start_aneg - start auto-negotiation for this PHY device
++ * @phydev: the phy_device struct
++ *
++ * Description: Sanitizes the settings (if we're not autonegotiating
++ *   them), and then calls the driver's config_aneg function.
++ *   If the PHYCONTROL Layer is operating, we change the state to
++ *   reflect the beginning of Auto-negotiation or forcing.
++ */
++int phy_start_aneg(struct phy_device *phydev)
++{
++	int err;
++
++	mutex_lock(&phydev->lock);
++	err = _phy_start_aneg(phydev);
+ 	mutex_unlock(&phydev->lock);
+ 
+ 	return err;
+@@ -800,6 +767,61 @@ static int phy_poll_aneg_done(struct phy_device *phydev)
+ 	return ret < 0 ? ret : 0;
+ }
+ 
++int phy_ethtool_ksettings_set(struct phy_device *phydev,
++			      const struct ethtool_link_ksettings *cmd)
++{
++	__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
++	u8 autoneg = cmd->base.autoneg;
++	u8 duplex = cmd->base.duplex;
++	u32 speed = cmd->base.speed;
++
++	if (cmd->base.phy_address != phydev->mdio.addr)
++		return -EINVAL;
++
++	linkmode_copy(advertising, cmd->link_modes.advertising);
++
++	/* We make sure that we don't pass unsupported values in to the PHY */
++	linkmode_and(advertising, advertising, phydev->supported);
++
++	/* Verify the settings we care about. */
++	if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
++		return -EINVAL;
++
++	if (autoneg == AUTONEG_ENABLE && linkmode_empty(advertising))
++		return -EINVAL;
++
++	if (autoneg == AUTONEG_DISABLE &&
++	    ((speed != SPEED_1000 &&
++	      speed != SPEED_100 &&
++	      speed != SPEED_10) ||
++	     (duplex != DUPLEX_HALF &&
++	      duplex != DUPLEX_FULL)))
++		return -EINVAL;
++
++	mutex_lock(&phydev->lock);
++	phydev->autoneg = autoneg;
++
++	if (autoneg == AUTONEG_DISABLE) {
++		phydev->speed = speed;
++		phydev->duplex = duplex;
++	}
++
++	linkmode_copy(phydev->advertising, advertising);
++
++	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
++			 phydev->advertising, autoneg == AUTONEG_ENABLE);
++
++	phydev->master_slave_set = cmd->base.master_slave_cfg;
++	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
++
++	/* Restart the PHY */
++	_phy_start_aneg(phydev);
++
++	mutex_unlock(&phydev->lock);
++	return 0;
++}
++EXPORT_SYMBOL(phy_ethtool_ksettings_set);
++
+ /**
+  * phy_speed_down - set speed to lowest speed supported by both link partners
+  * @phydev: the phy_device struct
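
The refactor is the classic locked/unlocked split: phy_ethtool_ksettings_set() now updates the phydev fields and kicks autonegotiation inside one critical section, so the worker becomes an internal _phy_start_aneg() that asserts phydev->lock is held, while the public phy_start_aneg() keeps the lock/unlock wrapper. A pthread sketch of the shape (all names hypothetical):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int state;

/* Internal variant: caller must hold 'lock' (the kernel asserts this
 * with lockdep_assert_held(); there is no portable userspace check).
 */
static int _do_work_locked(int v)
{
	state = v;
	return 0;
}

/* Public variant keeps the old contract: take the lock, do the work. */
static int do_work(int v)
{
	int err;

	pthread_mutex_lock(&lock);
	err = _do_work_locked(v);
	pthread_mutex_unlock(&lock);
	return err;
}

/* An entry point that needs the lock across more than the worker,
 * like phy_ethtool_ksettings_set(): update related fields and kick
 * the worker in one critical section without recursing on the lock.
 */
static int configure_and_work(int v)
{
	pthread_mutex_lock(&lock);
	state = v - 1;
	_do_work_locked(v);
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	do_work(1);
	configure_and_work(2);
	printf("state=%d\n", state);
	return 0;
}
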
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 6d092d78e0cbc..a7e58c327d7f6 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -3734,6 +3734,12 @@ static int lan78xx_probe(struct usb_interface *intf,
+ 
+ 	dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out, 1);
+ 
++	/* Reject broken descriptors. */
++	if (dev->maxpacket == 0) {
++		ret = -ENODEV;
++		goto out4;
++	}
++
+ 	/* driver requires remote-wakeup capability during autosuspend. */
+ 	intf->needs_remote_wakeup = 1;
+ 
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 470e1c1e63535..cf0728a63ace0 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1788,6 +1788,11 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ 	if (!dev->rx_urb_size)
+ 		dev->rx_urb_size = dev->hard_mtu;
+ 	dev->maxpacket = usb_maxpacket (dev->udev, dev->out, 1);
++	if (dev->maxpacket == 0) {
++		/* that is a broken device */
++		status = -ENODEV;
++		goto out4;
++	}
+ 
+ 	/* let userspace know we have a random address */
+ 	if (ether_addr_equal(net->dev_addr, node_id))
+diff --git a/drivers/nfc/port100.c b/drivers/nfc/port100.c
+index 4df926cc37d03..2777c0dd23f70 100644
+--- a/drivers/nfc/port100.c
++++ b/drivers/nfc/port100.c
+@@ -1003,11 +1003,11 @@ static u64 port100_get_command_type_mask(struct port100 *dev)
+ 
+ 	skb = port100_alloc_skb(dev, 0);
+ 	if (!skb)
+-		return -ENOMEM;
++		return 0;
+ 
+ 	resp = port100_send_cmd_sync(dev, PORT100_CMD_GET_COMMAND_TYPE, skb);
+ 	if (IS_ERR(resp))
+-		return PTR_ERR(resp);
++		return 0;
+ 
+ 	if (resp->len < 8)
+ 		mask = 0;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index fd28a23d45ed6..5e412c080101c 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -913,12 +913,14 @@ static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
+ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
++	int req_data_len = req->data_len;
+ 
+ 	while (true) {
+ 		struct page *page = nvme_tcp_req_cur_page(req);
+ 		size_t offset = nvme_tcp_req_cur_offset(req);
+ 		size_t len = nvme_tcp_req_cur_length(req);
+ 		bool last = nvme_tcp_pdu_last_send(req, len);
++		int req_data_sent = req->data_sent;
+ 		int ret, flags = MSG_DONTWAIT;
+ 
+ 		if (last && !queue->data_digest && !nvme_tcp_queue_more(queue))
+@@ -945,7 +947,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
+ 		 * in the request where we don't want to modify it as we may
+ 		 * compete with the RX path completing the request.
+ 		 */
+-		if (req->data_sent + ret < req->data_len)
++		if (req_data_sent + ret < req_data_len)
+ 			nvme_tcp_advance_req(req, ret);
+ 
+ 		/* fully successful last send in current PDU */
+@@ -1035,10 +1037,11 @@ static int nvme_tcp_try_send_data_pdu(struct nvme_tcp_request *req)
+ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ {
+ 	struct nvme_tcp_queue *queue = req->queue;
++	size_t offset = req->offset;
+ 	int ret;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+-		.iov_base = &req->ddgst + req->offset,
++		.iov_base = (u8 *)&req->ddgst + req->offset,
+ 		.iov_len = NVME_TCP_DIGEST_LENGTH - req->offset
+ 	};
+ 
+@@ -1051,7 +1054,7 @@ static int nvme_tcp_try_send_ddgst(struct nvme_tcp_request *req)
+ 	if (unlikely(ret <= 0))
+ 		return ret;
+ 
+-	if (req->offset + ret == NVME_TCP_DIGEST_LENGTH) {
++	if (offset + ret == NVME_TCP_DIGEST_LENGTH) {
+ 		nvme_tcp_done_send_req(queue);
+ 		return 1;
+ 	}
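
Two independent problems are fixed above: req->data_len and req->data_sent are snapshotted into locals before the send, since the RX path may complete and recycle the request concurrently, and the ddgst kvec gains a cast because &req->ddgst has a multi-byte element type, so adding req->offset to it advances offset * sizeof(element) bytes instead of offset bytes. The cast in isolation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t digest[4] = { 0 };	/* stand-in for a multi-byte field */
	unsigned int offset = 2;

	void *wrong = digest + offset;			/* 2 * 4 = 8 bytes in */
	void *right = (uint8_t *)digest + offset;	/* 2 bytes in */

	printf("wrong advances %td bytes, right advances %td bytes\n",
	       (char *)wrong - (char *)digest,
	       (char *)right - (char *)digest);
	return 0;
}
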
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 07ee347ea3f3c..d641bfa07a801 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -702,7 +702,7 @@ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
+ 	struct nvmet_tcp_queue *queue = cmd->queue;
+ 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
+ 	struct kvec iov = {
+-		.iov_base = &cmd->exp_ddgst + cmd->offset,
++		.iov_base = (u8 *)&cmd->exp_ddgst + cmd->offset,
+ 		.iov_len = NVME_TCP_DIGEST_LENGTH - cmd->offset
+ 	};
+ 	int ret;
+diff --git a/drivers/pinctrl/bcm/pinctrl-ns.c b/drivers/pinctrl/bcm/pinctrl-ns.c
+index e79690bd8b85f..d7f8175d2c1c8 100644
+--- a/drivers/pinctrl/bcm/pinctrl-ns.c
++++ b/drivers/pinctrl/bcm/pinctrl-ns.c
+@@ -5,7 +5,6 @@
+ 
+ #include <linux/err.h>
+ #include <linux/io.h>
+-#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+@@ -13,7 +12,6 @@
+ #include <linux/pinctrl/pinctrl.h>
+ #include <linux/pinctrl/pinmux.h>
+ #include <linux/platform_device.h>
+-#include <linux/regmap.h>
+ #include <linux/slab.h>
+ 
+ #define FLAG_BCM4708		BIT(1)
+@@ -24,8 +22,7 @@ struct ns_pinctrl {
+ 	struct device *dev;
+ 	unsigned int chipset_flag;
+ 	struct pinctrl_dev *pctldev;
+-	struct regmap *regmap;
+-	u32 offset;
++	void __iomem *base;
+ 
+ 	struct pinctrl_desc pctldesc;
+ 	struct ns_pinctrl_group *groups;
+@@ -232,9 +229,9 @@ static int ns_pinctrl_set_mux(struct pinctrl_dev *pctrl_dev,
+ 		unset |= BIT(pin_number);
+ 	}
+ 
+-	regmap_read(ns_pinctrl->regmap, ns_pinctrl->offset, &tmp);
++	tmp = readl(ns_pinctrl->base);
+ 	tmp &= ~unset;
+-	regmap_write(ns_pinctrl->regmap, ns_pinctrl->offset, tmp);
++	writel(tmp, ns_pinctrl->base);
+ 
+ 	return 0;
+ }
+@@ -266,13 +263,13 @@ static const struct of_device_id ns_pinctrl_of_match_table[] = {
+ static int ns_pinctrl_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *np = dev->of_node;
+ 	const struct of_device_id *of_id;
+ 	struct ns_pinctrl *ns_pinctrl;
+ 	struct pinctrl_desc *pctldesc;
+ 	struct pinctrl_pin_desc *pin;
+ 	struct ns_pinctrl_group *group;
+ 	struct ns_pinctrl_function *function;
++	struct resource *res;
+ 	int i;
+ 
+ 	ns_pinctrl = devm_kzalloc(dev, sizeof(*ns_pinctrl), GFP_KERNEL);
+@@ -290,18 +287,12 @@ static int ns_pinctrl_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	ns_pinctrl->chipset_flag = (uintptr_t)of_id->data;
+ 
+-	ns_pinctrl->regmap = syscon_node_to_regmap(of_get_parent(np));
+-	if (IS_ERR(ns_pinctrl->regmap)) {
+-		int err = PTR_ERR(ns_pinctrl->regmap);
+-
+-		dev_err(dev, "Failed to map pinctrl regs: %d\n", err);
+-
+-		return err;
+-	}
+-
+-	if (of_property_read_u32(np, "offset", &ns_pinctrl->offset)) {
+-		dev_err(dev, "Failed to get register offset\n");
+-		return -ENOENT;
++	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
++					   "cru_gpio_control");
++	ns_pinctrl->base = devm_ioremap_resource(dev, res);
++	if (IS_ERR(ns_pinctrl->base)) {
++		dev_err(dev, "Failed to map pinctrl regs\n");
++		return PTR_ERR(ns_pinctrl->base);
+ 	}
+ 
+ 	memcpy(pctldesc, &ns_pinctrl_desc, sizeof(*pctldesc));
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 5b764740b8298..c5fd75bbf5d97 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -832,6 +832,34 @@ static const struct pinconf_ops amd_pinconf_ops = {
+ 	.pin_config_group_set = amd_pinconf_group_set,
+ };
+ 
++static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
++{
++	struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++	unsigned long flags;
++	u32 pin_reg, mask;
++	int i;
++
++	mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
++		BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
++		BIT(WAKE_CNTRL_OFF_S4);
++
++	for (i = 0; i < desc->npins; i++) {
++		int pin = desc->pins[i].number;
++		const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
++
++		if (!pd)
++			continue;
++
++		raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++
++		pin_reg = readl(gpio_dev->base + i * 4);
++		pin_reg &= ~mask;
++		writel(pin_reg, gpio_dev->base + i * 4);
++
++		raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
++	}
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static bool amd_gpio_should_save(struct amd_gpio *gpio_dev, unsigned int pin)
+ {
+@@ -969,6 +997,9 @@ static int amd_gpio_probe(struct platform_device *pdev)
+ 		return PTR_ERR(gpio_dev->pctrl);
+ 	}
+ 
++	/* Disable and mask interrupts */
++	amd_gpio_irq_init(gpio_dev);
++
+ 	girq = &gpio_dev->gc.irq;
+ 	girq->chip = &amd_gpio_irqchip;
+ 	/* This will let us handle the parent IRQ in the driver */
+diff --git a/drivers/reset/reset-brcmstb-rescal.c b/drivers/reset/reset-brcmstb-rescal.c
+index b6f074d6a65f8..433fa0c40e477 100644
+--- a/drivers/reset/reset-brcmstb-rescal.c
++++ b/drivers/reset/reset-brcmstb-rescal.c
+@@ -38,7 +38,7 @@ static int brcm_rescal_reset_set(struct reset_controller_dev *rcdev,
+ 	}
+ 
+ 	ret = readl_poll_timeout(base + BRCM_RESCAL_STATUS, reg,
+-				 !(reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
++				 (reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
+ 	if (ret) {
+ 		dev_err(data->dev, "time out on SATA/PCIe rescal\n");
+ 		return ret;
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 935b01ee44b74..f100fe4e9b2a9 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -1696,6 +1696,7 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt,
+ 
+ 	spin_lock_irqsave(&evt->queue->l_lock, flags);
+ 	list_add_tail(&evt->queue_list, &evt->queue->sent);
++	atomic_set(&evt->active, 1);
+ 
+ 	mb();
+ 
+@@ -1710,6 +1711,7 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt,
+ 				     be64_to_cpu(crq_as_u64[1]));
+ 
+ 	if (rc) {
++		atomic_set(&evt->active, 0);
+ 		list_del(&evt->queue_list);
+ 		spin_unlock_irqrestore(&evt->queue->l_lock, flags);
+ 		del_timer(&evt->timer);
+@@ -1737,7 +1739,6 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt,
+ 
+ 		evt->done(evt);
+ 	} else {
+-		atomic_set(&evt->active, 1);
+ 		spin_unlock_irqrestore(&evt->queue->l_lock, flags);
+ 		ibmvfc_trc_start(evt);
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index 427a2ff7e9da1..9cdedbff5b884 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -642,9 +642,9 @@ static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
+ 	}
+ 
+ 	/* setting for three timeout values for traffic class #0 */
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0), 8064);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1), 28224);
+-	ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2), 20160);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_FC0PROTTIMEOUTVAL), 8064);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_TC0REPLAYTIMEOUTVAL), 28224);
++	ufshcd_dme_set(hba, UIC_ARG_MIB(DL_AFC0REQTIMEOUTVAL), 20160);
+ 
+ 	return 0;
+ out:
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index dd95dfd85e980..3035bb6f54585 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -576,7 +576,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
+ 	/* Last one doesn't continue. */
+ 	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
+ 	if (!indirect && vq->use_dma_api)
+-		vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags =
++		vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
+ 			~VRING_DESC_F_NEXT;
+ 
+ 	if (indirect) {
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 643c6c2d0b728..ced2fc0deb8c4 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -71,8 +71,6 @@
+ #define TCOBASE(p)	((p)->tco_res->start)
+ /* SMI Control and Enable Register */
+ #define SMI_EN(p)	((p)->smi_res->start)
+-#define TCO_EN		(1 << 13)
+-#define GBL_SMI_EN	(1 << 0)
+ 
+ #define TCO_RLD(p)	(TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
+ #define TCOv1_TMR(p)	(TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
+@@ -357,12 +355,8 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 
+ 	tmrval = seconds_to_ticks(p, t);
+ 
+-	/*
+-	 * If TCO SMIs are off, the timer counts down twice before rebooting.
+-	 * Otherwise, the BIOS generally reboots when the SMI triggers.
+-	 */
+-	if (p->smi_res &&
+-	    (inl(SMI_EN(p)) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
++	/* For TCO v1 the timer counts down twice before rebooting */
++	if (p->iTCO_version == 1)
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
+@@ -527,7 +521,7 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		 * Disables TCO logic generating an SMI#
+ 		 */
+ 		val32 = inl(SMI_EN(p));
+-		val32 &= ~TCO_EN;	/* Turn off SMI clearing watchdog */
++		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */
+ 		outl(val32, SMI_EN(p));
+ 	}
+ 
+diff --git a/drivers/watchdog/sbsa_gwdt.c b/drivers/watchdog/sbsa_gwdt.c
+index ee9ff38929eb5..6f4319bdbc500 100644
+--- a/drivers/watchdog/sbsa_gwdt.c
++++ b/drivers/watchdog/sbsa_gwdt.c
+@@ -130,7 +130,7 @@ static u64 sbsa_gwdt_reg_read(struct sbsa_gwdt *gwdt)
+ 	if (gwdt->version == 0)
+ 		return readl(gwdt->control_base + SBSA_GWDT_WOR);
+ 	else
+-		return readq(gwdt->control_base + SBSA_GWDT_WOR);
++		return lo_hi_readq(gwdt->control_base + SBSA_GWDT_WOR);
+ }
+ 
+ static void sbsa_gwdt_reg_write(u64 val, struct sbsa_gwdt *gwdt)
+@@ -138,7 +138,7 @@ static void sbsa_gwdt_reg_write(u64 val, struct sbsa_gwdt *gwdt)
+ 	if (gwdt->version == 0)
+ 		writel((u32)val, gwdt->control_base + SBSA_GWDT_WOR);
+ 	else
+-		writeq(val, gwdt->control_base + SBSA_GWDT_WOR);
++		lo_hi_writeq(val, gwdt->control_base + SBSA_GWDT_WOR);
+ }
+ 
+ /*
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index 8521942f5af2b..481017e1dac5a 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -1251,7 +1251,7 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ {
+ 	struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
+ 	struct journal_head *jh;
+-	int ret;
++	int ret = 1;
+ 
+ 	if (ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap))
+ 		return 0;
+@@ -1259,14 +1259,18 @@ static int ocfs2_test_bg_bit_allocatable(struct buffer_head *bg_bh,
+ 	if (!buffer_jbd(bg_bh))
+ 		return 1;
+ 
+-	jh = bh2jh(bg_bh);
+-	spin_lock(&jh->b_state_lock);
+-	bg = (struct ocfs2_group_desc *) jh->b_committed_data;
+-	if (bg)
+-		ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
+-	else
+-		ret = 1;
+-	spin_unlock(&jh->b_state_lock);
++	jbd_lock_bh_journal_head(bg_bh);
++	if (buffer_jbd(bg_bh)) {
++		jh = bh2jh(bg_bh);
++		spin_lock(&jh->b_state_lock);
++		bg = (struct ocfs2_group_desc *) jh->b_committed_data;
++		if (bg)
++			ret = !ocfs2_test_bit(nr, (unsigned long *)bg->bg_bitmap);
++		else
++			ret = 1;
++		spin_unlock(&jh->b_state_lock);
++	}
++	jbd_unlock_bh_journal_head(bg_bh);
+ 
+ 	return ret;
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 11da5671d4f09..5c242e0477328 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -900,8 +900,11 @@ struct bpf_array_aux {
+ 	 * stored in the map to make sure that all callers and callees have
+ 	 * the same prog type and JITed flag.
+ 	 */
+-	enum bpf_prog_type type;
+-	bool jited;
++	struct {
++		spinlock_t lock;
++		enum bpf_prog_type type;
++		bool jited;
++	} owner;
+ 	/* Programs with direct jumps into programs part of this array. */
+ 	struct list_head poke_progs;
+ 	struct bpf_map *map;
+diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
+index ae3ac3a2018ca..2eb9c53468e77 100644
+--- a/include/linux/bpf_types.h
++++ b/include/linux/bpf_types.h
+@@ -101,14 +101,14 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_STACK_TRACE, stack_trace_map_ops)
+ #endif
+ BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY_OF_MAPS, array_of_maps_map_ops)
+ BPF_MAP_TYPE(BPF_MAP_TYPE_HASH_OF_MAPS, htab_of_maps_map_ops)
+-#ifdef CONFIG_NET
+-BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
+-BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP_HASH, dev_map_hash_ops)
+-BPF_MAP_TYPE(BPF_MAP_TYPE_SK_STORAGE, sk_storage_map_ops)
+ #ifdef CONFIG_BPF_LSM
+ BPF_MAP_TYPE(BPF_MAP_TYPE_INODE_STORAGE, inode_storage_map_ops)
+ #endif
+ BPF_MAP_TYPE(BPF_MAP_TYPE_TASK_STORAGE, task_storage_map_ops)
++#ifdef CONFIG_NET
++BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
++BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP_HASH, dev_map_hash_ops)
++BPF_MAP_TYPE(BPF_MAP_TYPE_SK_STORAGE, sk_storage_map_ops)
+ BPF_MAP_TYPE(BPF_MAP_TYPE_CPUMAP, cpu_map_ops)
+ #if defined(CONFIG_XDP_SOCKETS)
+ BPF_MAP_TYPE(BPF_MAP_TYPE_XSKMAP, xsk_map_ops)
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 5922031ffab6e..c573fccfc4751 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -171,6 +171,15 @@ enum pageflags {
+ 	/* Compound pages. Stored in first tail page's flags */
+ 	PG_double_map = PG_workingset,
+ 
++#ifdef CONFIG_MEMORY_FAILURE
++	/*
++	 * Compound pages. Stored in first tail page's flags.
++	 * Indicates that at least one subpage is hwpoisoned in the
++	 * THP.
++	 */
++	PG_has_hwpoisoned = PG_mappedtodisk,
++#endif
++
+ 	/* non-lru isolated movable page */
+ 	PG_isolated = PG_reclaim,
+ 
+@@ -703,6 +712,20 @@ PAGEFLAG_FALSE(DoubleMap)
+ 	TESTSCFLAG_FALSE(DoubleMap)
+ #endif
+ 
++#if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
++/*
++ * PageHasHWPoisoned indicates that at least one subpage is hwpoisoned in the
++ * compound page.
++ *
++ * This flag is set by hwpoison handler.  Cleared by THP split or free page.
++ */
++PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
++	TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
++#else
++PAGEFLAG_FALSE(HasHWPoisoned)
++	TESTSCFLAG_FALSE(HasHWPoisoned)
++#endif
++
+ /*
+  * Check if a page is currently marked HWPoisoned. Note that this check is
+  * best effort only and inherently racy: there is no way to synchronize with
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 161cdf7df1a07..db581a761dcf6 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -5350,7 +5350,6 @@ static inline void wiphy_unlock(struct wiphy *wiphy)
+  *	netdev and may otherwise be used by driver read-only, will be update
+  *	by cfg80211 on change_interface
+  * @mgmt_registrations: list of registrations for management frames
+- * @mgmt_registrations_lock: lock for the list
+  * @mgmt_registrations_need_update: mgmt registrations were updated,
+  *	need to propagate the update to the driver
+  * @mtx: mutex used to lock data in this struct, may be used by drivers
+@@ -5397,7 +5396,6 @@ struct wireless_dev {
+ 	u32 identifier;
+ 
+ 	struct list_head mgmt_registrations;
+-	spinlock_t mgmt_registrations_lock;
+ 	u8 mgmt_registrations_need_update:1;
+ 
+ 	struct mutex mtx;
+diff --git a/include/net/tls.h b/include/net/tls.h
+index be4b3e1cac462..64217c9873c92 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -358,6 +358,7 @@ int tls_sk_query(struct sock *sk, int optname, char __user *optval,
+ 		int __user *optlen);
+ int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
+ 		  unsigned int optlen);
++void tls_err_abort(struct sock *sk, int err);
+ 
+ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+ void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
+@@ -466,12 +467,6 @@ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
+ #endif
+ }
+ 
+-static inline void tls_err_abort(struct sock *sk, int err)
+-{
+-	sk->sk_err = err;
+-	sk_error_report(sk);
+-}
+-
+ static inline bool tls_bigint_increment(unsigned char *seq, int len)
+ {
+ 	int i;
+@@ -512,7 +507,7 @@ static inline void tls_advance_record_sn(struct sock *sk,
+ 					 struct cipher_context *ctx)
+ {
+ 	if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
+-		tls_err_abort(sk, EBADMSG);
++		tls_err_abort(sk, -EBADMSG);
+ 
+ 	if (prot->version != TLS_1_3_VERSION &&
+ 	    prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 3c4105603f9db..db3c88fe08940 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -1051,6 +1051,7 @@ static struct bpf_map *prog_array_map_alloc(union bpf_attr *attr)
+ 	INIT_WORK(&aux->work, prog_array_map_clear_deferred);
+ 	INIT_LIST_HEAD(&aux->poke_progs);
+ 	mutex_init(&aux->poke_mutex);
++	spin_lock_init(&aux->owner.lock);
+ 
+ 	map = array_map_alloc(attr);
+ 	if (IS_ERR(map)) {
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index c019611fbc8f4..4c0c0146f956c 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1821,20 +1821,26 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ bool bpf_prog_array_compatible(struct bpf_array *array,
+ 			       const struct bpf_prog *fp)
+ {
++	bool ret;
++
+ 	if (fp->kprobe_override)
+ 		return false;
+ 
+-	if (!array->aux->type) {
++	spin_lock(&array->aux->owner.lock);
++
++	if (!array->aux->owner.type) {
+ 		/* There's no owner yet where we could check for
+ 		 * compatibility.
+ 		 */
+-		array->aux->type  = fp->type;
+-		array->aux->jited = fp->jited;
+-		return true;
++		array->aux->owner.type  = fp->type;
++		array->aux->owner.jited = fp->jited;
++		ret = true;
++	} else {
++		ret = array->aux->owner.type  == fp->type &&
++		      array->aux->owner.jited == fp->jited;
+ 	}
+-
+-	return array->aux->type  == fp->type &&
+-	       array->aux->jited == fp->jited;
++	spin_unlock(&array->aux->owner.lock);
++	return ret;
+ }
+ 
+ static int bpf_check_tail_call(const struct bpf_prog *fp)
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index e343f158e5564..92ed4b2984b8d 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -543,8 +543,10 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
+ 
+ 	if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) {
+ 		array = container_of(map, struct bpf_array, map);
+-		type  = array->aux->type;
+-		jited = array->aux->jited;
++		spin_lock(&array->aux->owner.lock);
++		type  = array->aux->owner.type;
++		jited = array->aux->owner.jited;
++		spin_unlock(&array->aux->owner.lock);
+ 	}
+ 
+ 	seq_printf(m,
+@@ -1064,7 +1066,7 @@ static int map_lookup_elem(union bpf_attr *attr)
+ 	value_size = bpf_map_value_size(map);
+ 
+ 	err = -ENOMEM;
+-	value = kmalloc(value_size, GFP_USER | __GFP_NOWARN);
++	value = kvmalloc(value_size, GFP_USER | __GFP_NOWARN);
+ 	if (!value)
+ 		goto free_key;
+ 
+@@ -1079,7 +1081,7 @@ static int map_lookup_elem(union bpf_attr *attr)
+ 	err = 0;
+ 
+ free_value:
+-	kfree(value);
++	kvfree(value);
+ free_key:
+ 	kfree(key);
+ err_put:
+@@ -1125,16 +1127,10 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
+ 		goto err_put;
+ 	}
+ 
+-	if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
+-	    map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH ||
+-	    map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY ||
+-	    map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE)
+-		value_size = round_up(map->value_size, 8) * num_possible_cpus();
+-	else
+-		value_size = map->value_size;
++	value_size = bpf_map_value_size(map);
+ 
+ 	err = -ENOMEM;
+-	value = kmalloc(value_size, GFP_USER | __GFP_NOWARN);
++	value = kvmalloc(value_size, GFP_USER | __GFP_NOWARN);
+ 	if (!value)
+ 		goto free_key;
+ 
+@@ -1145,7 +1141,7 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
+ 	err = bpf_map_update_value(map, f, key, value, attr->flags);
+ 
+ free_value:
+-	kfree(value);
++	kvfree(value);
+ free_key:
+ 	kfree(key);
+ err_put:
+@@ -1331,12 +1327,11 @@ int generic_map_update_batch(struct bpf_map *map,
+ 	void __user *values = u64_to_user_ptr(attr->batch.values);
+ 	void __user *keys = u64_to_user_ptr(attr->batch.keys);
+ 	u32 value_size, cp, max_count;
+-	int ufd = attr->map_fd;
++	int ufd = attr->batch.map_fd;
+ 	void *key, *value;
+ 	struct fd f;
+ 	int err = 0;
+ 
+-	f = fdget(ufd);
+ 	if (attr->batch.elem_flags & ~BPF_F_LOCK)
+ 		return -EINVAL;
+ 
+@@ -1355,12 +1350,13 @@ int generic_map_update_batch(struct bpf_map *map,
+ 	if (!key)
+ 		return -ENOMEM;
+ 
+-	value = kmalloc(value_size, GFP_USER | __GFP_NOWARN);
++	value = kvmalloc(value_size, GFP_USER | __GFP_NOWARN);
+ 	if (!value) {
+ 		kfree(key);
+ 		return -ENOMEM;
+ 	}
+ 
++	f = fdget(ufd); /* bpf_map_do_batch() guarantees ufd is valid */
+ 	for (cp = 0; cp < max_count; cp++) {
+ 		err = -EFAULT;
+ 		if (copy_from_user(key, keys + cp * map->key_size,
+@@ -1378,8 +1374,9 @@ int generic_map_update_batch(struct bpf_map *map,
+ 	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
+ 		err = -EFAULT;
+ 
+-	kfree(value);
++	kvfree(value);
+ 	kfree(key);
++	fdput(f);
+ 	return err;
+ }
+ 
+@@ -1417,7 +1414,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ 	if (!buf_prevkey)
+ 		return -ENOMEM;
+ 
+-	buf = kmalloc(map->key_size + value_size, GFP_USER | __GFP_NOWARN);
++	buf = kvmalloc(map->key_size + value_size, GFP_USER | __GFP_NOWARN);
+ 	if (!buf) {
+ 		kfree(buf_prevkey);
+ 		return -ENOMEM;
+@@ -1480,7 +1477,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ 
+ free_buf:
+ 	kfree(buf_prevkey);
+-	kfree(buf);
++	kvfree(buf);
+ 	return err;
+ }
+ 
+@@ -1535,7 +1532,7 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
+ 	value_size = bpf_map_value_size(map);
+ 
+ 	err = -ENOMEM;
+-	value = kmalloc(value_size, GFP_USER | __GFP_NOWARN);
++	value = kvmalloc(value_size, GFP_USER | __GFP_NOWARN);
+ 	if (!value)
+ 		goto free_key;
+ 
+@@ -1567,7 +1564,7 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
+ 	err = 0;
+ 
+ free_value:
+-	kfree(value);
++	kvfree(value);
+ free_key:
+ 	kfree(key);
+ err_put:
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 3a0161c21b6ba..38750c385dd2c 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2174,8 +2174,10 @@ static void cgroup_kill_sb(struct super_block *sb)
+ 	 * And don't kill the default root.
+ 	 */
+ 	if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+-	    !percpu_ref_is_dying(&root->cgrp.self.refcnt))
++	    !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
++		cgroup_bpf_offline(&root->cgrp);
+ 		percpu_ref_kill(&root->cgrp.self.refcnt);
++	}
+ 	cgroup_put(&root->cgrp);
+ 	kernfs_kill_sb(sb);
+ }
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 163c2da2a6548..d4eb8590fa6bb 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2452,6 +2452,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
+ 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
+ 	lruvec = lock_page_lruvec(head);
+ 
++	ClearPageHasHWPoisoned(head);
++
+ 	for (i = nr - 1; i >= 1; i--) {
+ 		__split_huge_page_tail(head, i, lruvec, list);
+ 		/* Some pages can be beyond i_size: drop them from page cache */
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index b0412be08fa2c..b82b760acf949 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -445,22 +445,25 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
+ 	if (!transhuge_vma_enabled(vma, vm_flags))
+ 		return false;
+ 
++	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
++				vma->vm_pgoff, HPAGE_PMD_NR))
++		return false;
++
+ 	/* Enabled via shmem mount options or sysfs settings. */
+-	if (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) {
+-		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+-				HPAGE_PMD_NR);
+-	}
++	if (shmem_file(vma->vm_file))
++		return shmem_huge_enabled(vma);
+ 
+ 	/* THP settings require madvise. */
+ 	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
+ 		return false;
+ 
+-	/* Read-only file mappings need to be aligned for THP to work. */
++	/* Only regular file is valid */
+ 	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
+-	    !inode_is_open_for_write(vma->vm_file->f_inode) &&
+ 	    (vm_flags & VM_EXEC)) {
+-		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+-				HPAGE_PMD_NR);
++		struct inode *inode = vma->vm_file->f_inode;
++
++		return !inode_is_open_for_write(inode) &&
++			S_ISREG(inode->i_mode);
+ 	}
+ 
+ 	if (!vma->anon_vma || vma->vm_ops)
+@@ -1763,6 +1766,10 @@ static void collapse_file(struct mm_struct *mm,
+ 				filemap_flush(mapping);
+ 				result = SCAN_FAIL;
+ 				goto xa_unlocked;
++			} else if (PageWriteback(page)) {
++				xas_unlock_irq(&xas);
++				result = SCAN_FAIL;
++				goto xa_unlocked;
+ 			} else if (trylock_page(page)) {
+ 				get_page(page);
+ 				xas_unlock_irq(&xas);
+@@ -1798,7 +1805,8 @@ static void collapse_file(struct mm_struct *mm,
+ 			goto out_unlock;
+ 		}
+ 
+-		if (!is_shmem && PageDirty(page)) {
++		if (!is_shmem && (PageDirty(page) ||
++				  PageWriteback(page))) {
+ 			/*
+ 			 * khugepaged only works on read-only fd, so this
+ 			 * page is dirty because it hasn't been flushed
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 7df9fde18004c..c398d8524f6e0 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1148,20 +1148,6 @@ static int __get_hwpoison_page(struct page *page)
+ 	if (!HWPoisonHandlable(head))
+ 		return -EBUSY;
+ 
+-	if (PageTransHuge(head)) {
+-		/*
+-		 * Non anonymous thp exists only in allocation/free time. We
+-		 * can't handle such a case correctly, so let's give it up.
+-		 * This should be better than triggering BUG_ON when kernel
+-		 * tries to touch the "partially handled" page.
+-		 */
+-		if (!PageAnon(head)) {
+-			pr_err("Memory failure: %#lx: non anonymous thp\n",
+-				page_to_pfn(page));
+-			return 0;
+-		}
+-	}
+-
+ 	if (get_page_unless_zero(head)) {
+ 		if (head == compound_head(page))
+ 			return 1;
+@@ -1708,6 +1694,20 @@ try_again:
+ 	}
+ 
+ 	if (PageTransHuge(hpage)) {
++		/*
++		 * The flag must be set after the refcount is bumped
++		 * otherwise it may race with THP split.
++		 * And the flag can't be set in get_hwpoison_page() since
++		 * it is called by soft offline too and it is just called
++		 * for !MF_COUNT_INCREASE.  So here seems to be the best
++		 * place.
++		 *
++		 * Don't need care about the above error handling paths for
++		 * get_hwpoison_page() since they handle either free page
++		 * or unhandlable page.  The refcount is bumped iff the
++		 * page is a valid handlable page.
++		 */
++		SetPageHasHWPoisoned(hpage);
+ 		if (try_to_split_thp_page(p, "Memory Failure") < 0) {
+ 			action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+ 			res = -EBUSY;
+diff --git a/mm/memory.c b/mm/memory.c
+index 25fc46e872142..738f4e1df81ee 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3905,6 +3905,15 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
+ 	if (compound_order(page) != HPAGE_PMD_ORDER)
+ 		return ret;
+ 
++	/*
++	 * Just backoff if any subpage of a THP is corrupted otherwise
++	 * the corrupted page may mapped by PMD silently to escape the
++	 * check.  This kind of THP just can be PTE mapped.  Access to
++	 * the corrupted subpage should trigger SIGBUS as expected.
++	 */
++	if (unlikely(PageHasHWPoisoned(page)))
++		return ret;
++
+ 	/*
+ 	 * Archs like ppc64 need additional space to store information
+ 	 * related to pte entry. Use the preallocated table for that.
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7a28f7db7d286..7db847fa62f89 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1320,8 +1320,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
+ 
+ 		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
+ 
+-		if (compound)
++		if (compound) {
+ 			ClearPageDoubleMap(page);
++			ClearPageHasHWPoisoned(page);
++		}
+ 		for (i = 1; i < (1 << order); i++) {
+ 			if (compound)
+ 				bad += free_tail_pages_check(page, page + i);
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index 63d42dcc9324a..eb55b419faf88 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -1556,10 +1556,14 @@ int batadv_bla_init(struct batadv_priv *bat_priv)
+ 		return 0;
+ 
+ 	bat_priv->bla.claim_hash = batadv_hash_new(128);
+-	bat_priv->bla.backbone_hash = batadv_hash_new(32);
++	if (!bat_priv->bla.claim_hash)
++		return -ENOMEM;
+ 
+-	if (!bat_priv->bla.claim_hash || !bat_priv->bla.backbone_hash)
++	bat_priv->bla.backbone_hash = batadv_hash_new(32);
++	if (!bat_priv->bla.backbone_hash) {
++		batadv_hash_destroy(bat_priv->bla.claim_hash);
+ 		return -ENOMEM;
++	}
+ 
+ 	batadv_hash_set_lock_class(bat_priv->bla.claim_hash,
+ 				   &batadv_claim_hash_lock_class_key);
+diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
+index 3ddd66e4c29ef..5207cd8d6ad83 100644
+--- a/net/batman-adv/main.c
++++ b/net/batman-adv/main.c
+@@ -190,29 +190,41 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 
+ 	bat_priv->gw.generation = 0;
+ 
+-	ret = batadv_v_mesh_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
+-
+ 	ret = batadv_originator_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_orig;
++	}
+ 
+ 	ret = batadv_tt_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_tt;
++	}
++
++	ret = batadv_v_mesh_init(bat_priv);
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_v;
++	}
+ 
+ 	ret = batadv_bla_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_bla;
++	}
+ 
+ 	ret = batadv_dat_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_dat;
++	}
+ 
+ 	ret = batadv_nc_mesh_init(bat_priv);
+-	if (ret < 0)
+-		goto err;
++	if (ret < 0) {
++		atomic_set(&bat_priv->mesh_state, BATADV_MESH_DEACTIVATING);
++		goto err_nc;
++	}
+ 
+ 	batadv_gw_init(bat_priv);
+ 	batadv_mcast_init(bat_priv);
+@@ -222,8 +234,20 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 
+ 	return 0;
+ 
+-err:
+-	batadv_mesh_free(soft_iface);
++err_nc:
++	batadv_dat_free(bat_priv);
++err_dat:
++	batadv_bla_free(bat_priv);
++err_bla:
++	batadv_v_mesh_free(bat_priv);
++err_v:
++	batadv_tt_free(bat_priv);
++err_tt:
++	batadv_originator_free(bat_priv);
++err_orig:
++	batadv_purge_outstanding_packets(bat_priv, NULL);
++	atomic_set(&bat_priv->mesh_state, BATADV_MESH_INACTIVE);
++
+ 	return ret;
+ }
+ 
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index 4bb76b434d071..b175043efdaf6 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -152,8 +152,10 @@ int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
+ 				   &batadv_nc_coding_hash_lock_class_key);
+ 
+ 	bat_priv->nc.decoding_hash = batadv_hash_new(128);
+-	if (!bat_priv->nc.decoding_hash)
++	if (!bat_priv->nc.decoding_hash) {
++		batadv_hash_destroy(bat_priv->nc.coding_hash);
+ 		goto err;
++	}
+ 
+ 	batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
+ 				   &batadv_nc_decoding_hash_lock_class_key);
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 434b4f0429092..87626894a3468 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -4193,8 +4193,10 @@ int batadv_tt_init(struct batadv_priv *bat_priv)
+ 		return ret;
+ 
+ 	ret = batadv_tt_global_init(bat_priv);
+-	if (ret < 0)
++	if (ret < 0) {
++		batadv_tt_local_table_free(bat_priv);
+ 		return ret;
++	}
+ 
+ 	batadv_tvlv_handler_register(bat_priv, batadv_tt_tvlv_ogm_handler_v1,
+ 				     batadv_tt_tvlv_unicast_handler_v1,
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 693f15a056304..9cb47618d4869 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3246,6 +3246,12 @@ static u16 skb_tx_hash(const struct net_device *dev,
+ 
+ 		qoffset = sb_dev->tc_to_txq[tc].offset;
+ 		qcount = sb_dev->tc_to_txq[tc].count;
++		if (unlikely(!qcount)) {
++			net_warn_ratelimited("%s: invalid qcount, qoffset %u for tc %u\n",
++					     sb_dev->name, qoffset, tc);
++			qoffset = 0;
++			qcount = dev->real_num_tx_queues;
++		}
+ 	}
+ 
+ 	if (skb_rx_queue_recorded(skb)) {
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index f6197774048b6..b2e49eb7001d6 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1973,9 +1973,9 @@ int netdev_register_kobject(struct net_device *ndev)
+ int netdev_change_owner(struct net_device *ndev, const struct net *net_old,
+ 			const struct net *net_new)
+ {
++	kuid_t old_uid = GLOBAL_ROOT_UID, new_uid = GLOBAL_ROOT_UID;
++	kgid_t old_gid = GLOBAL_ROOT_GID, new_gid = GLOBAL_ROOT_GID;
+ 	struct device *dev = &ndev->dev;
+-	kuid_t old_uid, new_uid;
+-	kgid_t old_gid, new_gid;
+ 	int error;
+ 
+ 	net_ns_get_ownership(net_old, &old_uid, &old_gid);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index d3e9386b493eb..9d068153c3168 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -232,6 +232,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ 	bool cork = false, enospc = sk_msg_full(msg);
+ 	struct sock *sk_redir;
+ 	u32 tosend, delta = 0;
++	u32 eval = __SK_NONE;
+ 	int ret;
+ 
+ more_data:
+@@ -275,13 +276,24 @@ more_data:
+ 	case __SK_REDIRECT:
+ 		sk_redir = psock->sk_redir;
+ 		sk_msg_apply_bytes(psock, tosend);
++		if (!psock->apply_bytes) {
++			/* Clean up before releasing the sock lock. */
++			eval = psock->eval;
++			psock->eval = __SK_NONE;
++			psock->sk_redir = NULL;
++		}
+ 		if (psock->cork) {
+ 			cork = true;
+ 			psock->cork = NULL;
+ 		}
+ 		sk_msg_return(sk, msg, tosend);
+ 		release_sock(sk);
++
+ 		ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
++
++		if (eval == __SK_REDIRECT)
++			sock_put(sk_redir);
++
+ 		lock_sock(sk);
+ 		if (unlikely(ret < 0)) {
+ 			int free = sk_msg_free_nocharge(sk, msg);
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 97095b7c9c648..5dcfd53a4ab6c 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -672,7 +672,7 @@ ieee80211_mesh_update_bss_params(struct ieee80211_sub_if_data *sdata,
+ 				 u8 *ie, u8 ie_len)
+ {
+ 	struct ieee80211_supported_band *sband;
+-	const u8 *cap;
++	const struct element *cap;
+ 	const struct ieee80211_he_operation *he_oper = NULL;
+ 
+ 	sband = ieee80211_get_sband(sdata);
+@@ -687,9 +687,10 @@ ieee80211_mesh_update_bss_params(struct ieee80211_sub_if_data *sdata,
+ 
+ 	sdata->vif.bss_conf.he_support = true;
+ 
+-	cap = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_OPERATION, ie, ie_len);
+-	if (cap && cap[1] >= ieee80211_he_oper_size(&cap[3]))
+-		he_oper = (void *)(cap + 3);
++	cap = cfg80211_find_ext_elem(WLAN_EID_EXT_HE_OPERATION, ie, ie_len);
++	if (cap && cap->datalen >= 1 + sizeof(*he_oper) &&
++	    cap->datalen >= 1 + ieee80211_he_oper_size(cap->data + 1))
++		he_oper = (void *)(cap->data + 1);
+ 
+ 	if (he_oper)
+ 		sdata->vif.bss_conf.he_oper.params =
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 32df65f68c123..fb3da4d8f4a34 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -156,6 +156,12 @@ static enum sctp_disposition __sctp_sf_do_9_1_abort(
+ 					void *arg,
+ 					struct sctp_cmd_seq *commands);
+ 
++static enum sctp_disposition
++__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			   const struct sctp_association *asoc,
++			   const union sctp_subtype type, void *arg,
++			   struct sctp_cmd_seq *commands);
++
+ /* Small helper function that checks if the chunk length
+  * is of the appropriate length.  The 'required_length' argument
+  * is set to be the size of a specific chunk we are testing.
+@@ -337,6 +343,14 @@ enum sctp_disposition sctp_sf_do_5_1B_init(struct net *net,
+ 	if (!chunk->singleton)
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
++	/* Make sure that the INIT chunk has a valid length.
++	 * Normally, this would cause an ABORT with a Protocol Violation
++	 * error, but since we don't have an association, we'll
++	 * just discard the packet.
++	 */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* If the packet is an OOTB packet which is temporarily on the
+ 	 * control endpoint, respond with an ABORT.
+ 	 */
+@@ -351,14 +365,6 @@ enum sctp_disposition sctp_sf_do_5_1B_init(struct net *net,
+ 	if (chunk->sctp_hdr->vtag != 0)
+ 		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
+ 
+-	/* Make sure that the INIT chunk has a valid length.
+-	 * Normally, this would cause an ABORT with a Protocol Violation
+-	 * error, but since we don't have an association, we'll
+-	 * just discard the packet.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+-
+ 	/* If the INIT is coming toward a closing socket, we'll send back
+ 	 * and ABORT.  Essentially, this catches the race of INIT being
+ 	 * backloged to the socket at the same time as the user issues close().
+@@ -704,6 +710,9 @@ enum sctp_disposition sctp_sf_do_5_1D_ce(struct net *net,
+ 	struct sock *sk;
+ 	int error = 0;
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* If the packet is an OOTB packet which is temporarily on the
+ 	 * control endpoint, respond with an ABORT.
+ 	 */
+@@ -718,7 +727,8 @@ enum sctp_disposition sctp_sf_do_5_1D_ce(struct net *net,
+ 	 * in sctp_unpack_cookie().
+ 	 */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
+ 
+ 	/* If the endpoint is not listening or if the number of associations
+ 	 * on the TCP-style socket exceed the max backlog, respond with an
+@@ -1524,20 +1534,16 @@ static enum sctp_disposition sctp_sf_do_unexpected_init(
+ 	if (!chunk->singleton)
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
++	/* Make sure that the INIT chunk has a valid length. */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* 3.1 A packet containing an INIT chunk MUST have a zero Verification
+ 	 * Tag.
+ 	 */
+ 	if (chunk->sctp_hdr->vtag != 0)
+ 		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
+ 
+-	/* Make sure that the INIT chunk has a valid length.
+-	 * In this case, we generate a protocol violation since we have
+-	 * an association established.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
+-
+ 	if (SCTP_INPUT_CB(chunk->skb)->encap_port != chunk->transport->encap_port)
+ 		return sctp_sf_new_encap_port(net, ep, asoc, type, arg, commands);
+ 
+@@ -1882,9 +1888,9 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ 	 * its peer.
+ 	*/
+ 	if (sctp_state(asoc, SHUTDOWN_ACK_SENT)) {
+-		disposition = sctp_sf_do_9_2_reshutack(net, ep, asoc,
+-				SCTP_ST_CHUNK(chunk->chunk_hdr->type),
+-				chunk, commands);
++		disposition = __sctp_sf_do_9_2_reshutack(net, ep, asoc,
++							 SCTP_ST_CHUNK(chunk->chunk_hdr->type),
++							 chunk, commands);
+ 		if (SCTP_DISPOSITION_NOMEM == disposition)
+ 			goto nomem;
+ 
+@@ -2202,9 +2208,11 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+ 	 * enough for the chunk header.  Cookie length verification is
+ 	 * done later.
+ 	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr))) {
++		if (!sctp_vtag_verify(chunk, asoc))
++			asoc = NULL;
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, commands);
++	}
+ 
+ 	/* "Decode" the chunk.  We have no optional parameters so we
+ 	 * are in good shape.
+@@ -2341,7 +2349,7 @@ enum sctp_disposition sctp_sf_shutdown_pending_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -2387,7 +2395,7 @@ enum sctp_disposition sctp_sf_shutdown_sent_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -2657,7 +2665,7 @@ enum sctp_disposition sctp_sf_do_9_1_abort(
+ 	 */
+ 	if (SCTP_ADDR_DEL ==
+ 		    sctp_bind_addr_state(&asoc->base.bind_addr, &chunk->dest))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	if (!sctp_err_chunk_valid(chunk))
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+@@ -2970,13 +2978,11 @@ enum sctp_disposition sctp_sf_do_9_2_shut_ctsn(
+  * that belong to this association, it should discard the INIT chunk and
+  * retransmit the SHUTDOWN ACK chunk.
+  */
+-enum sctp_disposition sctp_sf_do_9_2_reshutack(
+-					struct net *net,
+-					const struct sctp_endpoint *ep,
+-					const struct sctp_association *asoc,
+-					const union sctp_subtype type,
+-					void *arg,
+-					struct sctp_cmd_seq *commands)
++static enum sctp_disposition
++__sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			   const struct sctp_association *asoc,
++			   const union sctp_subtype type, void *arg,
++			   struct sctp_cmd_seq *commands)
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 	struct sctp_chunk *reply;
+@@ -3010,6 +3016,26 @@ nomem:
+ 	return SCTP_DISPOSITION_NOMEM;
+ }
+ 
++enum sctp_disposition
++sctp_sf_do_9_2_reshutack(struct net *net, const struct sctp_endpoint *ep,
++			 const struct sctp_association *asoc,
++			 const union sctp_subtype type, void *arg,
++			 struct sctp_cmd_seq *commands)
++{
++	struct sctp_chunk *chunk = arg;
++
++	if (!chunk->singleton)
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_init_chunk)))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
++	if (chunk->sctp_hdr->vtag != 0)
++		return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands);
++
++	return __sctp_sf_do_9_2_reshutack(net, ep, asoc, type, arg, commands);
++}
++
+ /*
+  * sctp_sf_do_ecn_cwr
+  *
+@@ -3662,6 +3688,9 @@ enum sctp_disposition sctp_sf_ootb(struct net *net,
+ 
+ 	SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES);
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		asoc = NULL;
++
+ 	ch = (struct sctp_chunkhdr *)chunk->chunk_hdr;
+ 	do {
+ 		/* Report violation if the chunk is less then minimal */
+@@ -3777,12 +3806,6 @@ static enum sctp_disposition sctp_sf_shut_8_4_5(
+ 
+ 	SCTP_INC_STATS(net, SCTP_MIB_OUTCTRLCHUNKS);
+ 
+-	/* If the chunk length is invalid, we don't want to process
+-	 * the reset of the packet.
+-	 */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+-		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+-
+ 	/* We need to discard the rest of the packet to prevent
+ 	 * potential boomming attacks from additional bundled chunks.
+ 	 * This is documented in SCTP Threats ID.
+@@ -3810,6 +3833,9 @@ enum sctp_disposition sctp_sf_do_8_5_1_E_sa(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (!sctp_vtag_verify(chunk, asoc))
++		asoc = NULL;
++
+ 	/* Make sure that the SHUTDOWN_ACK chunk has a valid length. */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+ 		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+@@ -3845,6 +3871,11 @@ enum sctp_disposition sctp_sf_do_asconf(struct net *net,
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 	}
+ 
++	/* Make sure that the ASCONF ADDIP chunk has a valid length.  */
++	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
++
+ 	/* ADD-IP: Section 4.1.1
+ 	 * This chunk MUST be sent in an authenticated way by using
+ 	 * the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
+@@ -3853,13 +3884,7 @@ enum sctp_disposition sctp_sf_do_asconf(struct net *net,
+ 	 */
+ 	if (!asoc->peer.asconf_capable ||
+ 	    (!net->sctp.addip_noauth && !chunk->auth))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
+-					     commands);
+-
+-	/* Make sure that the ASCONF ADDIP chunk has a valid length.  */
+-	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_addip_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	hdr = (struct sctp_addiphdr *)chunk->skb->data;
+ 	serial = ntohl(hdr->serial);
+@@ -3988,6 +4013,12 @@ enum sctp_disposition sctp_sf_do_asconf_ack(struct net *net,
+ 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 	}
+ 
++	/* Make sure that the ADDIP chunk has a valid length.  */
++	if (!sctp_chunk_length_valid(asconf_ack,
++				     sizeof(struct sctp_addip_chunk)))
++		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
++						  commands);
++
+ 	/* ADD-IP, Section 4.1.2:
+ 	 * This chunk MUST be sent in an authenticated way by using
+ 	 * the mechanism defined in [I-D.ietf-tsvwg-sctp-auth]. If this chunk
+@@ -3996,14 +4027,7 @@ enum sctp_disposition sctp_sf_do_asconf_ack(struct net *net,
+ 	 */
+ 	if (!asoc->peer.asconf_capable ||
+ 	    (!net->sctp.addip_noauth && !asconf_ack->auth))
+-		return sctp_sf_discard_chunk(net, ep, asoc, type, arg,
+-					     commands);
+-
+-	/* Make sure that the ADDIP chunk has a valid length.  */
+-	if (!sctp_chunk_length_valid(asconf_ack,
+-				     sizeof(struct sctp_addip_chunk)))
+-		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+-						  commands);
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ 
+ 	addip_hdr = (struct sctp_addiphdr *)asconf_ack->skb->data;
+ 	rcvd_serial = ntohl(addip_hdr->serial);
+@@ -4575,6 +4599,9 @@ enum sctp_disposition sctp_sf_discard_chunk(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (asoc && !sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* Make sure that the chunk has a valid length.
+ 	 * Since we don't know the chunk type, we use a general
+ 	 * chunkhdr structure to make a comparison.
+@@ -4642,6 +4669,9 @@ enum sctp_disposition sctp_sf_violation(struct net *net,
+ {
+ 	struct sctp_chunk *chunk = arg;
+ 
++	if (!sctp_vtag_verify(chunk, asoc))
++		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ 	/* Make sure that the chunk has a valid length. */
+ 	if (!sctp_chunk_length_valid(chunk, sizeof(struct sctp_chunkhdr)))
+ 		return sctp_sf_violation_chunklen(net, ep, asoc, type, arg,
+@@ -6348,6 +6378,7 @@ static struct sctp_packet *sctp_ootb_pkt_new(
+ 		 * yet.
+ 		 */
+ 		switch (chunk->chunk_hdr->type) {
++		case SCTP_CID_INIT:
+ 		case SCTP_CID_INIT_ACK:
+ 		{
+ 			struct sctp_initack_chunk *initack;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index c9391d38de85c..dc60c32bb70df 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2285,43 +2285,53 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ 	u16 key_gen = msg_key_gen(hdr);
+ 	u16 size = msg_data_sz(hdr);
+ 	u8 *data = msg_data(hdr);
++	unsigned int keylen;
++
++	/* Verify whether the size can exist in the packet */
++	if (unlikely(size < sizeof(struct tipc_aead_key) + TIPC_AEAD_KEYLEN_MIN)) {
++		pr_debug("%s: message data size is too small\n", rx->name);
++		goto exit;
++	}
++
++	keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
++
++	/* Verify the supplied size values */
++	if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
++		     keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
++		pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
++		goto exit;
++	}
+ 
+ 	spin_lock(&rx->lock);
+ 	if (unlikely(rx->skey || (key_gen == rx->key_gen && rx->key.keys))) {
+ 		pr_err("%s: key existed <%p>, gen %d vs %d\n", rx->name,
+ 		       rx->skey, key_gen, rx->key_gen);
+-		goto exit;
++		goto exit_unlock;
+ 	}
+ 
+ 	/* Allocate memory for the key */
+ 	skey = kmalloc(size, GFP_ATOMIC);
+ 	if (unlikely(!skey)) {
+ 		pr_err("%s: unable to allocate memory for skey\n", rx->name);
+-		goto exit;
++		goto exit_unlock;
+ 	}
+ 
+ 	/* Copy key from msg data */
+-	skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
++	skey->keylen = keylen;
+ 	memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
+ 	memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),
+ 	       skey->keylen);
+ 
+-	/* Sanity check */
+-	if (unlikely(size != tipc_aead_key_size(skey))) {
+-		kfree(skey);
+-		skey = NULL;
+-		goto exit;
+-	}
+-
+ 	rx->key_gen = key_gen;
+ 	rx->skey_mode = msg_key_mode(hdr);
+ 	rx->skey = skey;
+ 	rx->nokey = 0;
+ 	mb(); /* for nokey flag */
+ 
+-exit:
++exit_unlock:
+ 	spin_unlock(&rx->lock);
+ 
++exit:
+ 	/* Schedule the key attaching on this crypto */
+ 	if (likely(skey && queue_delayed_work(tx->wq, &rx->work, 0)))
+ 		return true;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 4feb95e34b64b..3580e73fb317f 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -35,6 +35,7 @@
+  * SOFTWARE.
+  */
+ 
++#include <linux/bug.h>
+ #include <linux/sched/signal.h>
+ #include <linux/module.h>
+ #include <linux/splice.h>
+@@ -43,6 +44,14 @@
+ #include <net/strparser.h>
+ #include <net/tls.h>
+ 
++noinline void tls_err_abort(struct sock *sk, int err)
++{
++	WARN_ON_ONCE(err >= 0);
++	/* sk->sk_err should contain a positive error code. */
++	sk->sk_err = -err;
++	sk_error_report(sk);
++}
++
+ static int __skb_nsg(struct sk_buff *skb, int offset, int len,
+                      unsigned int recursion_level)
+ {
+@@ -419,7 +428,7 @@ int tls_tx_records(struct sock *sk, int flags)
+ 
+ tx_err:
+ 	if (rc < 0 && rc != -EAGAIN)
+-		tls_err_abort(sk, EBADMSG);
++		tls_err_abort(sk, -EBADMSG);
+ 
+ 	return rc;
+ }
+@@ -450,7 +459,7 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
+ 
+ 		/* If err is already set on socket, return the same code */
+ 		if (sk->sk_err) {
+-			ctx->async_wait.err = sk->sk_err;
++			ctx->async_wait.err = -sk->sk_err;
+ 		} else {
+ 			ctx->async_wait.err = err;
+ 			tls_err_abort(sk, err);
+@@ -763,7 +772,7 @@ static int tls_push_record(struct sock *sk, int flags,
+ 			       msg_pl->sg.size + prot->tail_size, i);
+ 	if (rc < 0) {
+ 		if (rc != -EINPROGRESS) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			if (split) {
+ 				tls_ctx->pending_open_record_frags = true;
+ 				tls_merge_open_record(sk, rec, tmp, orig_end);
+@@ -1827,7 +1836,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		err = decrypt_skb_update(sk, skb, &msg->msg_iter,
+ 					 &chunk, &zc, async_capable);
+ 		if (err < 0 && err != -EINPROGRESS) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			goto recv_end;
+ 		}
+ 
+@@ -2007,7 +2016,7 @@ ssize_t tls_sw_splice_read(struct socket *sock,  loff_t *ppos,
+ 		}
+ 
+ 		if (err < 0) {
+-			tls_err_abort(sk, EBADMSG);
++			tls_err_abort(sk, -EBADMSG);
+ 			goto splice_read_end;
+ 		}
+ 		ctx->decrypted = 1;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 03323121ca505..aaba847d79eb2 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -524,6 +524,7 @@ use_default_name:
+ 	INIT_WORK(&rdev->propagate_cac_done_wk, cfg80211_propagate_cac_done_wk);
+ 	INIT_WORK(&rdev->mgmt_registrations_update_wk,
+ 		  cfg80211_mgmt_registrations_update_wk);
++	spin_lock_init(&rdev->mgmt_registrations_lock);
+ 
+ #ifdef CONFIG_CFG80211_DEFAULT_PS
+ 	rdev->wiphy.flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
+@@ -1279,7 +1280,6 @@ void cfg80211_init_wdev(struct wireless_dev *wdev)
+ 	INIT_LIST_HEAD(&wdev->event_list);
+ 	spin_lock_init(&wdev->event_lock);
+ 	INIT_LIST_HEAD(&wdev->mgmt_registrations);
+-	spin_lock_init(&wdev->mgmt_registrations_lock);
+ 	INIT_LIST_HEAD(&wdev->pmsr_list);
+ 	spin_lock_init(&wdev->pmsr_lock);
+ 	INIT_WORK(&wdev->pmsr_free_wk, cfg80211_pmsr_free_wk);
+diff --git a/net/wireless/core.h b/net/wireless/core.h
+index b35d0db12f1d5..1720abf36f92a 100644
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -100,6 +100,8 @@ struct cfg80211_registered_device {
+ 	struct work_struct propagate_cac_done_wk;
+ 
+ 	struct work_struct mgmt_registrations_update_wk;
++	/* lock for all wdev lists */
++	spinlock_t mgmt_registrations_lock;
+ 
+ 	/* must be last because of the way we do wiphy_priv(),
+ 	 * and it should at least be aligned to NETDEV_ALIGN */
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 3aa69b375a107..783acd2c4211f 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -452,9 +452,9 @@ static void cfg80211_mgmt_registrations_update(struct wireless_dev *wdev)
+ 
+ 	lockdep_assert_held(&rdev->wiphy.mtx);
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 	if (!wdev->mgmt_registrations_need_update) {
+-		spin_unlock_bh(&wdev->mgmt_registrations_lock);
++		spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 		return;
+ 	}
+ 
+@@ -479,7 +479,7 @@ static void cfg80211_mgmt_registrations_update(struct wireless_dev *wdev)
+ 	rcu_read_unlock();
+ 
+ 	wdev->mgmt_registrations_need_update = 0;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	rdev_update_mgmt_frame_registrations(rdev, wdev, &upd);
+ }
+@@ -503,6 +503,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 				int match_len, bool multicast_rx,
+ 				struct netlink_ext_ack *extack)
+ {
++	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *nreg;
+ 	int err = 0;
+ 	u16 mgmt_type;
+@@ -548,7 +549,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 	if (!nreg)
+ 		return -ENOMEM;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
+ 		int mlen = min(match_len, reg->match_len);
+@@ -583,7 +584,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 		list_add(&nreg->list, &wdev->mgmt_registrations);
+ 	}
+ 	wdev->mgmt_registrations_need_update = 1;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	cfg80211_mgmt_registrations_update(wdev);
+ 
+@@ -591,7 +592,7 @@ int cfg80211_mlme_register_mgmt(struct wireless_dev *wdev, u32 snd_portid,
+ 
+  out:
+ 	kfree(nreg);
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	return err;
+ }
+@@ -602,7 +603,7 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *tmp;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
+ 		if (reg->nlportid != nlportid)
+@@ -615,7 +616,7 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 		schedule_work(&rdev->mgmt_registrations_update_wk);
+ 	}
+ 
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	if (nlportid && rdev->crit_proto_nlportid == nlportid) {
+ 		rdev->crit_proto_nlportid = 0;
+@@ -628,15 +629,16 @@ void cfg80211_mlme_unregister_socket(struct wireless_dev *wdev, u32 nlportid)
+ 
+ void cfg80211_mlme_purge_registrations(struct wireless_dev *wdev)
+ {
++	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
+ 	struct cfg80211_mgmt_registration *reg, *tmp;
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 	list_for_each_entry_safe(reg, tmp, &wdev->mgmt_registrations, list) {
+ 		list_del(&reg->list);
+ 		kfree(reg);
+ 	}
+ 	wdev->mgmt_registrations_need_update = 1;
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	cfg80211_mgmt_registrations_update(wdev);
+ }
+@@ -784,7 +786,7 @@ bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm,
+ 	data = buf + ieee80211_hdrlen(mgmt->frame_control);
+ 	data_len = len - ieee80211_hdrlen(mgmt->frame_control);
+ 
+-	spin_lock_bh(&wdev->mgmt_registrations_lock);
++	spin_lock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	list_for_each_entry(reg, &wdev->mgmt_registrations, list) {
+ 		if (reg->frame_type != ftype)
+@@ -808,7 +810,7 @@ bool cfg80211_rx_mgmt_khz(struct wireless_dev *wdev, int freq, int sig_dbm,
+ 		break;
+ 	}
+ 
+-	spin_unlock_bh(&wdev->mgmt_registrations_lock);
++	spin_unlock_bh(&rdev->mgmt_registrations_lock);
+ 
+ 	trace_cfg80211_return_bool(result);
+ 	return result;
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 7897b1478c3c0..d5bd9f015d8bc 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -418,14 +418,17 @@ cfg80211_add_nontrans_list(struct cfg80211_bss *trans_bss,
+ 	}
+ 	ssid_len = ssid[1];
+ 	ssid = ssid + 2;
+-	rcu_read_unlock();
+ 
+ 	/* check if nontrans_bss is in the list */
+ 	list_for_each_entry(bss, &trans_bss->nontrans_list, nontrans_list) {
+-		if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len))
++		if (is_bss(bss, nontrans_bss->bssid, ssid, ssid_len)) {
++			rcu_read_unlock();
+ 			return 0;
++		}
+ 	}
+ 
++	rcu_read_unlock();
++
+ 	/* add to the list */
+ 	list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list);
+ 	return 0;
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 18dba3d7c638b..a1a99a5749844 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1028,14 +1028,14 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ 	    !(rdev->wiphy.interface_modes & (1 << ntype)))
+ 		return -EOPNOTSUPP;
+ 
+-	/* if it's part of a bridge, reject changing type to station/ibss */
+-	if (netif_is_bridge_port(dev) &&
+-	    (ntype == NL80211_IFTYPE_ADHOC ||
+-	     ntype == NL80211_IFTYPE_STATION ||
+-	     ntype == NL80211_IFTYPE_P2P_CLIENT))
+-		return -EBUSY;
+-
+ 	if (ntype != otype) {
++		/* if it's part of a bridge, reject changing type to station/ibss */
++		if (netif_is_bridge_port(dev) &&
++		    (ntype == NL80211_IFTYPE_ADHOC ||
++		     ntype == NL80211_IFTYPE_STATION ||
++		     ntype == NL80211_IFTYPE_P2P_CLIENT))
++			return -EBUSY;
++
+ 		dev->ieee80211_ptr->use_4addr = false;
+ 		dev->ieee80211_ptr->mesh_id_up_len = 0;
+ 		wdev_lock(dev->ieee80211_ptr);
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 064da7f3618d3..6b6dbd84cdeba 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -469,7 +469,7 @@ static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
+ 		return -EINVAL;
+ 
+ 	if (PRINT_FIELD(WEIGHT) &&
+-	    evsel__check_stype(evsel, PERF_SAMPLE_WEIGHT, "WEIGHT", PERF_OUTPUT_WEIGHT))
++	    evsel__check_stype(evsel, PERF_SAMPLE_WEIGHT_TYPE, "WEIGHT", PERF_OUTPUT_WEIGHT))
+ 		return -EINVAL;
+ 
+ 	if (PRINT_FIELD(SYM) &&
+@@ -4024,11 +4024,15 @@ script_found:
+ 		goto out_delete;
+ 
+ 	uname(&uts);
+-	if (data.is_pipe ||  /* assume pipe_mode indicates native_arch */
+-	    !strcmp(uts.machine, session->header.env.arch) ||
+-	    (!strcmp(uts.machine, "x86_64") &&
+-	     !strcmp(session->header.env.arch, "i386")))
++	if (data.is_pipe) { /* Assume pipe_mode indicates native_arch */
+ 		native_arch = true;
++	} else if (session->header.env.arch) {
++		if (!strcmp(uts.machine, session->header.env.arch))
++			native_arch = true;
++		else if (!strcmp(uts.machine, "x86_64") &&
++			 !strcmp(session->header.env.arch, "i386"))
++			native_arch = true;
++	}
+ 
+ 	script.session = session;
+ 	script__setup_sample_type(&script);



* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-06 13:41 Mike Pagano
From: Mike Pagano @ 2021-11-06 13:41 UTC
  To: gentoo-commits

commit:     257adff21499e9d222d24ca4de3f19cc03122e47
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov  6 13:41:36 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov  6 13:41:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=257adff2

Linux patch 5.14.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1016_linux-5.14.17.patch | 682 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 686 insertions(+)

diff --git a/0000_README b/0000_README
index 8bcce4c..55b967b 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1015_linux-5.14.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.16
 
+Patch:  1016_linux-5.14.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.14.17.patch b/1016_linux-5.14.17.patch
new file mode 100644
index 0000000..d62c7fc
--- /dev/null
+++ b/1016_linux-5.14.17.patch
@@ -0,0 +1,682 @@
+diff --git a/Makefile b/Makefile
+index 02b6dab373ddb..b792b6c178691 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index 939ca220bf78d..518e88780574e 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -379,9 +379,6 @@ static int amba_device_try_add(struct amba_device *dev, struct resource *parent)
+ 	void __iomem *tmp;
+ 	int i, ret;
+ 
+-	WARN_ON(dev->irq[0] == (unsigned int)-1);
+-	WARN_ON(dev->irq[1] == (unsigned int)-1);
+-
+ 	ret = request_resource(parent, &dev->res);
+ 	if (ret)
+ 		goto err_out;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index a1c5bd2859fc3..d90dc5efc3340 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1073,8 +1073,6 @@ struct amdgpu_device {
+ 	char				product_name[32];
+ 	char				serial[20];
+ 
+-	struct amdgpu_autodump		autodump;
+-
+ 	atomic_t			throttling_logging_enabled;
+ 	struct ratelimit_state		throttling_logging_rs;
+ 	uint32_t                        ras_hw_enabled;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 83db7d8fa1508..a0f197eaaec0a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -27,7 +27,6 @@
+ #include <linux/pci.h>
+ #include <linux/uaccess.h>
+ #include <linux/pm_runtime.h>
+-#include <linux/poll.h>
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_pm.h"
+@@ -37,85 +36,7 @@
+ #include "amdgpu_securedisplay.h"
+ #include "amdgpu_fw_attestation.h"
+ 
+-int amdgpu_debugfs_wait_dump(struct amdgpu_device *adev)
+-{
+ #if defined(CONFIG_DEBUG_FS)
+-	unsigned long timeout = 600 * HZ;
+-	int ret;
+-
+-	wake_up_interruptible(&adev->autodump.gpu_hang);
+-
+-	ret = wait_for_completion_interruptible_timeout(&adev->autodump.dumping, timeout);
+-	if (ret == 0) {
+-		pr_err("autodump: timeout, move on to gpu recovery\n");
+-		return -ETIMEDOUT;
+-	}
+-#endif
+-	return 0;
+-}
+-
+-#if defined(CONFIG_DEBUG_FS)
+-
+-static int amdgpu_debugfs_autodump_open(struct inode *inode, struct file *file)
+-{
+-	struct amdgpu_device *adev = inode->i_private;
+-	int ret;
+-
+-	file->private_data = adev;
+-
+-	ret = down_read_killable(&adev->reset_sem);
+-	if (ret)
+-		return ret;
+-
+-	if (adev->autodump.dumping.done) {
+-		reinit_completion(&adev->autodump.dumping);
+-		ret = 0;
+-	} else {
+-		ret = -EBUSY;
+-	}
+-
+-	up_read(&adev->reset_sem);
+-
+-	return ret;
+-}
+-
+-static int amdgpu_debugfs_autodump_release(struct inode *inode, struct file *file)
+-{
+-	struct amdgpu_device *adev = file->private_data;
+-
+-	complete_all(&adev->autodump.dumping);
+-	return 0;
+-}
+-
+-static unsigned int amdgpu_debugfs_autodump_poll(struct file *file, struct poll_table_struct *poll_table)
+-{
+-	struct amdgpu_device *adev = file->private_data;
+-
+-	poll_wait(file, &adev->autodump.gpu_hang, poll_table);
+-
+-	if (amdgpu_in_reset(adev))
+-		return POLLIN | POLLRDNORM | POLLWRNORM;
+-
+-	return 0;
+-}
+-
+-static const struct file_operations autodump_debug_fops = {
+-	.owner = THIS_MODULE,
+-	.open = amdgpu_debugfs_autodump_open,
+-	.poll = amdgpu_debugfs_autodump_poll,
+-	.release = amdgpu_debugfs_autodump_release,
+-};
+-
+-static void amdgpu_debugfs_autodump_init(struct amdgpu_device *adev)
+-{
+-	init_completion(&adev->autodump.dumping);
+-	complete_all(&adev->autodump.dumping);
+-	init_waitqueue_head(&adev->autodump.gpu_hang);
+-
+-	debugfs_create_file("amdgpu_autodump", 0600,
+-		adev_to_drm(adev)->primary->debugfs_root,
+-		adev, &autodump_debug_fops);
+-}
+ 
+ /**
+  * amdgpu_debugfs_process_reg_op - Handle MMIO register reads/writes
+@@ -1588,7 +1509,6 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
+ 	}
+ 
+ 	amdgpu_ras_debugfs_create_all(adev);
+-	amdgpu_debugfs_autodump_init(adev);
+ 	amdgpu_rap_debugfs_init(adev);
+ 	amdgpu_securedisplay_debugfs_init(adev);
+ 	amdgpu_fw_attestation_debugfs_init(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
+index 141a8474e24f2..8b641f40fdf66 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.h
+@@ -26,10 +26,6 @@
+ /*
+  * Debugfs
+  */
+-struct amdgpu_autodump {
+-	struct completion		dumping;
+-	struct wait_queue_head		gpu_hang;
+-};
+ 
+ int amdgpu_debugfs_regs_init(struct amdgpu_device *adev);
+ int amdgpu_debugfs_init(struct amdgpu_device *adev);
+@@ -37,4 +33,3 @@ void amdgpu_debugfs_fini(struct amdgpu_device *adev);
+ void amdgpu_debugfs_fence_init(struct amdgpu_device *adev);
+ void amdgpu_debugfs_firmware_init(struct amdgpu_device *adev);
+ void amdgpu_debugfs_gem_init(struct amdgpu_device *adev);
+-int amdgpu_debugfs_wait_dump(struct amdgpu_device *adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index cd8cc7d31b49c..08e53ff747282 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2380,10 +2380,6 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
+ 	if (!adev->gmc.xgmi.pending_reset)
+ 		amdgpu_amdkfd_device_init(adev);
+ 
+-	r = amdgpu_amdkfd_resume_iommu(adev);
+-	if (r)
+-		goto init_failed;
+-
+ 	amdgpu_fru_get_product_info(adev);
+ 
+ init_failed:
+@@ -4411,10 +4407,6 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
+ 	if (reset_context->reset_req_dev == adev)
+ 		job = reset_context->job;
+ 
+-	/* no need to dump if device is not in good state during probe period */
+-	if (!adev->gmc.xgmi.pending_reset)
+-		amdgpu_debugfs_wait_dump(adev);
+-
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		/* stop the data exchange thread */
+ 		amdgpu_virt_fini_data_exchange(adev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 5ba8a4f353440..ef64fb8f1bbf5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -875,6 +875,9 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 
+ 	svm_migrate_init((struct amdgpu_device *)kfd->kgd);
+ 
++	if(kgd2kfd_resume_iommu(kfd))
++		goto device_iommu_error;
++
+ 	if (kfd_resume(kfd))
+ 		goto kfd_resume_error;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 4c8edfdc3cac8..f3e2ef74078a2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -247,6 +247,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ {
+ 	struct amdgpu_dm_connector *connector = file_inode(f)->i_private;
+ 	struct dc_link *link = connector->dc_link;
++	struct dc *dc = (struct dc *)link->dc;
+ 	struct dc_link_settings prefer_link_settings;
+ 	char *wr_buf = NULL;
+ 	const uint32_t wr_buf_size = 40;
+@@ -313,7 +314,7 @@ static ssize_t dp_link_settings_write(struct file *f, const char __user *buf,
+ 	prefer_link_settings.lane_count = param[0];
+ 	prefer_link_settings.link_rate = param[1];
+ 
+-	dp_retrain_link_dp_test(link, &prefer_link_settings, false);
++	dc_link_set_preferred_training_settings(dc, &prefer_link_settings, NULL, link, true);
+ 
+ 	kfree(wr_buf);
+ 	return size;
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index fc77592d88a96..4bf1fdd933441 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -2091,10 +2091,6 @@ static void __execlists_unhold(struct i915_request *rq)
+ 			if (p->flags & I915_DEPENDENCY_WEAK)
+ 				continue;
+ 
+-			/* Propagate any change in error status */
+-			if (rq->fence.error)
+-				i915_request_set_error_once(w, rq->fence.error);
+-
+ 			if (w->engine != rq->engine)
+ 				continue;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 5aa5ddefd22d2..f0f073bf72a48 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -11021,12 +11021,6 @@ enum skl_power_gate {
+ #define  DC_STATE_DEBUG_MASK_CORES	(1 << 0)
+ #define  DC_STATE_DEBUG_MASK_MEMORY_UP	(1 << 1)
+ 
+-#define BXT_P_CR_MC_BIOS_REQ_0_0_0	_MMIO(MCHBAR_MIRROR_BASE_SNB + 0x7114)
+-#define  BXT_REQ_DATA_MASK			0x3F
+-#define  BXT_DRAM_CHANNEL_ACTIVE_SHIFT		12
+-#define  BXT_DRAM_CHANNEL_ACTIVE_MASK		(0xF << 12)
+-#define  BXT_MEMORY_FREQ_MULTIPLIER_HZ		133333333
+-
+ #define BXT_D_CR_DRP0_DUNIT8			0x1000
+ #define BXT_D_CR_DRP0_DUNIT9			0x1200
+ #define  BXT_D_CR_DRP0_DUNIT_START		8
+@@ -11057,9 +11051,7 @@ enum skl_power_gate {
+ #define  BXT_DRAM_TYPE_LPDDR4			(0x2 << 22)
+ #define  BXT_DRAM_TYPE_DDR4			(0x4 << 22)
+ 
+-#define SKL_MEMORY_FREQ_MULTIPLIER_HZ		266666666
+ #define SKL_MC_BIOS_DATA_0_0_0_MCHBAR_PCU	_MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5E04)
+-#define  SKL_REQ_DATA_MASK			(0xF << 0)
+ 
+ #define SKL_MAD_INTER_CHANNEL_0_0_0_MCHBAR_MCMAIN _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5000)
+ #define  SKL_DRAM_DDR_TYPE_MASK			(0x3 << 0)
+diff --git a/drivers/gpu/drm/i915/intel_dram.c b/drivers/gpu/drm/i915/intel_dram.c
+index 50fdea84ba706..300dfe239b8cc 100644
+--- a/drivers/gpu/drm/i915/intel_dram.c
++++ b/drivers/gpu/drm/i915/intel_dram.c
+@@ -244,7 +244,6 @@ static int
+ skl_get_dram_info(struct drm_i915_private *i915)
+ {
+ 	struct dram_info *dram_info = &i915->dram_info;
+-	u32 mem_freq_khz, val;
+ 	int ret;
+ 
+ 	dram_info->type = skl_get_dram_type(i915);
+@@ -255,17 +254,6 @@ skl_get_dram_info(struct drm_i915_private *i915)
+ 	if (ret)
+ 		return ret;
+ 
+-	val = intel_uncore_read(&i915->uncore,
+-				SKL_MC_BIOS_DATA_0_0_0_MCHBAR_PCU);
+-	mem_freq_khz = DIV_ROUND_UP((val & SKL_REQ_DATA_MASK) *
+-				    SKL_MEMORY_FREQ_MULTIPLIER_HZ, 1000);
+-
+-	if (dram_info->num_channels * mem_freq_khz == 0) {
+-		drm_info(&i915->drm,
+-			 "Couldn't get system memory bandwidth\n");
+-		return -EINVAL;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -350,24 +338,10 @@ static void bxt_get_dimm_info(struct dram_dimm_info *dimm, u32 val)
+ static int bxt_get_dram_info(struct drm_i915_private *i915)
+ {
+ 	struct dram_info *dram_info = &i915->dram_info;
+-	u32 dram_channels;
+-	u32 mem_freq_khz, val;
+-	u8 num_active_channels, valid_ranks = 0;
++	u32 val;
++	u8 valid_ranks = 0;
+ 	int i;
+ 
+-	val = intel_uncore_read(&i915->uncore, BXT_P_CR_MC_BIOS_REQ_0_0_0);
+-	mem_freq_khz = DIV_ROUND_UP((val & BXT_REQ_DATA_MASK) *
+-				    BXT_MEMORY_FREQ_MULTIPLIER_HZ, 1000);
+-
+-	dram_channels = val & BXT_DRAM_CHANNEL_ACTIVE_MASK;
+-	num_active_channels = hweight32(dram_channels);
+-
+-	if (mem_freq_khz * num_active_channels == 0) {
+-		drm_info(&i915->drm,
+-			 "Couldn't get system memory bandwidth\n");
+-		return -EINVAL;
+-	}
+-
+ 	/*
+ 	 * Now read each DUNIT8/9/10/11 to check the rank of each dimms.
+ 	 */
+diff --git a/drivers/media/firewire/firedtv-avc.c b/drivers/media/firewire/firedtv-avc.c
+index 2bf9467b917d1..71991f8638e6b 100644
+--- a/drivers/media/firewire/firedtv-avc.c
++++ b/drivers/media/firewire/firedtv-avc.c
+@@ -1165,7 +1165,11 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
+ 		read_pos += program_info_length;
+ 		write_pos += program_info_length;
+ 	}
+-	while (read_pos < length) {
++	while (read_pos + 4 < length) {
++		if (write_pos + 4 >= sizeof(c->operand) - 4) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 		c->operand[write_pos++] = msg[read_pos++];
+ 		c->operand[write_pos++] = msg[read_pos++];
+ 		c->operand[write_pos++] = msg[read_pos++];
+@@ -1177,13 +1181,17 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
+ 		c->operand[write_pos++] = es_info_length >> 8;
+ 		c->operand[write_pos++] = es_info_length & 0xff;
+ 		if (es_info_length > 0) {
++			if (read_pos >= length) {
++				ret = -EINVAL;
++				goto out;
++			}
+ 			pmt_cmd_id = msg[read_pos++];
+ 			if (pmt_cmd_id != 1 && pmt_cmd_id != 4)
+ 				dev_err(fdtv->device, "invalid pmt_cmd_id %d at stream level\n",
+ 					pmt_cmd_id);
+ 
+-			if (es_info_length > sizeof(c->operand) - 4 -
+-					     write_pos) {
++			if (es_info_length > sizeof(c->operand) - 4 - write_pos ||
++			    es_info_length > length - read_pos) {
+ 				ret = -EINVAL;
+ 				goto out;
+ 			}
+diff --git a/drivers/media/firewire/firedtv-ci.c b/drivers/media/firewire/firedtv-ci.c
+index 9363d005e2b61..e0d57e09dab0c 100644
+--- a/drivers/media/firewire/firedtv-ci.c
++++ b/drivers/media/firewire/firedtv-ci.c
+@@ -134,6 +134,8 @@ static int fdtv_ca_pmt(struct firedtv *fdtv, void *arg)
+ 	} else {
+ 		data_length = msg->msg[3];
+ 	}
++	if (data_length > sizeof(msg->msg) - data_pos)
++		return -EINVAL;
+ 
+ 	return avc_ca_pmt(fdtv, &msg->msg[data_pos], data_length);
+ }
+diff --git a/drivers/net/ethernet/sfc/ethtool_common.c b/drivers/net/ethernet/sfc/ethtool_common.c
+index bf1443539a1a4..bd552c7dffcb1 100644
+--- a/drivers/net/ethernet/sfc/ethtool_common.c
++++ b/drivers/net/ethernet/sfc/ethtool_common.c
+@@ -563,20 +563,14 @@ int efx_ethtool_get_link_ksettings(struct net_device *net_dev,
+ {
+ 	struct efx_nic *efx = netdev_priv(net_dev);
+ 	struct efx_link_state *link_state = &efx->link_state;
+-	u32 supported;
+ 
+ 	mutex_lock(&efx->mac_lock);
+ 	efx_mcdi_phy_get_link_ksettings(efx, cmd);
+ 	mutex_unlock(&efx->mac_lock);
+ 
+ 	/* Both MACs support pause frames (bidirectional and respond-only) */
+-	ethtool_convert_link_mode_to_legacy_u32(&supported,
+-						cmd->link_modes.supported);
+-
+-	supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+-
+-	ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+-						supported);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Asym_Pause);
+ 
+ 	if (LOOPBACK_INTERNAL(efx)) {
+ 		cmd->base.speed = link_state->speed;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 8bbe2a7bb1412..2b1b944d4b281 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1367,8 +1367,6 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+ 
+-	nf_reset_ct(skb);
+-
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+ 	 * For strict packets with a source LLA, determine the dst using the
+@@ -1431,8 +1429,6 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 	skb->skb_iif = vrf_dev->ifindex;
+ 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
+ 
+-	nf_reset_ct(skb);
+-
+ 	if (ipv4_is_multicast(ip_hdr(skb)->daddr))
+ 		goto out;
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 67f4db662402b..c7592143f2eb9 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -604,15 +604,6 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 				}
+ 			}
+ 		}
+-		/* FIXME: Only enable bmps support when encryption is enabled.
+-		 * For any reasons, when connected to open/no-security BSS,
+-		 * the wcn36xx controller in bmps mode does not forward
+-		 * 'wake-up' beacons despite AP sends DTIM with station AID.
+-		 * It could be due to a firmware issue or to the way driver
+-		 * configure the station.
+-		 */
+-		if (vif->type == NL80211_IFTYPE_STATION)
+-			vif_priv->allow_bmps = true;
+ 		break;
+ 	case DISABLE_KEY:
+ 		if (!(IEEE80211_KEY_FLAG_PAIRWISE & key_conf->flags)) {
+@@ -913,7 +904,6 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
+ 				    vif->addr,
+ 				    bss_conf->aid);
+ 			vif_priv->sta_assoc = false;
+-			vif_priv->allow_bmps = false;
+ 			wcn36xx_smd_set_link_st(wcn,
+ 						bss_conf->bssid,
+ 						vif->addr,
+diff --git a/drivers/net/wireless/ath/wcn36xx/pmc.c b/drivers/net/wireless/ath/wcn36xx/pmc.c
+index 2d0780fefd477..2936aaf532738 100644
+--- a/drivers/net/wireless/ath/wcn36xx/pmc.c
++++ b/drivers/net/wireless/ath/wcn36xx/pmc.c
+@@ -23,10 +23,7 @@ int wcn36xx_pmc_enter_bmps_state(struct wcn36xx *wcn,
+ {
+ 	int ret = 0;
+ 	struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+-
+-	if (!vif_priv->allow_bmps)
+-		return -ENOTSUPP;
+-
++	/* TODO: Make sure the TX chain clean */
+ 	ret = wcn36xx_smd_enter_bmps(wcn, vif);
+ 	if (!ret) {
+ 		wcn36xx_dbg(WCN36XX_DBG_PMC, "Entered BMPS\n");
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 0feb235b5a426..7989aee004194 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -128,7 +128,6 @@ struct wcn36xx_vif {
+ 	enum wcn36xx_hal_bss_type bss_type;
+ 
+ 	/* Power management */
+-	bool allow_bmps;
+ 	enum wcn36xx_power_state pw_state;
+ 
+ 	u8 bss_index;
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index d26025cf5de35..71dd0989c78ab 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -553,8 +553,10 @@ EXPORT_SYMBOL(scsi_device_get);
+  */
+ void scsi_device_put(struct scsi_device *sdev)
+ {
+-	module_put(sdev->host->hostt->module);
++	struct module *mod = sdev->host->hostt->module;
++
+ 	put_device(&sdev->sdev_gendev);
++	module_put(mod);
+ }
+ EXPORT_SYMBOL(scsi_device_put);
+ 
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index c0d31119d6d7b..ed05c3565e831 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -448,9 +448,12 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work)
+ 	struct scsi_vpd *vpd_pg80 = NULL, *vpd_pg83 = NULL;
+ 	struct scsi_vpd *vpd_pg0 = NULL, *vpd_pg89 = NULL;
+ 	unsigned long flags;
++	struct module *mod;
+ 
+ 	sdev = container_of(work, struct scsi_device, ew.work);
+ 
++	mod = sdev->host->hostt->module;
++
+ 	scsi_dh_release_device(sdev);
+ 
+ 	parent = sdev->sdev_gendev.parent;
+@@ -501,11 +504,17 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work)
+ 
+ 	if (parent)
+ 		put_device(parent);
++	module_put(mod);
+ }
+ 
+ static void scsi_device_dev_release(struct device *dev)
+ {
+ 	struct scsi_device *sdp = to_scsi_device(dev);
++
++	/* Set module pointer as NULL in case of module unloading */
++	if (!try_module_get(sdp->host->hostt->module))
++		sdp->host->hostt->module = NULL;
++
+ 	execute_in_process_context(scsi_device_dev_release_usercontext,
+ 				   &sdp->ew);
+ }
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index 34a9ac1f2b9b1..8b7a01773aec2 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -244,6 +244,8 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ 		goto out_regulator_disable;
+ 	}
+ 
++	reset_control_assert(domain->reset);
++
+ 	if (domain->bits.pxx) {
+ 		/* request the domain to power up */
+ 		regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PUP_REQ,
+@@ -266,8 +268,6 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ 				  GPC_PGC_CTRL_PCR);
+ 	}
+ 
+-	reset_control_assert(domain->reset);
+-
+ 	/* delay for reset to propagate */
+ 	udelay(5);
+ 
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 99ff2d23be05e..0f8b7c93310ea 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2775,7 +2775,6 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
+-	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2936,26 +2935,13 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
+-	/* starting here, usbcore will pay attention to the shared HCD roothub */
+-	shared_hcd = hcd->shared_hcd;
+-	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
+-		retval = register_root_hub(shared_hcd);
+-		if (retval != 0)
+-			goto err_register_root_hub;
+-
+-		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
+-			usb_hcd_poll_rh_status(shared_hcd);
+-	}
+-
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	if (!HCD_DEFER_RH_REGISTER(hcd)) {
+-		retval = register_root_hub(hcd);
+-		if (retval != 0)
+-			goto err_register_root_hub;
++	retval = register_root_hub(hcd);
++	if (retval != 0)
++		goto err_register_root_hub;
+ 
+-		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-			usb_hcd_poll_rh_status(hcd);
+-	}
++	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++		usb_hcd_poll_rh_status(hcd);
+ 
+ 	return retval;
+ 
+@@ -2999,7 +2985,6 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
+-	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -3010,7 +2995,6 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
+-	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -3020,8 +3004,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	if (rh_registered)
+-		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index cb730683f898f..4e32b96ccc889 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -692,7 +692,6 @@ int xhci_run(struct usb_hcd *hcd)
+ 		if (ret)
+ 			xhci_free_command(xhci, command);
+ 	}
+-	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 2c1fc9212cf28..548a028f2dabb 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,7 +124,6 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
+-#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -135,7 +134,6 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
+-#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index c5794e83fd800..8f6823df944ff 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -528,6 +528,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x2573, 0x0008),
+ 		.map = maya44_map,
+ 	},
++	{
++		.id = USB_ID(0x2708, 0x0002), /* Audient iD14 */
++		.ignore_ctl_error = 1,
++	},
+ 	{
+ 		/* KEF X300A */
+ 		.id = USB_ID(0x27ac, 0x1000),
+@@ -538,6 +542,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x25c4, 0x0003),
+ 		.map = scms_usb3318_map,
+ 	},
++	{
++		.id = USB_ID(0x30be, 0x0101), /*  Schiit Hel */
++		.ignore_ctl_error = 1,
++	},
+ 	{
+ 		/* Bose Companion 5 */
+ 		.id = USB_ID(0x05a7, 0x1020),
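
One fix in 5.14.17 above worth highlighting is the scsi_device_put()
reordering: put_device() may drop the last reference and free the
scsi_device, after which sdev->host->hostt->module can no longer be read,
so the module pointer is loaded first. A self-contained toy illustration
of that release-ordering rule (hypothetical types, not the kernel API):

	#include <stdlib.h>

	struct module_like { int refcount; };

	struct device_like {
		struct module_like *owner;	/* reached through the device */
		int refcount;
	};

	/* Buggy ordering: once the last reference is dropped, dev may be
	 * freed, so dev->owner must not be dereferenced afterwards. */
	static void put_buggy(struct device_like *dev)
	{
		if (--dev->refcount == 0)
			free(dev);
		/* dev->owner->refcount--;  use-after-free */
	}

	/* Fixed ordering, mirroring the scsi_device_put() change above:
	 * load the owner pointer before dropping the device reference. */
	static void put_fixed(struct device_like *dev)
	{
		struct module_like *owner = dev->owner;

		if (--dev->refcount == 0)
			free(dev);
		owner->refcount--;	/* safe: loaded before the free */
	}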


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-12 14:19 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-12 14:19 UTC (permalink / raw
  To: gentoo-commits

commit:     878ceaa29dbd4f8f69ee544de5f0084208a8daac
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 12 14:19:07 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 12 14:19:07 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=878ceaa2

Linux patch 5.14.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-5.14.18.patch | 1209 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1213 insertions(+)

diff --git a/0000_README b/0000_README
index 55b967b9..092a08df 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1016_linux-5.14.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.17
 
+Patch:  1017_linux-5.14.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.14.18.patch b/1017_linux-5.14.18.patch
new file mode 100644
index 00000000..440b3f2d
--- /dev/null
+++ b/1017_linux-5.14.18.patch
@@ -0,0 +1,1209 @@
+diff --git a/Makefile b/Makefile
+index b792b6c178691..292faf977bb71 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 8c065da73f8e5..4e0f52660842b 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -96,7 +96,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
+ {
+ 	ioapic->rtc_status.pending_eoi = 0;
+-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID + 1);
++	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
+ }
+ 
+ static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
+index 11e4065e16176..660401700075d 100644
+--- a/arch/x86/kvm/ioapic.h
++++ b/arch/x86/kvm/ioapic.h
+@@ -43,13 +43,13 @@ struct kvm_vcpu;
+ 
+ struct dest_map {
+ 	/* vcpu bitmap where IRQ has been sent */
+-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);
++	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
+ 
+ 	/*
+ 	 * Vector sent to a given vcpu, only valid when
+ 	 * the vcpu's bit in map is set
+ 	 */
+-	u8 vectors[KVM_MAX_VCPU_ID + 1];
++	u8 vectors[KVM_MAX_VCPU_ID];
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
+index eb7b227fc6cfe..31d6456d8ac33 100644
+--- a/arch/x86/kvm/mmu/spte.h
++++ b/arch/x86/kvm/mmu/spte.h
+@@ -310,12 +310,7 @@ static inline bool __is_bad_mt_xwr(struct rsvd_bits_validate *rsvd_check,
+ static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
+ 					 u64 spte, int level)
+ {
+-	/*
+-	 * Use a bitwise-OR instead of a logical-OR to aggregate the reserved
+-	 * bits and EPT's invalid memtype/XWR checks to avoid an extra Jcc
+-	 * (this is extremely unlikely to be short-circuited as true).
+-	 */
+-	return __is_bad_mt_xwr(rsvd_check, spte) |
++	return __is_bad_mt_xwr(rsvd_check, spte) ||
+ 	       __is_rsvd_bits_set(rsvd_check, spte, level);
+ }
+ 
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 9edb776249efd..f601b2fbe9d62 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1870,7 +1870,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 		binder_dec_node(buffer->target_node, 1, 0);
+ 
+ 	off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
+-	off_end_offset = is_failure ? failed_at :
++	off_end_offset = is_failure && failed_at ? failed_at :
+ 				off_start_offset + buffer->offsets_size;
+ 	for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
+ 	     buffer_offset += sizeof(binder_size_t)) {
+@@ -1956,9 +1956,8 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ 			binder_size_t fd_buf_size;
+ 			binder_size_t num_valid;
+ 
+-			if (proc->tsk != current->group_leader) {
++			if (is_failure) {
+ 				/*
+-				 * Nothing to do if running in sender context
+ 				 * The fd fixups have not been applied so no
+ 				 * fds need to be closed.
+ 				 */
+@@ -2056,7 +2055,7 @@ static int binder_translate_binder(struct flat_binder_object *fp,
+ 		ret = -EINVAL;
+ 		goto done;
+ 	}
+-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
++	if (security_binder_transfer_binder(proc->cred, target_proc->cred)) {
+ 		ret = -EPERM;
+ 		goto done;
+ 	}
+@@ -2102,7 +2101,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
+ 				  proc->pid, thread->pid, fp->handle);
+ 		return -EINVAL;
+ 	}
+-	if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
++	if (security_binder_transfer_binder(proc->cred, target_proc->cred)) {
+ 		ret = -EPERM;
+ 		goto done;
+ 	}
+@@ -2190,7 +2189,7 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
+ 		ret = -EBADF;
+ 		goto err_fget;
+ 	}
+-	ret = security_binder_transfer_file(proc->tsk, target_proc->tsk, file);
++	ret = security_binder_transfer_file(proc->cred, target_proc->cred, file);
+ 	if (ret < 0) {
+ 		ret = -EPERM;
+ 		goto err_security;
+@@ -2595,8 +2594,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 			return_error_line = __LINE__;
+ 			goto err_invalid_target_handle;
+ 		}
+-		if (security_binder_transaction(proc->tsk,
+-						target_proc->tsk) < 0) {
++		if (security_binder_transaction(proc->cred,
++						target_proc->cred) < 0) {
+ 			return_error = BR_FAILED_REPLY;
+ 			return_error_param = -EPERM;
+ 			return_error_line = __LINE__;
+@@ -2711,7 +2710,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		t->from = thread;
+ 	else
+ 		t->from = NULL;
+-	t->sender_euid = task_euid(proc->tsk);
++	t->sender_euid = proc->cred->euid;
+ 	t->to_proc = target_proc;
+ 	t->to_thread = target_thread;
+ 	t->code = tr->code;
+@@ -2722,16 +2721,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 		u32 secid;
+ 		size_t added_size;
+ 
+-		/*
+-		 * Arguably this should be the task's subjective LSM secid but
+-		 * we can't reliably access the subjective creds of a task
+-		 * other than our own so we must use the objective creds, which
+-		 * are safe to access.  The downside is that if a task is
+-		 * temporarily overriding it's creds it will not be reflected
+-		 * here; however, it isn't clear that binder would handle that
+-		 * case well anyway.
+-		 */
+-		security_task_getsecid_obj(proc->tsk, &secid);
++		security_cred_getsecid(proc->cred, &secid);
+ 		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
+ 		if (ret) {
+ 			return_error = BR_FAILED_REPLY;
+@@ -3185,6 +3175,7 @@ err_invalid_target_handle:
+  * binder_free_buf() - free the specified buffer
+  * @proc:	binder proc that owns buffer
+  * @buffer:	buffer to be freed
++ * @is_failure:	failed to send transaction
+  *
+  * If buffer for an async transaction, enqueue the next async
+  * transaction from the node.
+@@ -3194,7 +3185,7 @@ err_invalid_target_handle:
+ static void
+ binder_free_buf(struct binder_proc *proc,
+ 		struct binder_thread *thread,
+-		struct binder_buffer *buffer)
++		struct binder_buffer *buffer, bool is_failure)
+ {
+ 	binder_inner_proc_lock(proc);
+ 	if (buffer->transaction) {
+@@ -3222,7 +3213,7 @@ binder_free_buf(struct binder_proc *proc,
+ 		binder_node_inner_unlock(buf_node);
+ 	}
+ 	trace_binder_transaction_buffer_release(buffer);
+-	binder_transaction_buffer_release(proc, thread, buffer, 0, false);
++	binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
+ 	binder_alloc_free_buf(&proc->alloc, buffer);
+ }
+ 
+@@ -3424,7 +3415,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ 				     proc->pid, thread->pid, (u64)data_ptr,
+ 				     buffer->debug_id,
+ 				     buffer->transaction ? "active" : "finished");
+-			binder_free_buf(proc, thread, buffer);
++			binder_free_buf(proc, thread, buffer, false);
+ 			break;
+ 		}
+ 
+@@ -4117,7 +4108,7 @@ retry:
+ 			buffer->transaction = NULL;
+ 			binder_cleanup_transaction(t, "fd fixups failed",
+ 						   BR_FAILED_REPLY);
+-			binder_free_buf(proc, thread, buffer);
++			binder_free_buf(proc, thread, buffer, true);
+ 			binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
+ 				     "%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
+ 				     proc->pid, thread->pid,
+@@ -4353,6 +4344,7 @@ static void binder_free_proc(struct binder_proc *proc)
+ 	}
+ 	binder_alloc_deferred_release(&proc->alloc);
+ 	put_task_struct(proc->tsk);
++	put_cred(proc->cred);
+ 	binder_stats_deleted(BINDER_STAT_PROC);
+ 	kfree(proc);
+ }
+@@ -4564,7 +4556,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp,
+ 		ret = -EBUSY;
+ 		goto out;
+ 	}
+-	ret = security_binder_set_context_mgr(proc->tsk);
++	ret = security_binder_set_context_mgr(proc->cred);
+ 	if (ret < 0)
+ 		goto out;
+ 	if (uid_valid(context->binder_context_mgr_uid)) {
+@@ -5055,6 +5047,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
+ 	spin_lock_init(&proc->outer_lock);
+ 	get_task_struct(current->group_leader);
+ 	proc->tsk = current->group_leader;
++	proc->cred = get_cred(filp->f_cred);
+ 	INIT_LIST_HEAD(&proc->todo);
+ 	init_waitqueue_head(&proc->freeze_wait);
+ 	proc->default_priority = task_nice(current);
+diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
+index 402c4d4362a83..d6b6b8cb73465 100644
+--- a/drivers/android/binder_internal.h
++++ b/drivers/android/binder_internal.h
+@@ -364,6 +364,9 @@ struct binder_ref {
+  *                        (invariant after initialized)
+  * @tsk                   task_struct for group_leader of process
+  *                        (invariant after initialized)
++ * @cred                  struct cred associated with the `struct file`
++ *                        in binder_open()
++ *                        (invariant after initialized)
+  * @deferred_work_node:   element for binder_deferred_list
+  *                        (protected by binder_deferred_lock)
+  * @deferred_work:        bitmap of deferred work to perform
+@@ -426,6 +429,7 @@ struct binder_proc {
+ 	struct list_head waiting_threads;
+ 	int pid;
+ 	struct task_struct *tsk;
++	const struct cred *cred;
+ 	struct hlist_node deferred_work_node;
+ 	int deferred_work;
+ 	int outstanding_txns;
+diff --git a/drivers/comedi/drivers/dt9812.c b/drivers/comedi/drivers/dt9812.c
+index 634f57730c1e0..704b04d2980d3 100644
+--- a/drivers/comedi/drivers/dt9812.c
++++ b/drivers/comedi/drivers/dt9812.c
+@@ -32,6 +32,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/errno.h>
++#include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
+ #include "../comedi_usb.h"
+@@ -237,22 +238,42 @@ static int dt9812_read_info(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
++	size_t tbuf_size;
+ 	int count, ret;
++	void *tbuf;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_R_FLASH_DATA);
+-	cmd.u.flash_data_info.address =
++	tbuf_size = max(sizeof(*cmd), buf_size);
++
++	tbuf = kzalloc(tbuf_size, GFP_KERNEL);
++	if (!tbuf)
++		return -ENOMEM;
++
++	cmd = tbuf;
++
++	cmd->cmd = cpu_to_le32(DT9812_R_FLASH_DATA);
++	cmd->u.flash_data_info.address =
+ 	    cpu_to_le16(DT9812_DIAGS_BOARD_INFO_ADDR + offset);
+-	cmd.u.flash_data_info.numbytes = cpu_to_le16(buf_size);
++	cmd->u.flash_data_info.numbytes = cpu_to_le16(buf_size);
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+ 	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			   &cmd, 32, &count, DT9812_USB_TIMEOUT);
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
+ 	if (ret)
+-		return ret;
++		goto out;
++
++	ret = usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
++			   tbuf, buf_size, &count, DT9812_USB_TIMEOUT);
++	if (!ret) {
++		if (count == buf_size)
++			memcpy(buf, tbuf, buf_size);
++		else
++			ret = -EREMOTEIO;
++	}
++out:
++	kfree(tbuf);
+ 
+-	return usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
+-			    buf, buf_size, &count, DT9812_USB_TIMEOUT);
++	return ret;
+ }
+ 
+ static int dt9812_read_multiple_registers(struct comedi_device *dev,
+@@ -261,22 +282,42 @@ static int dt9812_read_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count, ret;
++	size_t buf_size;
++	void *buf;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_R_MULTI_BYTE_REG);
+-	cmd.u.read_multi_info.count = reg_count;
++	buf_size = max_t(size_t, sizeof(*cmd), reg_count);
++
++	buf = kzalloc(buf_size, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	cmd = buf;
++
++	cmd->cmd = cpu_to_le32(DT9812_R_MULTI_BYTE_REG);
++	cmd->u.read_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++)
+-		cmd.u.read_multi_info.address[i] = address[i];
++		cmd->u.read_multi_info.address[i] = address[i];
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+ 	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			   &cmd, 32, &count, DT9812_USB_TIMEOUT);
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
+ 	if (ret)
+-		return ret;
++		goto out;
++
++	ret = usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
++			   buf, reg_count, &count, DT9812_USB_TIMEOUT);
++	if (!ret) {
++		if (count == reg_count)
++			memcpy(value, buf, reg_count);
++		else
++			ret = -EREMOTEIO;
++	}
++out:
++	kfree(buf);
+ 
+-	return usb_bulk_msg(usb, usb_rcvbulkpipe(usb, devpriv->cmd_rd.addr),
+-			    value, reg_count, &count, DT9812_USB_TIMEOUT);
++	return ret;
+ }
+ 
+ static int dt9812_write_multiple_registers(struct comedi_device *dev,
+@@ -285,19 +326,27 @@ static int dt9812_write_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count;
++	int ret;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_W_MULTI_BYTE_REG);
+-	cmd.u.read_multi_info.count = reg_count;
++	cmd->cmd = cpu_to_le32(DT9812_W_MULTI_BYTE_REG);
++	cmd->u.read_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++) {
+-		cmd.u.write_multi_info.write[i].address = address[i];
+-		cmd.u.write_multi_info.write[i].value = value[i];
++		cmd->u.write_multi_info.write[i].address = address[i];
++		cmd->u.write_multi_info.write[i].value = value[i];
+ 	}
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+-	return usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			    &cmd, 32, &count, DT9812_USB_TIMEOUT);
++	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int dt9812_rmw_multiple_registers(struct comedi_device *dev,
+@@ -306,17 +355,25 @@ static int dt9812_rmw_multiple_registers(struct comedi_device *dev,
+ {
+ 	struct usb_device *usb = comedi_to_usb_dev(dev);
+ 	struct dt9812_private *devpriv = dev->private;
+-	struct dt9812_usb_cmd cmd;
++	struct dt9812_usb_cmd *cmd;
+ 	int i, count;
++	int ret;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
+ 
+-	cmd.cmd = cpu_to_le32(DT9812_RMW_MULTI_BYTE_REG);
+-	cmd.u.rmw_multi_info.count = reg_count;
++	cmd->cmd = cpu_to_le32(DT9812_RMW_MULTI_BYTE_REG);
++	cmd->u.rmw_multi_info.count = reg_count;
+ 	for (i = 0; i < reg_count; i++)
+-		cmd.u.rmw_multi_info.rmw[i] = rmw[i];
++		cmd->u.rmw_multi_info.rmw[i] = rmw[i];
+ 
+ 	/* DT9812 only responds to 32 byte writes!! */
+-	return usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
+-			    &cmd, 32, &count, DT9812_USB_TIMEOUT);
++	ret = usb_bulk_msg(usb, usb_sndbulkpipe(usb, devpriv->cmd_wr.addr),
++			   cmd, sizeof(*cmd), &count, DT9812_USB_TIMEOUT);
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int dt9812_digital_in(struct comedi_device *dev, u8 *bits)
+diff --git a/drivers/comedi/drivers/ni_usb6501.c b/drivers/comedi/drivers/ni_usb6501.c
+index 5b6d9d783b2f7..c42987b74b1dc 100644
+--- a/drivers/comedi/drivers/ni_usb6501.c
++++ b/drivers/comedi/drivers/ni_usb6501.c
+@@ -144,6 +144,10 @@ static const u8 READ_COUNTER_RESPONSE[]	= {0x00, 0x01, 0x00, 0x10,
+ 					   0x00, 0x00, 0x00, 0x02,
+ 					   0x00, 0x00, 0x00, 0x00};
+ 
++/* Largest supported packets */
++static const size_t TX_MAX_SIZE	= sizeof(SET_PORT_DIR_REQUEST);
++static const size_t RX_MAX_SIZE	= sizeof(READ_PORT_RESPONSE);
++
+ enum commands {
+ 	READ_PORT,
+ 	WRITE_PORT,
+@@ -501,6 +505,12 @@ static int ni6501_find_endpoints(struct comedi_device *dev)
+ 	if (!devpriv->ep_rx || !devpriv->ep_tx)
+ 		return -ENODEV;
+ 
++	if (usb_endpoint_maxp(devpriv->ep_rx) < RX_MAX_SIZE)
++		return -ENODEV;
++
++	if (usb_endpoint_maxp(devpriv->ep_tx) < TX_MAX_SIZE)
++		return -ENODEV;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/comedi/drivers/vmk80xx.c b/drivers/comedi/drivers/vmk80xx.c
+index 9f920819cd742..4b00a9ea611ab 100644
+--- a/drivers/comedi/drivers/vmk80xx.c
++++ b/drivers/comedi/drivers/vmk80xx.c
+@@ -90,6 +90,9 @@ enum {
+ #define IC3_VERSION		BIT(0)
+ #define IC6_VERSION		BIT(1)
+ 
++#define MIN_BUF_SIZE		64
++#define PACKET_TIMEOUT		10000	/* ms */
++
+ enum vmk80xx_model {
+ 	VMK8055_MODEL,
+ 	VMK8061_MODEL
+@@ -157,22 +160,21 @@ static void vmk80xx_do_bulk_msg(struct comedi_device *dev)
+ 	__u8 rx_addr;
+ 	unsigned int tx_pipe;
+ 	unsigned int rx_pipe;
+-	size_t size;
++	size_t tx_size;
++	size_t rx_size;
+ 
+ 	tx_addr = devpriv->ep_tx->bEndpointAddress;
+ 	rx_addr = devpriv->ep_rx->bEndpointAddress;
+ 	tx_pipe = usb_sndbulkpipe(usb, tx_addr);
+ 	rx_pipe = usb_rcvbulkpipe(usb, rx_addr);
++	tx_size = usb_endpoint_maxp(devpriv->ep_tx);
++	rx_size = usb_endpoint_maxp(devpriv->ep_rx);
+ 
+-	/*
+-	 * The max packet size attributes of the K8061
+-	 * input/output endpoints are identical
+-	 */
+-	size = usb_endpoint_maxp(devpriv->ep_tx);
++	usb_bulk_msg(usb, tx_pipe, devpriv->usb_tx_buf, tx_size, NULL,
++		     PACKET_TIMEOUT);
+ 
+-	usb_bulk_msg(usb, tx_pipe, devpriv->usb_tx_buf,
+-		     size, NULL, devpriv->ep_tx->bInterval);
+-	usb_bulk_msg(usb, rx_pipe, devpriv->usb_rx_buf, size, NULL, HZ * 10);
++	usb_bulk_msg(usb, rx_pipe, devpriv->usb_rx_buf, rx_size, NULL,
++		     PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_read_packet(struct comedi_device *dev)
+@@ -191,7 +193,7 @@ static int vmk80xx_read_packet(struct comedi_device *dev)
+ 	pipe = usb_rcvintpipe(usb, ep->bEndpointAddress);
+ 	return usb_interrupt_msg(usb, pipe, devpriv->usb_rx_buf,
+ 				 usb_endpoint_maxp(ep), NULL,
+-				 HZ * 10);
++				 PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_write_packet(struct comedi_device *dev, int cmd)
+@@ -212,7 +214,7 @@ static int vmk80xx_write_packet(struct comedi_device *dev, int cmd)
+ 	pipe = usb_sndintpipe(usb, ep->bEndpointAddress);
+ 	return usb_interrupt_msg(usb, pipe, devpriv->usb_tx_buf,
+ 				 usb_endpoint_maxp(ep), NULL,
+-				 HZ * 10);
++				 PACKET_TIMEOUT);
+ }
+ 
+ static int vmk80xx_reset_device(struct comedi_device *dev)
+@@ -678,12 +680,12 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
+ 	struct vmk80xx_private *devpriv = dev->private;
+ 	size_t size;
+ 
+-	size = usb_endpoint_maxp(devpriv->ep_rx);
++	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+ 	devpriv->usb_rx_buf = kzalloc(size, GFP_KERNEL);
+ 	if (!devpriv->usb_rx_buf)
+ 		return -ENOMEM;
+ 
+-	size = usb_endpoint_maxp(devpriv->ep_tx);
++	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+ 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
+ 	if (!devpriv->usb_tx_buf)
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 416976f098882..e97f92915ed98 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -61,7 +61,7 @@ static int rsi_usb_card_write(struct rsi_hw *adapter,
+ 			      (void *)seg,
+ 			      (int)len,
+ 			      &transfer,
+-			      HZ * 5);
++			      USB_CTRL_SET_TIMEOUT);
+ 
+ 	if (status < 0) {
+ 		rsi_dbg(ERR_ZONE,
+diff --git a/drivers/staging/media/ipu3/ipu3-css-fw.c b/drivers/staging/media/ipu3/ipu3-css-fw.c
+index 45aff76198e2c..981693eed8155 100644
+--- a/drivers/staging/media/ipu3/ipu3-css-fw.c
++++ b/drivers/staging/media/ipu3/ipu3-css-fw.c
+@@ -124,12 +124,11 @@ int imgu_css_fw_init(struct imgu_css *css)
+ 	/* Check and display fw header info */
+ 
+ 	css->fwp = (struct imgu_fw_header *)css->fw->data;
+-	if (css->fw->size < sizeof(struct imgu_fw_header *) ||
++	if (css->fw->size < struct_size(css->fwp, binary_header, 1) ||
+ 	    css->fwp->file_header.h_size != sizeof(struct imgu_fw_bi_file_h))
+ 		goto bad_fw;
+-	if (sizeof(struct imgu_fw_bi_file_h) +
+-	    css->fwp->file_header.binary_nr * sizeof(struct imgu_fw_info) >
+-	    css->fw->size)
++	if (struct_size(css->fwp, binary_header,
++			css->fwp->file_header.binary_nr) > css->fw->size)
+ 		goto bad_fw;
+ 
+ 	dev_info(dev, "loaded firmware version %.64s, %u binaries, %zu bytes\n",
+diff --git a/drivers/staging/media/ipu3/ipu3-css-fw.h b/drivers/staging/media/ipu3/ipu3-css-fw.h
+index 3c078f15a2959..c0bc57fd678a7 100644
+--- a/drivers/staging/media/ipu3/ipu3-css-fw.h
++++ b/drivers/staging/media/ipu3/ipu3-css-fw.h
+@@ -171,7 +171,7 @@ struct imgu_fw_bi_file_h {
+ 
+ struct imgu_fw_header {
+ 	struct imgu_fw_bi_file_h file_header;
+-	struct imgu_fw_info binary_header[1];	/* binary_nr items */
++	struct imgu_fw_info binary_header[];	/* binary_nr items */
+ };
+ 
+ /******************* Firmware functions *******************/
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index b6698656fc014..cf5cfee2936fd 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -229,7 +229,7 @@ int write_nic_byte_E(struct net_device *dev, int indx, u8 data)
+ 
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+-				 indx | 0xfe00, 0, usbdata, 1, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 1, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -251,7 +251,7 @@ int read_nic_byte_E(struct net_device *dev, int indx, u8 *data)
+ 
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+-				 indx | 0xfe00, 0, usbdata, 1, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 1, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -279,7 +279,7 @@ int write_nic_byte(struct net_device *dev, int indx, u8 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 1, HZ / 2);
++				 usbdata, 1, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -305,7 +305,7 @@ int write_nic_word(struct net_device *dev, int indx, u16 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 2, HZ / 2);
++				 usbdata, 2, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -331,7 +331,7 @@ int write_nic_dword(struct net_device *dev, int indx, u32 data)
+ 	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				 RTL8187_REQ_SET_REGS, RTL8187_REQT_WRITE,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 4, HZ / 2);
++				 usbdata, 4, 500);
+ 	kfree(usbdata);
+ 
+ 	if (status < 0) {
+@@ -355,7 +355,7 @@ int read_nic_byte(struct net_device *dev, int indx, u8 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 1, HZ / 2);
++				 usbdata, 1, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -380,7 +380,7 @@ int read_nic_word(struct net_device *dev, int indx, u16 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 2, HZ / 2);
++				 usbdata, 2, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -404,7 +404,7 @@ static int read_nic_word_E(struct net_device *dev, int indx, u16 *data)
+ 
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+-				 indx | 0xfe00, 0, usbdata, 2, HZ / 2);
++				 indx | 0xfe00, 0, usbdata, 2, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+@@ -430,7 +430,7 @@ int read_nic_dword(struct net_device *dev, int indx, u32 *data)
+ 	status = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ 				 RTL8187_REQ_GET_REGS, RTL8187_REQT_READ,
+ 				 (indx & 0xff) | 0xff00, (indx >> 8) & 0x0f,
+-				 usbdata, 4, HZ / 2);
++				 usbdata, 4, 500);
+ 	*data = *usbdata;
+ 	kfree(usbdata);
+ 
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index 505ebeb643dc2..cae04272deffe 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -595,12 +595,12 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ 
+ 	/* never exit with a firmware callback pending */
+ 	wait_for_completion(&padapter->rtl8712_fw_ready);
++	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++		unregister_netdev(pnetdev); /* will call netdev_close() */
+ 	usb_set_intfdata(pusb_intf, NULL);
+ 	release_firmware(padapter->fw);
+ 	if (drvpriv.drv_registered)
+ 		padapter->surprise_removed = true;
+-	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
+-		unregister_netdev(pnetdev); /* will call netdev_close() */
+ 	r8712_flush_rwctrl_works(padapter);
+ 	r8712_flush_led_works(padapter);
+ 	udelay(1);
+diff --git a/drivers/staging/rtl8712/usb_ops_linux.c b/drivers/staging/rtl8712/usb_ops_linux.c
+index 655497cead122..f984a5ab2c6ff 100644
+--- a/drivers/staging/rtl8712/usb_ops_linux.c
++++ b/drivers/staging/rtl8712/usb_ops_linux.c
+@@ -494,7 +494,7 @@ int r8712_usbctrl_vendorreq(struct intf_priv *pintfpriv, u8 request, u16 value,
+ 		memcpy(pIo_buf, pdata, len);
+ 	}
+ 	status = usb_control_msg(udev, pipe, request, reqtype, value, index,
+-				 pIo_buf, len, HZ / 2);
++				 pIo_buf, len, 500);
+ 	if (status > 0) {  /* Success this control transfer. */
+ 		if (requesttype == 0x01) {
+ 			/* For Control read transfer, we have to copy the read
+diff --git a/drivers/usb/gadget/udc/Kconfig b/drivers/usb/gadget/udc/Kconfig
+index 8c614bb86c665..69394dc1cdfb6 100644
+--- a/drivers/usb/gadget/udc/Kconfig
++++ b/drivers/usb/gadget/udc/Kconfig
+@@ -330,6 +330,7 @@ config USB_AMD5536UDC
+ config USB_FSL_QE
+ 	tristate "Freescale QE/CPM USB Device Controller"
+ 	depends on FSL_SOC && (QUICC_ENGINE || CPM)
++	depends on !64BIT || BROKEN
+ 	help
+ 	   Some of Freescale PowerPC processors have a Full Speed
+ 	   QE/CPM2 USB controller, which support device mode with 4
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 10b0365f34399..3d4da8e0c4db7 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -634,7 +634,16 @@ static int ehci_run (struct usb_hcd *hcd)
+ 	/* Wait until HC become operational */
+ 	ehci_readl(ehci, &ehci->regs->command);	/* unblock posted writes */
+ 	msleep(5);
+-	rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT, 0, 100 * 1000);
++
++	/* For Aspeed, STS_HALT also depends on ASS/PSS status.
++	 * Check CMD_RUN instead.
++	 */
++	if (ehci->is_aspeed)
++		rc = ehci_handshake(ehci, &ehci->regs->command, CMD_RUN,
++				    1, 100 * 1000);
++	else
++		rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT,
++				    0, 100 * 1000);
+ 
+ 	up_write(&ehci_cf_port_reset_rwsem);
+ 
+diff --git a/drivers/usb/host/ehci-platform.c b/drivers/usb/host/ehci-platform.c
+index c70f2d0b4aaf0..c3dc906274d93 100644
+--- a/drivers/usb/host/ehci-platform.c
++++ b/drivers/usb/host/ehci-platform.c
+@@ -297,6 +297,12 @@ static int ehci_platform_probe(struct platform_device *dev)
+ 					  "has-transaction-translator"))
+ 			hcd->has_tt = 1;
+ 
++		if (of_device_is_compatible(dev->dev.of_node,
++					    "aspeed,ast2500-ehci") ||
++		    of_device_is_compatible(dev->dev.of_node,
++					    "aspeed,ast2600-ehci"))
++			ehci->is_aspeed = 1;
++
+ 		if (soc_device_match(quirk_poll_match))
+ 			priv->quirk_poll = true;
+ 
+diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
+index 80bb823aa9fe8..fdd073cc053b8 100644
+--- a/drivers/usb/host/ehci.h
++++ b/drivers/usb/host/ehci.h
+@@ -219,6 +219,7 @@ struct ehci_hcd {			/* one per controller */
+ 	unsigned		need_oc_pp_cycle:1; /* MPC834X port power */
+ 	unsigned		imx28_write_fix:1; /* For Freescale i.MX28 */
+ 	unsigned		spurious_oc:1;
++	unsigned		is_aspeed:1;
+ 
+ 	/* required for usb32 quirk */
+ 	#define OHCI_CTRL_HCFS          (3 << 6)
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index 98c0f4c1bffd9..51274b87f46c9 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1247,9 +1247,11 @@ static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req,
+ 		status = musb_queue_resume_work(musb,
+ 						musb_ep_restart_resume_work,
+ 						request);
+-		if (status < 0)
++		if (status < 0) {
+ 			dev_err(musb->controller, "%s resume work: %i\n",
+ 				__func__, status);
++			list_del(&request->list);
++		}
+ 	}
+ 
+ unlock:
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index c6b3fcf901805..29191d33c0e3e 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -406,6 +406,16 @@ UNUSUAL_DEV(  0x04b8, 0x0602, 0x0110, 0x0110,
+ 		"785EPX Storage",
+ 		USB_SC_SCSI, USB_PR_BULK, NULL, US_FL_SINGLE_LUN),
+ 
++/*
++ * Reported by James Buren <braewoods+lkml@braewoods.net>
++ * Virtual ISOs cannot be remounted if ejected while the device is locked
++ * Disable locking to mimic Windows behavior that bypasses the issue
++ */
++UNUSUAL_DEV(  0x04c5, 0x2028, 0x0001, 0x0001,
++		"iODD",
++		"2531/2541",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_NOT_LOCKABLE),
++
+ /*
+  * Not sure who reported this originally but
+  * Pavel Machek <pavel@ucw.cz> reported that the extra US_FL_SINGLE_LUN
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 678e2c51b855c..0c6eacfcbeef1 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -1322,6 +1322,8 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ 
+ 	de = (struct iso_directory_record *) (bh->b_data + offset);
+ 	de_len = *(unsigned char *) de;
++	if (de_len < sizeof(struct iso_directory_record))
++		goto fail;
+ 
+ 	if (offset + de_len > bufsize) {
+ 		int frag1 = bufsize - offset;
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index e5b5f7709d48f..c060f818a91b6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -67,6 +67,7 @@
+ #include <linux/mm.h>
+ #include <linux/swap.h>
+ #include <linux/rcupdate.h>
++#include <linux/kallsyms.h>
+ #include <linux/stacktrace.h>
+ #include <linux/resource.h>
+ #include <linux/module.h>
+@@ -385,17 +386,19 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
+ 			  struct pid *pid, struct task_struct *task)
+ {
+ 	unsigned long wchan;
++	char symname[KSYM_NAME_LEN];
+ 
+-	if (ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS))
+-		wchan = get_wchan(task);
+-	else
+-		wchan = 0;
++	if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS))
++		goto print0;
+ 
+-	if (wchan)
+-		seq_printf(m, "%ps", (void *) wchan);
+-	else
+-		seq_putc(m, '0');
++	wchan = get_wchan(task);
++	if (wchan && !lookup_symbol_name(wchan, symname)) {
++		seq_puts(m, symname);
++		return 0;
++	}
+ 
++print0:
++	seq_putc(m, '0');
+ 	return 0;
+ }
+ #endif /* CONFIG_KALLSYMS */
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 2adeea44c0d53..61590c1f2d333 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -26,13 +26,13 @@
+  *   #undef LSM_HOOK
+  * };
+  */
+-LSM_HOOK(int, 0, binder_set_context_mgr, struct task_struct *mgr)
+-LSM_HOOK(int, 0, binder_transaction, struct task_struct *from,
+-	 struct task_struct *to)
+-LSM_HOOK(int, 0, binder_transfer_binder, struct task_struct *from,
+-	 struct task_struct *to)
+-LSM_HOOK(int, 0, binder_transfer_file, struct task_struct *from,
+-	 struct task_struct *to, struct file *file)
++LSM_HOOK(int, 0, binder_set_context_mgr, const struct cred *mgr)
++LSM_HOOK(int, 0, binder_transaction, const struct cred *from,
++	 const struct cred *to)
++LSM_HOOK(int, 0, binder_transfer_binder, const struct cred *from,
++	 const struct cred *to)
++LSM_HOOK(int, 0, binder_transfer_file, const struct cred *from,
++	 const struct cred *to, struct file *file)
+ LSM_HOOK(int, 0, ptrace_access_check, struct task_struct *child,
+ 	 unsigned int mode)
+ LSM_HOOK(int, 0, ptrace_traceme, struct task_struct *parent)
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index 5c4c5c0602cb7..59024618554e2 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1313,22 +1313,22 @@
+  *
+  * @binder_set_context_mgr:
+  *	Check whether @mgr is allowed to be the binder context manager.
+- *	@mgr contains the task_struct for the task being registered.
++ *	@mgr contains the struct cred for the current binder process.
+  *	Return 0 if permission is granted.
+  * @binder_transaction:
+  *	Check whether @from is allowed to invoke a binder transaction call
+  *	to @to.
+- *	@from contains the task_struct for the sending task.
+- *	@to contains the task_struct for the receiving task.
++ *	@from contains the struct cred for the sending process.
++ *	@to contains the struct cred for the receiving process.
+  * @binder_transfer_binder:
+  *	Check whether @from is allowed to transfer a binder reference to @to.
+- *	@from contains the task_struct for the sending task.
+- *	@to contains the task_struct for the receiving task.
++ *	@from contains the struct cred for the sending process.
++ *	@to contains the struct cred for the receiving process.
+  * @binder_transfer_file:
+  *	Check whether @from is allowed to transfer @file to @to.
+- *	@from contains the task_struct for the sending task.
++ *	@from contains the struct cred for the sending process.
+  *	@file contains the struct file being transferred.
+- *	@to contains the task_struct for the receiving task.
++ *	@to contains the struct cred for the receiving process.
+  *
+  * @ptrace_access_check:
+  *	Check permission before allowing the current process to trace the
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 5b7288521300b..46a02ce34d00b 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -258,13 +258,13 @@ extern int security_init(void);
+ extern int early_security_init(void);
+ 
+ /* Security operations */
+-int security_binder_set_context_mgr(struct task_struct *mgr);
+-int security_binder_transaction(struct task_struct *from,
+-				struct task_struct *to);
+-int security_binder_transfer_binder(struct task_struct *from,
+-				    struct task_struct *to);
+-int security_binder_transfer_file(struct task_struct *from,
+-				  struct task_struct *to, struct file *file);
++int security_binder_set_context_mgr(const struct cred *mgr);
++int security_binder_transaction(const struct cred *from,
++				const struct cred *to);
++int security_binder_transfer_binder(const struct cred *from,
++				    const struct cred *to);
++int security_binder_transfer_file(const struct cred *from,
++				  const struct cred *to, struct file *file);
+ int security_ptrace_access_check(struct task_struct *child, unsigned int mode);
+ int security_ptrace_traceme(struct task_struct *parent);
+ int security_capget(struct task_struct *target,
+@@ -508,25 +508,25 @@ static inline int early_security_init(void)
+ 	return 0;
+ }
+ 
+-static inline int security_binder_set_context_mgr(struct task_struct *mgr)
++static inline int security_binder_set_context_mgr(const struct cred *mgr)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transaction(struct task_struct *from,
+-					      struct task_struct *to)
++static inline int security_binder_transaction(const struct cred *from,
++					      const struct cred *to)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transfer_binder(struct task_struct *from,
+-						  struct task_struct *to)
++static inline int security_binder_transfer_binder(const struct cred *from,
++						  const struct cred *to)
+ {
+ 	return 0;
+ }
+ 
+-static inline int security_binder_transfer_file(struct task_struct *from,
+-						struct task_struct *to,
++static inline int security_binder_transfer_file(const struct cred *from,
++						const struct cred *to,
+ 						struct file *file)
+ {
+ 	return 0;
+@@ -1041,6 +1041,11 @@ static inline void security_transfer_creds(struct cred *new,
+ {
+ }
+ 
++static inline void security_cred_getsecid(const struct cred *c, u32 *secid)
++{
++	*secid = 0;
++}
++
+ static inline int security_kernel_act_as(struct cred *cred, u32 secid)
+ {
+ 	return 0;
+diff --git a/security/security.c b/security/security.c
+index 9ffa9e9c5c554..67264cb08fb31 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -747,25 +747,25 @@ static int lsm_superblock_alloc(struct super_block *sb)
+ 
+ /* Security operations */
+ 
+-int security_binder_set_context_mgr(struct task_struct *mgr)
++int security_binder_set_context_mgr(const struct cred *mgr)
+ {
+ 	return call_int_hook(binder_set_context_mgr, 0, mgr);
+ }
+ 
+-int security_binder_transaction(struct task_struct *from,
+-				struct task_struct *to)
++int security_binder_transaction(const struct cred *from,
++				const struct cred *to)
+ {
+ 	return call_int_hook(binder_transaction, 0, from, to);
+ }
+ 
+-int security_binder_transfer_binder(struct task_struct *from,
+-				    struct task_struct *to)
++int security_binder_transfer_binder(const struct cred *from,
++				    const struct cred *to)
+ {
+ 	return call_int_hook(binder_transfer_binder, 0, from, to);
+ }
+ 
+-int security_binder_transfer_file(struct task_struct *from,
+-				  struct task_struct *to, struct file *file)
++int security_binder_transfer_file(const struct cred *from,
++				  const struct cred *to, struct file *file)
+ {
+ 	return call_int_hook(binder_transfer_file, 0, from, to, file);
+ }
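[Editor's note: an illustrative sketch of the shape these hunks give every binder hook; the helpers are hypothetical, not the kernel's. Decisions now key off stable struct cred pointers -- credentials pinned when the binder endpoint was opened -- instead of being re-derived from a task_struct, which can race with the peer exiting or temporarily overriding its creds.]

static u32 example_cred_sid(const struct cred *cred);	/* hypothetical */
static int example_check_perm(u32 ssid, u32 tsid);	/* hypothetical */

static int example_binder_transaction(const struct cred *from,
				      const struct cred *to)
{
	u32 fromsid = example_cred_sid(from);	/* stable, no task deref */
	u32 tosid   = example_cred_sid(to);

	return example_check_perm(fromsid, tosid);
}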
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 572e564bf6cd9..b99f9080d1d4c 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -255,29 +255,6 @@ static inline u32 task_sid_obj(const struct task_struct *task)
+ 	return sid;
+ }
+ 
+-/*
+- * get the security ID of a task for use with binder
+- */
+-static inline u32 task_sid_binder(const struct task_struct *task)
+-{
+-	/*
+-	 * In many case where this function is used we should be using the
+-	 * task's subjective SID, but we can't reliably access the subjective
+-	 * creds of a task other than our own so we must use the objective
+-	 * creds/SID, which are safe to access.  The downside is that if a task
+-	 * is temporarily overriding it's creds it will not be reflected here;
+-	 * however, it isn't clear that binder would handle that case well
+-	 * anyway.
+-	 *
+-	 * If this ever changes and we can safely reference the subjective
+-	 * creds/SID of another task, this function will make it easier to
+-	 * identify the various places where we make use of the task SIDs in
+-	 * the binder code.  It is also likely that we will need to adjust
+-	 * the main drivers/android binder code as well.
+-	 */
+-	return task_sid_obj(task);
+-}
+-
+ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dentry);
+ 
+ /*
+@@ -2064,18 +2041,19 @@ static inline u32 open_file_to_av(struct file *file)
+ 
+ /* Hook functions begin here. */
+ 
+-static int selinux_binder_set_context_mgr(struct task_struct *mgr)
++static int selinux_binder_set_context_mgr(const struct cred *mgr)
+ {
+ 	return avc_has_perm(&selinux_state,
+-			    current_sid(), task_sid_binder(mgr), SECCLASS_BINDER,
++			    current_sid(), cred_sid(mgr), SECCLASS_BINDER,
+ 			    BINDER__SET_CONTEXT_MGR, NULL);
+ }
+ 
+-static int selinux_binder_transaction(struct task_struct *from,
+-				      struct task_struct *to)
++static int selinux_binder_transaction(const struct cred *from,
++				      const struct cred *to)
+ {
+ 	u32 mysid = current_sid();
+-	u32 fromsid = task_sid_binder(from);
++	u32 fromsid = cred_sid(from);
++	u32 tosid = cred_sid(to);
+ 	int rc;
+ 
+ 	if (mysid != fromsid) {
+@@ -2086,24 +2064,24 @@ static int selinux_binder_transaction(struct task_struct *from,
+ 			return rc;
+ 	}
+ 
+-	return avc_has_perm(&selinux_state, fromsid, task_sid_binder(to),
++	return avc_has_perm(&selinux_state, fromsid, tosid,
+ 			    SECCLASS_BINDER, BINDER__CALL, NULL);
+ }
+ 
+-static int selinux_binder_transfer_binder(struct task_struct *from,
+-					  struct task_struct *to)
++static int selinux_binder_transfer_binder(const struct cred *from,
++					  const struct cred *to)
+ {
+ 	return avc_has_perm(&selinux_state,
+-			    task_sid_binder(from), task_sid_binder(to),
++			    cred_sid(from), cred_sid(to),
+ 			    SECCLASS_BINDER, BINDER__TRANSFER,
+ 			    NULL);
+ }
+ 
+-static int selinux_binder_transfer_file(struct task_struct *from,
+-					struct task_struct *to,
++static int selinux_binder_transfer_file(const struct cred *from,
++					const struct cred *to,
+ 					struct file *file)
+ {
+-	u32 sid = task_sid_binder(to);
++	u32 sid = cred_sid(to);
+ 	struct file_security_struct *fsec = selinux_file(file);
+ 	struct dentry *dentry = file->f_path.dentry;
+ 	struct inode_security_struct *isec;
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 71323d807dbf4..dc9fa312faddf 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -243,13 +243,18 @@ int snd_pcm_info_user(struct snd_pcm_substream *substream,
+ 
+ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ {
++	struct snd_dma_buffer *dmabuf;
++
+ 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
+ 		return false;
+ 
+ 	if (substream->ops->mmap || substream->ops->page)
+ 		return true;
+ 
+-	switch (substream->dma_buffer.dev.type) {
++	dmabuf = snd_pcm_get_dma_buf(substream);
++	if (!dmabuf)
++		dmabuf = &substream->dma_buffer;
++	switch (dmabuf->dev.type) {
+ 	case SNDRV_DMA_TYPE_UNKNOWN:
+ 		/* we can't know the device, so just assume that the driver does
+ 		 * everything right
+@@ -259,7 +264,7 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 	case SNDRV_DMA_TYPE_VMALLOC:
+ 		return true;
+ 	default:
+-		return dma_can_mmap(substream->dma_buffer.dev.dev);
++		return dma_can_mmap(dmabuf->dev.dev);
+ 	}
+ }
+ 
+diff --git a/sound/pci/cs46xx/cs46xx_lib.c b/sound/pci/cs46xx/cs46xx_lib.c
+index 1e1eb17f8e077..d43927dcd61ea 100644
+--- a/sound/pci/cs46xx/cs46xx_lib.c
++++ b/sound/pci/cs46xx/cs46xx_lib.c
+@@ -1121,9 +1121,7 @@ static int snd_cs46xx_playback_hw_params(struct snd_pcm_substream *substream,
+ 	if (params_periods(hw_params) == CS46XX_FRAGS) {
+ 		if (runtime->dma_area != cpcm->hw_buf.area)
+ 			snd_pcm_lib_free_pages(substream);
+-		runtime->dma_area = cpcm->hw_buf.area;
+-		runtime->dma_addr = cpcm->hw_buf.addr;
+-		runtime->dma_bytes = cpcm->hw_buf.bytes;
++		snd_pcm_set_runtime_buffer(substream, &cpcm->hw_buf);
+ 
+ 
+ #ifdef CONFIG_SND_CS46XX_NEW_DSP
+@@ -1143,11 +1141,8 @@ static int snd_cs46xx_playback_hw_params(struct snd_pcm_substream *substream,
+ #endif
+ 
+ 	} else {
+-		if (runtime->dma_area == cpcm->hw_buf.area) {
+-			runtime->dma_area = NULL;
+-			runtime->dma_addr = 0;
+-			runtime->dma_bytes = 0;
+-		}
++		if (runtime->dma_area == cpcm->hw_buf.area)
++			snd_pcm_set_runtime_buffer(substream, NULL);
+ 		err = snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(hw_params));
+ 		if (err < 0) {
+ #ifdef CONFIG_SND_CS46XX_NEW_DSP
+@@ -1196,9 +1191,7 @@ static int snd_cs46xx_playback_hw_free(struct snd_pcm_substream *substream)
+ 	if (runtime->dma_area != cpcm->hw_buf.area)
+ 		snd_pcm_lib_free_pages(substream);
+     
+-	runtime->dma_area = NULL;
+-	runtime->dma_addr = 0;
+-	runtime->dma_bytes = 0;
++	snd_pcm_set_runtime_buffer(substream, NULL);
+ 
+ 	return 0;
+ }
+@@ -1287,16 +1280,11 @@ static int snd_cs46xx_capture_hw_params(struct snd_pcm_substream *substream,
+ 	if (runtime->periods == CS46XX_FRAGS) {
+ 		if (runtime->dma_area != chip->capt.hw_buf.area)
+ 			snd_pcm_lib_free_pages(substream);
+-		runtime->dma_area = chip->capt.hw_buf.area;
+-		runtime->dma_addr = chip->capt.hw_buf.addr;
+-		runtime->dma_bytes = chip->capt.hw_buf.bytes;
++		snd_pcm_set_runtime_buffer(substream, &chip->capt.hw_buf);
+ 		substream->ops = &snd_cs46xx_capture_ops;
+ 	} else {
+-		if (runtime->dma_area == chip->capt.hw_buf.area) {
+-			runtime->dma_area = NULL;
+-			runtime->dma_addr = 0;
+-			runtime->dma_bytes = 0;
+-		}
++		if (runtime->dma_area == chip->capt.hw_buf.area)
++			snd_pcm_set_runtime_buffer(substream, NULL);
+ 		err = snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(hw_params));
+ 		if (err < 0)
+ 			return err;
+@@ -1313,9 +1301,7 @@ static int snd_cs46xx_capture_hw_free(struct snd_pcm_substream *substream)
+ 
+ 	if (runtime->dma_area != chip->capt.hw_buf.area)
+ 		snd_pcm_lib_free_pages(substream);
+-	runtime->dma_area = NULL;
+-	runtime->dma_addr = 0;
+-	runtime->dma_bytes = 0;
++	snd_pcm_set_runtime_buffer(substream, NULL);
+ 
+ 	return 0;
+ }
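[Editor's note: a sketch of the helper these cs46xx hunks swap in. snd_pcm_set_runtime_buffer() from <sound/pcm.h> replaces the open-coded triple assignment of dma_area/dma_addr/dma_bytes; passing NULL detaches the buffer and clears all three fields in one call. Driver context below is hypothetical.]

	if (params_periods(hw_params) == MY_FRAGS)	/* MY_FRAGS hypothetical */
		snd_pcm_set_runtime_buffer(substream, &chip->hw_buf);
	else
		snd_pcm_set_runtime_buffer(substream, NULL);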


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-17 11:59 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-17 11:59 UTC (permalink / raw
  To: gentoo-commits

commit:     d4e591c7d2bf78f019530e9585b282acf6390825
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 17 11:59:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 17 11:59:28 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d4e591c7

Linux patch 5.14.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1018_linux-5.14.19.patch | 33248 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 33252 insertions(+)

diff --git a/0000_README b/0000_README
index 092a08df..534f161e 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1017_linux-5.14.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.18
 
+Patch:  1018_linux-5.14.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.14.19.patch b/1018_linux-5.14.19.patch
new file mode 100644
index 00000000..1e18cb2e
--- /dev/null
+++ b/1018_linux-5.14.19.patch
@@ -0,0 +1,33248 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index bdb22006f713f..661116439e9d6 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -6311,6 +6311,13 @@
+ 			improve timer resolution at the expense of processing
+ 			more timer interrupts.
+ 
++	xen.balloon_boot_timeout= [XEN]
++			The time (in seconds) to wait before giving up to boot
++			in case initial ballooning fails to free enough memory.
++			Applies only when running as HVM or PVH guest and
++			started with less memory configured than allowed at
++			max. Default is 180.
++
+ 	xen.event_eoi_delay=	[XEN]
+ 			How long to delay EOI handling in case of event
+ 			storms (jiffies). Default is 10.
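[Editor's note: an illustrative use of the new knob, not taken from the patch. On an HVM or PVH guest started with less memory than its configured maximum, the parameter would be appended to the guest kernel command line, e.g.

	root=/dev/xvda1 xen.balloon_boot_timeout=300

to give initial ballooning five minutes instead of the 180-second default.]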
+diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad5766.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad5766.yaml
+index d5c54813ce872..a8f7720d1e3e2 100644
+--- a/Documentation/devicetree/bindings/iio/dac/adi,ad5766.yaml
++++ b/Documentation/devicetree/bindings/iio/dac/adi,ad5766.yaml
+@@ -54,7 +54,7 @@ examples:
+ 
+           ad5766@0 {
+               compatible = "adi,ad5766";
+-              output-range-microvolts = <(-5000) 5000>;
++              output-range-microvolts = <(-5000000) 5000000>;
+               reg = <0>;
+               spi-cpol;
+               spi-max-frequency = <1000000>;
+diff --git a/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt b/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
+index 093edda0c8dfc..6cd83d920155f 100644
+--- a/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
++++ b/Documentation/devicetree/bindings/regulator/samsung,s5m8767.txt
+@@ -13,6 +13,14 @@ common regulator binding documented in:
+ 
+ 
+ Required properties of the main device node (the parent!):
++ - s5m8767,pmic-buck-ds-gpios: GPIO specifiers for three host gpio's used
++   for selecting GPIO DVS lines. It is one-to-one mapped to dvs gpio lines.
++
++ [1] If either of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
++     property is specified, then all the eight voltage values for the
++     's5m8767,pmic-buck[2/3/4]-dvs-voltage' should be specified.
++
++Optional properties of the main device node (the parent!):
+  - s5m8767,pmic-buck2-dvs-voltage: A set of 8 voltage values in micro-volt (uV)
+    units for buck2 when changing voltage using gpio dvs. Refer to [1] below
+    for additional information.
+@@ -25,26 +33,13 @@ Required properties of the main device node (the parent!):
+    units for buck4 when changing voltage using gpio dvs. Refer to [1] below
+    for additional information.
+ 
+- - s5m8767,pmic-buck-ds-gpios: GPIO specifiers for three host gpio's used
+-   for selecting GPIO DVS lines. It is one-to-one mapped to dvs gpio lines.
+-
+- [1] If none of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
+-     property is specified, the 's5m8767,pmic-buck[2/3/4]-dvs-voltage'
+-     property should specify atleast one voltage level (which would be a
+-     safe operating voltage).
+-
+-     If either of the 's5m8767,pmic-buck[2/3/4]-uses-gpio-dvs' optional
+-     property is specified, then all the eight voltage values for the
+-     's5m8767,pmic-buck[2/3/4]-dvs-voltage' should be specified.
+-
+-Optional properties of the main device node (the parent!):
+  - s5m8767,pmic-buck2-uses-gpio-dvs: 'buck2' can be controlled by gpio dvs.
+  - s5m8767,pmic-buck3-uses-gpio-dvs: 'buck3' can be controlled by gpio dvs.
+  - s5m8767,pmic-buck4-uses-gpio-dvs: 'buck4' can be controlled by gpio dvs.
+ 
+ Additional properties required if either of the optional properties are used:
+ 
+- - s5m8767,pmic-buck234-default-dvs-idx: Default voltage setting selected from
++ - s5m8767,pmic-buck-default-dvs-idx: Default voltage setting selected from
+    the possible 8 options selectable by the dvs gpios. The value of this
+    property should be between 0 and 7. If not specified or if out of range, the
+    default value of this property is set to 0.
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 44b67ebd6e40d..936fae06db770 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -176,11 +176,11 @@ Master Keys
+ 
+ Each encrypted directory tree is protected by a *master key*.  Master
+ keys can be up to 64 bytes long, and must be at least as long as the
+-greater of the key length needed by the contents and filenames
+-encryption modes being used.  For example, if AES-256-XTS is used for
+-contents encryption, the master key must be 64 bytes (512 bits).  Note
+-that the XTS mode is defined to require a key twice as long as that
+-required by the underlying block cipher.
++greater of the security strength of the contents and filenames
++encryption modes being used.  For example, if any AES-256 mode is
++used, the master key must be at least 256 bits, i.e. 32 bytes.  A
++stricter requirement applies if the key is used by a v1 encryption
++policy and AES-256-XTS is used; such keys must be 64 bytes.
+ 
+ To "unlock" an encrypted directory tree, userspace must provide the
+ appropriate master key.  There can be any number of master keys, each
+diff --git a/Makefile b/Makefile
+index 292faf977bb71..f4773aee95c4e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 129df498a8e12..2b0116b52f6b1 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -1231,6 +1231,9 @@ config RELR
+ config ARCH_HAS_MEM_ENCRYPT
+ 	bool
+ 
++config ARCH_HAS_CC_PLATFORM
++	bool
++
+ config HAVE_SPARSE_SYSCALL_NR
+        bool
+        help
+diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h
+index 6100431da07a3..090499c99c1c1 100644
+--- a/arch/alpha/include/asm/processor.h
++++ b/arch/alpha/include/asm/processor.h
+@@ -42,7 +42,7 @@ extern void start_thread(struct pt_regs *, unsigned long, unsigned long);
+ struct task_struct;
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
+ 
+diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
+index a5123ea426ce5..5f8527081da92 100644
+--- a/arch/alpha/kernel/process.c
++++ b/arch/alpha/kernel/process.c
+@@ -376,12 +376,11 @@ thread_saved_pc(struct task_struct *t)
+ }
+ 
+ unsigned long
+-get_wchan(struct task_struct *p)
++__get_wchan(struct task_struct *p)
+ {
+ 	unsigned long schedule_frame;
+ 	unsigned long pc;
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
++
+ 	/*
+ 	 * This one depends on the frame size of schedule().  Do a
+ 	 * "disass schedule" in gdb to find the frame size.  Also, the
+diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
+index e4031ecd3c8c1..04a5268e592b9 100644
+--- a/arch/arc/include/asm/processor.h
++++ b/arch/arc/include/asm/processor.h
+@@ -70,7 +70,7 @@ struct task_struct;
+ extern void start_thread(struct pt_regs * regs, unsigned long pc,
+ 			 unsigned long usp);
+ 
+-extern unsigned int get_wchan(struct task_struct *p);
++extern unsigned int __get_wchan(struct task_struct *p);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/arc/kernel/stacktrace.c b/arch/arc/kernel/stacktrace.c
+index 1b9576d21e244..db96cc8783891 100644
+--- a/arch/arc/kernel/stacktrace.c
++++ b/arch/arc/kernel/stacktrace.c
+@@ -15,7 +15,7 @@
+  *      = specifics of data structs where trace is saved(CONFIG_STACKTRACE etc)
+  *
+  *  vineetg: March 2009
+- *  -Implemented correct versions of thread_saved_pc() and get_wchan()
++ *  -Implemented correct versions of thread_saved_pc() and __get_wchan()
+  *
+  *  rajeshwarr: 2008
+  *  -Initial implementation
+@@ -248,7 +248,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+  * Of course just returning schedule( ) would be pointless so unwind until
+  * the function is not in schedular code
+  */
+-unsigned int get_wchan(struct task_struct *tsk)
++unsigned int __get_wchan(struct task_struct *tsk)
+ {
+ 	return arc_unwind_core(tsk, NULL, __get_first_nonsched, NULL);
+ }
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index 173da685a52eb..59f63c3e7acaa 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -60,15 +60,15 @@ KBUILD_CFLAGS	+= $(call cc-option,-fno-ipa-sra)
+ # Note that GCC does not numerically define an architecture version
+ # macro, but instead defines a whole series of macros which makes
+ # testing for a specific architecture or later rather impossible.
+-arch-$(CONFIG_CPU_32v7M)	=-D__LINUX_ARM_ARCH__=7 -march=armv7-m -Wa,-march=armv7-m
+-arch-$(CONFIG_CPU_32v7)		=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7-a)
+-arch-$(CONFIG_CPU_32v6)		=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6,-march=armv5t -Wa$(comma)-march=armv6)
++arch-$(CONFIG_CPU_32v7M)	=-D__LINUX_ARM_ARCH__=7 -march=armv7-m
++arch-$(CONFIG_CPU_32v7)		=-D__LINUX_ARM_ARCH__=7 -march=armv7-a
++arch-$(CONFIG_CPU_32v6)		=-D__LINUX_ARM_ARCH__=6 -march=armv6
+ # Only override the compiler option if ARMv6. The ARMv6K extensions are
+ # always available in ARMv7
+ ifeq ($(CONFIG_CPU_32v6),y)
+-arch-$(CONFIG_CPU_32v6K)	=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6k,-march=armv5t -Wa$(comma)-march=armv6k)
++arch-$(CONFIG_CPU_32v6K)	=-D__LINUX_ARM_ARCH__=6 -march=armv6k
+ endif
+-arch-$(CONFIG_CPU_32v5)		=-D__LINUX_ARM_ARCH__=5 $(call cc-option,-march=armv5te,-march=armv4t)
++arch-$(CONFIG_CPU_32v5)		=-D__LINUX_ARM_ARCH__=5 -march=armv5te
+ arch-$(CONFIG_CPU_32v4T)	=-D__LINUX_ARM_ARCH__=4 -march=armv4t
+ arch-$(CONFIG_CPU_32v4)		=-D__LINUX_ARM_ARCH__=4 -march=armv4
+ arch-$(CONFIG_CPU_32v3)		=-D__LINUX_ARM_ARCH__=3 -march=armv3m
+@@ -82,7 +82,7 @@ tune-$(CONFIG_CPU_ARM720T)	=-mtune=arm7tdmi
+ tune-$(CONFIG_CPU_ARM740T)	=-mtune=arm7tdmi
+ tune-$(CONFIG_CPU_ARM9TDMI)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM940T)	=-mtune=arm9tdmi
+-tune-$(CONFIG_CPU_ARM946E)	=$(call cc-option,-mtune=arm9e,-mtune=arm9tdmi)
++tune-$(CONFIG_CPU_ARM946E)	=-mtune=arm9e
+ tune-$(CONFIG_CPU_ARM920T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM922T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_ARM925T)	=-mtune=arm9tdmi
+@@ -90,11 +90,11 @@ tune-$(CONFIG_CPU_ARM926T)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_FA526)	=-mtune=arm9tdmi
+ tune-$(CONFIG_CPU_SA110)	=-mtune=strongarm110
+ tune-$(CONFIG_CPU_SA1100)	=-mtune=strongarm1100
+-tune-$(CONFIG_CPU_XSCALE)	=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
+-tune-$(CONFIG_CPU_XSC3)		=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
+-tune-$(CONFIG_CPU_FEROCEON)	=$(call cc-option,-mtune=marvell-f,-mtune=xscale)
+-tune-$(CONFIG_CPU_V6)		=$(call cc-option,-mtune=arm1136j-s,-mtune=strongarm)
+-tune-$(CONFIG_CPU_V6K)		=$(call cc-option,-mtune=arm1136j-s,-mtune=strongarm)
++tune-$(CONFIG_CPU_XSCALE)	=-mtune=xscale
++tune-$(CONFIG_CPU_XSC3)		=-mtune=xscale
++tune-$(CONFIG_CPU_FEROCEON)	=-mtune=xscale
++tune-$(CONFIG_CPU_V6)		=-mtune=arm1136j-s
++tune-$(CONFIG_CPU_V6K)		=-mtune=arm1136j-s
+ 
+ # Evaluate tune cc-option calls now
+ tune-y := $(tune-y)
+diff --git a/arch/arm/boot/dts/at91-tse850-3.dts b/arch/arm/boot/dts/at91-tse850-3.dts
+index 3ca97b47c69ce..7e5c598e7e68f 100644
+--- a/arch/arm/boot/dts/at91-tse850-3.dts
++++ b/arch/arm/boot/dts/at91-tse850-3.dts
+@@ -262,7 +262,7 @@
+ &macb1 {
+ 	status = "okay";
+ 
+-	phy-mode = "rgmii";
++	phy-mode = "rmii";
+ 
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+index 61c7b137607e5..7900aac4f35a9 100644
+--- a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
++++ b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+@@ -20,7 +20,7 @@
+ 		bootargs = "console=ttyS0,115200 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+index 6c6bb7b17d27a..7546c8d07bcd7 100644
+--- a/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
++++ b/arch/arm/boot/dts/bcm4709-asus-rt-ac87u.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+index d29e7f80ea6aa..beae9eab9cb8c 100644
+--- a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
++++ b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x18000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+index 9b6887d477d86..7879f7d7d9c33 100644
+--- a/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
++++ b/arch/arm/boot/dts/bcm4709-linksys-ea9200.dts
+@@ -16,7 +16,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+index 7989a53597d4f..56d309dbc6b0d 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r7000.dts
+@@ -19,7 +19,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+index 87b655be674c5..184e3039aa864 100644
+--- a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
++++ b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+@@ -30,7 +30,7 @@
+ 		bootargs = "console=ttyS0,115200";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x08000000>;
+diff --git a/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts b/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
+index f806be5da7237..c2a266a439d05 100644
+--- a/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
++++ b/arch/arm/boot/dts/bcm4709-tplink-archer-c9-v1.dts
+@@ -15,7 +15,7 @@
+ 		bootargs = "console=ttyS0,115200 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+index 452b8d0ab180e..b0d8a688141d3 100644
+--- a/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
++++ b/arch/arm/boot/dts/bcm47094-luxul-xwc-2000.dts
+@@ -16,7 +16,7 @@
+ 		bootargs = "earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>,
+ 		      <0x88000000 0x18000000>;
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 3b978dc8997a4..612d61852bfb9 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -20,7 +20,7 @@
+ 		bootargs = " console=ttyS0,115200n8 earlycon";
+ 	};
+ 
+-	memory {
++	memory@0 {
+ 		reg = <0x00000000 0x08000000>;
+ 		device_type = "memory";
+ 	};
+diff --git a/arch/arm/boot/dts/bcm94708.dts b/arch/arm/boot/dts/bcm94708.dts
+index 3d13e46c69494..d9eb2040b9631 100644
+--- a/arch/arm/boot/dts/bcm94708.dts
++++ b/arch/arm/boot/dts/bcm94708.dts
+@@ -38,7 +38,7 @@
+ 	model = "NorthStar SVK (BCM94708)";
+ 	compatible = "brcm,bcm94708", "brcm,bcm4708";
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/bcm94709.dts b/arch/arm/boot/dts/bcm94709.dts
+index 5017b7b259cbe..618c812eef73e 100644
+--- a/arch/arm/boot/dts/bcm94709.dts
++++ b/arch/arm/boot/dts/bcm94709.dts
+@@ -38,7 +38,7 @@
+ 	model = "NorthStar SVK (BCM94709)";
+ 	compatible = "brcm,bcm94709", "brcm,bcm4709", "brcm,bcm4708";
+ 
+-	memory {
++	memory@0 {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index 938cc691bb2fe..23ab27fe4ee5d 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -515,7 +515,7 @@
+ 		compatible = "bosch,bma180";
+ 		reg = <0x41>;
+ 		pinctrl-names = "default";
+-		pintcrl-0 = <&bma180_pins>;
++		pinctrl-0 = <&bma180_pins>;
+ 		interrupt-parent = <&gpio4>;
+ 		interrupts = <19 IRQ_TYPE_LEVEL_HIGH>; /* GPIO_115 */
+ 	};
+diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
+index db4c06bf7888b..96722172b0643 100644
+--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
+@@ -1580,8 +1580,8 @@
+ 				#phy-cells = <0>;
+ 				qcom,dsi-phy-index = <0>;
+ 
+-				clocks = <&mmcc MDSS_AHB_CLK>;
+-				clock-names = "iface";
++				clocks = <&mmcc MDSS_AHB_CLK>, <&xo_board>;
++				clock-names = "iface", "ref";
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 5b60ecbd718f0..2ebafe27a865b 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1179,7 +1179,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_pins_c: sai2a-4 {
++	sai2a_pins_c: sai2a-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, AF10)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, AF10)>, /* SAI2_SD_A */
+@@ -1190,7 +1190,7 @@
+ 		};
+ 	};
+ 
+-	sai2a_sleep_pins_c: sai2a-5 {
++	sai2a_sleep_pins_c: sai2a-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('D', 13, ANALOG)>, /* SAI2_SCK_A */
+ 				 <STM32_PINMUX('D', 11, ANALOG)>, /* SAI2_SD_A */
+@@ -1235,14 +1235,14 @@
+ 		};
+ 	};
+ 
+-	sai2b_pins_c: sai2a-4 {
++	sai2b_pins_c: sai2b-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('F', 11, AF10)>; /* SAI2_SD_B */
+ 			bias-disable;
+ 		};
+ 	};
+ 
+-	sai2b_sleep_pins_c: sai2a-sleep-5 {
++	sai2b_sleep_pins_c: sai2b-sleep-2 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('F', 11, ANALOG)>; /* SAI2_SD_B */
+ 		};
+diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi
+index bd289bf5d2690..6992a4b0ba79b 100644
+--- a/arch/arm/boot/dts/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/stm32mp151.dtsi
+@@ -824,7 +824,7 @@
+ 				#sound-dai-cells = <0>;
+ 
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x4 0x1c>;
++				reg = <0x4 0x20>;
+ 				clocks = <&rcc SAI1_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 87 0x400 0x01>;
+@@ -834,7 +834,7 @@
+ 			sai1b: audio-controller@4400a024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI1_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 88 0x400 0x01>;
+@@ -855,7 +855,7 @@
+ 			sai2a: audio-controller@4400b004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x4 0x1c>;
++				reg = <0x4 0x20>;
+ 				clocks = <&rcc SAI2_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 89 0x400 0x01>;
+@@ -865,7 +865,7 @@
+ 			sai2b: audio-controller@4400b024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI2_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 90 0x400 0x01>;
+@@ -886,7 +886,7 @@
+ 			sai3a: audio-controller@4400c004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x04 0x1c>;
++				reg = <0x04 0x20>;
+ 				clocks = <&rcc SAI3_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 113 0x400 0x01>;
+@@ -896,7 +896,7 @@
+ 			sai3b: audio-controller@4400c024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI3_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 114 0x400 0x01>;
+@@ -1271,7 +1271,7 @@
+ 			sai4a: audio-controller@50027004 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-a";
+-				reg = <0x04 0x1c>;
++				reg = <0x04 0x20>;
+ 				clocks = <&rcc SAI4_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 99 0x400 0x01>;
+@@ -1281,7 +1281,7 @@
+ 			sai4b: audio-controller@50027024 {
+ 				#sound-dai-cells = <0>;
+ 				compatible = "st,stm32-sai-sub-b";
+-				reg = <0x24 0x1c>;
++				reg = <0x24 0x20>;
+ 				clocks = <&rcc SAI4_K>;
+ 				clock-names = "sai_ck";
+ 				dmas = <&dmamux1 100 0x400 0x01>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index 2b0ac605549d7..44ecc47085871 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -202,7 +202,7 @@
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+-		spi-max-frequency = <108000000>;
++		spi-max-frequency = <50000000>;
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 	};
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 586aac8a998c0..a86f2dfa67acc 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -249,7 +249,7 @@
+ 	stusb1600@28 {
+ 		compatible = "st,stusb1600";
+ 		reg = <0x28>;
+-		interrupts = <11 IRQ_TYPE_EDGE_FALLING>;
++		interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-parent = <&gpioi>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&stusb1600_pins_a>;
+diff --git a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
+index 8077f1716fbc8..ecb91fb899ff3 100644
+--- a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
++++ b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2.dts
+@@ -112,7 +112,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&gmac_rgmii_pins>;
+ 	phy-handle = <&phy1>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 9e6b972863077..6af68edfa53ab 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -84,7 +84,7 @@ struct task_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define task_pt_regs(p) \
+ 	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index fc9e8b37eaa84..261be96fa0c30 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -283,13 +283,11 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	struct stackframe frame;
+ 	unsigned long stack_page;
+ 	int count = 0;
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+ 
+ 	frame.fp = thread_saved_fp(p);
+ 	frame.sp = thread_saved_sp(p);
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index 76ea4178a55cb..db798eac74315 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -54,8 +54,7 @@ int notrace unwind_frame(struct stackframe *frame)
+ 
+ 	frame->sp = frame->fp;
+ 	frame->fp = *(unsigned long *)(fp);
+-	frame->pc = frame->lr;
+-	frame->lr = *(unsigned long *)(fp + 4);
++	frame->pc = *(unsigned long *)(fp + 4);
+ #else
+ 	/* check current frame pointer is within bounds */
+ 	if (fp < low + 12 || fp > high - 4)
+diff --git a/arch/arm/mach-s3c/irq-s3c24xx.c b/arch/arm/mach-s3c/irq-s3c24xx.c
+index 0c631c14a8172..53081505e3976 100644
+--- a/arch/arm/mach-s3c/irq-s3c24xx.c
++++ b/arch/arm/mach-s3c/irq-s3c24xx.c
+@@ -362,11 +362,25 @@ static inline int s3c24xx_handle_intc(struct s3c_irq_intc *intc,
+ static asmlinkage void __exception_irq_entry s3c24xx_handle_irq(struct pt_regs *regs)
+ {
+ 	do {
+-		if (likely(s3c_intc[0]))
+-			if (s3c24xx_handle_intc(s3c_intc[0], regs, 0))
+-				continue;
++		/*
++		 * For platform based machines, neither ERR nor NULL can happen here.
++		 * The s3c24xx_handle_irq() will be set as IRQ handler iff this succeeds:
++		 *
++		 *    s3c_intc[0] = s3c24xx_init_intc()
++		 *
++		 * If this fails, the next calls to s3c24xx_init_intc() won't be executed.
++		 *
++		 * For DT machine, s3c_init_intc_of() could set the IRQ handler without
++		 * setting s3c_intc[0] only if it was called with num_ctrl=0. There is no
++		 * such code path, so again the s3c_intc[0] will have a valid pointer if
++		 * set_handle_irq() is called.
++		 *
++		 * Therefore in s3c24xx_handle_irq(), the s3c_intc[0] is always something.
++		 */
++		if (s3c24xx_handle_intc(s3c_intc[0], regs, 0))
++			continue;
+ 
+-		if (s3c_intc[2])
++		if (!IS_ERR_OR_NULL(s3c_intc[2]))
+ 			if (s3c24xx_handle_intc(s3c_intc[2], regs, 64))
+ 				continue;
+ 
+diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
+index 8355c38958942..82aa990c4180c 100644
+--- a/arch/arm/mm/Kconfig
++++ b/arch/arm/mm/Kconfig
+@@ -750,7 +750,7 @@ config CPU_BIG_ENDIAN
+ config CPU_ENDIAN_BE8
+ 	bool
+ 	depends on CPU_BIG_ENDIAN
+-	default CPU_V6 || CPU_V6K || CPU_V7
++	default CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
+ 	help
+ 	  Support for the BE-8 (big-endian) mode on ARMv6 and ARMv7 processors.
+ 
+diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
+index 9c348042a7244..4b1619584b23c 100644
+--- a/arch/arm/mm/kasan_init.c
++++ b/arch/arm/mm/kasan_init.c
+@@ -226,7 +226,7 @@ void __init kasan_init(void)
+ 	BUILD_BUG_ON(pgd_index(KASAN_SHADOW_START) !=
+ 		     pgd_index(KASAN_SHADOW_END));
+ 	memcpy(tmp_pmd_table,
+-	       pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
++	       (void*)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+ 	       sizeof(tmp_pmd_table));
+ 	set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+ 		__pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index a4e0060051070..274e4f73fd33c 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -390,9 +390,9 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
+ 	BUILD_BUG_ON(__fix_to_virt(__end_of_fixed_addresses) < FIXADDR_START);
+ 	BUG_ON(idx >= __end_of_fixed_addresses);
+ 
+-	/* we only support device mappings until pgprot_kernel has been set */
++	/* We support only device mappings before pgprot_kernel is set. */
+ 	if (WARN_ON(pgprot_val(prot) != pgprot_val(FIXMAP_PAGE_IO) &&
+-		    pgprot_val(pgprot_kernel) == 0))
++		    pgprot_val(prot) && pgprot_val(pgprot_kernel) == 0))
+ 		return;
+ 
+ 	if (pgprot_val(prot))
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+index 81269ccc24968..d8838dde0f0f4 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-sei510.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
+index a26bfe72550fe..4b5d11e56364d 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-u200.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+index 579f3d02d613e..b4e86196e3468 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-x96-max.dts
+@@ -139,7 +139,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+index f42cf4b8af2d4..16dd409051b40 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+@@ -18,7 +18,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -37,7 +37,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&vsys_3v3>;
++		pwm-supply = <&vsys_3v3>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+index 344573e157a7b..4f33820aba1f1 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-n2.dtsi
+@@ -130,7 +130,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -149,7 +149,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+index feb0885047400..b40d2c1002c92 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-w400.dtsi
+@@ -96,7 +96,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_ab 0 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -115,7 +115,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+index effaa138b5f98..212c6aa5a3b86 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+@@ -173,7 +173,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+index f2c0981435944..9c0b544e22098 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts
+@@ -24,7 +24,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&vsys_3v3>;
++		pwm-supply = <&vsys_3v3>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
+index fd0ad85c165ba..5779e70caccd3 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid.dtsi
+@@ -116,7 +116,7 @@
+ 		regulator-min-microvolt = <721000>;
+ 		regulator-max-microvolt = <1022000>;
+ 
+-		vin-supply = <&main_12v>;
++		pwm-supply = <&main_12v>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1250 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -263,6 +263,10 @@
+ 		reg = <0>;
+ 		max-speed = <1000>;
+ 
++		reset-assert-us = <10000>;
++		reset-deassert-us = <80000>;
++		reset-gpios = <&gpio GPIOZ_15 (GPIO_ACTIVE_LOW | GPIO_OPEN_DRAIN)>;
++
+ 		interrupt-parent = <&gpio_intc>;
+ 		/* MAC_INTR on GPIOZ_14 */
+ 		interrupts = <26 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+index 2194a778973f1..427475846fc70 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-sei610.dts
+@@ -185,7 +185,7 @@
+ 		regulator-min-microvolt = <690000>;
+ 		regulator-max-microvolt = <1050000>;
+ 
+-		vin-supply = <&dc_in>;
++		pwm-supply = <&dc_in>;
+ 
+ 		pwms = <&pwm_AO_cd 1 1500 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+index a5a64d17d9ea6..f6b93bbb49228 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+@@ -292,7 +292,7 @@
+ 			reg = <0x640 0x18>;
+ 			interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+-			clock-names = "periph";
++			clock-names = "refclk";
+ 			status = "okay";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 4f06c0a9c4252..7718c7f25aba9 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1357,11 +1357,17 @@
+ 		lpass: audio-controller@7708000 {
+ 			status = "disabled";
+ 			compatible = "qcom,lpass-cpu-apq8016";
++
++			/*
++			 * Note: Unlike the name would suggest, the SEC_I2S_CLK
++			 * is actually only used by Tertiary MI2S while
++			 * Primary/Secondary MI2S both use the PRI_I2S_CLK.
++			 */
+ 			clocks = <&gcc GCC_ULTAUDIO_AHBFABRIC_IXFABRIC_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_PCNOC_MPORT_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_PCNOC_SWAY_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_PRI_I2S_CLK>,
+-				 <&gcc GCC_ULTAUDIO_LPAIF_SEC_I2S_CLK>,
++				 <&gcc GCC_ULTAUDIO_LPAIF_PRI_I2S_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_SEC_I2S_CLK>,
+ 				 <&gcc GCC_ULTAUDIO_LPAIF_AUX_I2S_CLK>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/pm8916.dtsi b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+index f931cb0de231f..42180f1b5dbbb 100644
+--- a/arch/arm64/boot/dts/qcom/pm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8916.dtsi
+@@ -86,7 +86,6 @@
+ 		rtc@6000 {
+ 			compatible = "qcom,pm8941-rtc";
+ 			reg = <0x6000>;
+-			reg-names = "rtc", "alarm";
+ 			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/pmi8994.dtsi b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+index b4ac900ab115f..a06ea9adae810 100644
+--- a/arch/arm64/boot/dts/qcom/pmi8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+@@ -42,7 +42,7 @@
+ 			/* Yes, all four strings *have to* be defined or things won't work. */
+ 			qcom,enabled-strings = <0 1 2 3>;
+ 			qcom,cabc;
+-			qcom,eternal-pfet;
++			qcom,external-pfet;
+ 			status = "disabled";
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index a758e4d226122..81098aa9687ba 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -33,7 +33,7 @@ ap_h1_spi: &spi0 {};
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&pm6150_adc_tm 1>;
+-			sustainable-power = <814>;
++			sustainable-power = <965>;
+ 
+ 			trips {
+ 				skin_temp_alert0: trip-point0 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+index a246dbd74cc11..b7b5264888b7c 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+@@ -44,7 +44,7 @@ ap_h1_spi: &spi0 {};
+ };
+ 
+ &cpu6_thermal {
+-	sustainable-power = <948>;
++	sustainable-power = <1124>;
+ };
+ 
+ &cpu7_alert0 {
+@@ -56,7 +56,7 @@ ap_h1_spi: &spi0 {};
+ };
+ 
+ &cpu7_thermal {
+-	sustainable-power = <948>;
++	sustainable-power = <1124>;
+ };
+ 
+ &cpu8_alert0 {
+@@ -68,7 +68,7 @@ ap_h1_spi: &spi0 {};
+ };
+ 
+ &cpu8_thermal {
+-	sustainable-power = <948>;
++	sustainable-power = <1124>;
+ };
+ 
+ &cpu9_alert0 {
+@@ -80,7 +80,7 @@ ap_h1_spi: &spi0 {};
+ };
+ 
+ &cpu9_thermal {
+-	sustainable-power = <948>;
++	sustainable-power = <1124>;
+ };
+ 
+ &gpio_keys {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index a9a052f8c63c8..4cb5d19f8df2d 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -132,8 +132,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ 					<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+@@ -157,8 +157,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			next-level-cache = <&L2_100>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -179,8 +179,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			next-level-cache = <&L2_200>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -201,8 +201,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			next-level-cache = <&L2_300>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -223,8 +223,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			next-level-cache = <&L2_400>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -245,8 +245,8 @@
+ 			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+ 					   &LITTLE_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1024>;
+-			dynamic-power-coefficient = <100>;
++			capacity-dmips-mhz = <415>;
++			dynamic-power-coefficient = <137>;
+ 			next-level-cache = <&L2_500>;
+ 			operating-points-v2 = <&cpu0_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -267,8 +267,8 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1740>;
+-			dynamic-power-coefficient = <405>;
++			capacity-dmips-mhz = <1024>;
++			dynamic-power-coefficient = <480>;
+ 			next-level-cache = <&L2_600>;
+ 			operating-points-v2 = <&cpu6_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -289,8 +289,8 @@
+ 			cpu-idle-states = <&BIG_CPU_SLEEP_0
+ 					   &BIG_CPU_SLEEP_1
+ 					   &CLUSTER_SLEEP_0>;
+-			capacity-dmips-mhz = <1740>;
+-			dynamic-power-coefficient = <405>;
++			capacity-dmips-mhz = <1024>;
++			dynamic-power-coefficient = <480>;
+ 			next-level-cache = <&L2_700>;
+ 			operating-points-v2 = <&cpu6_opp_table>;
+ 			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+@@ -3504,7 +3504,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 1>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu0_alert0: trip-point0 {
+@@ -3553,7 +3553,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 2>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu1_alert0: trip-point0 {
+@@ -3602,7 +3602,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 3>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu2_alert0: trip-point0 {
+@@ -3651,7 +3651,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 4>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu3_alert0: trip-point0 {
+@@ -3700,7 +3700,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 5>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu4_alert0: trip-point0 {
+@@ -3749,7 +3749,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 6>;
+-			sustainable-power = <768>;
++			sustainable-power = <1052>;
+ 
+ 			trips {
+ 				cpu5_alert0: trip-point0 {
+@@ -3798,7 +3798,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 9>;
+-			sustainable-power = <1202>;
++			sustainable-power = <1425>;
+ 
+ 			trips {
+ 				cpu6_alert0: trip-point0 {
+@@ -3839,7 +3839,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 10>;
+-			sustainable-power = <1202>;
++			sustainable-power = <1425>;
+ 
+ 			trips {
+ 				cpu7_alert0: trip-point0 {
+@@ -3880,7 +3880,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 11>;
+-			sustainable-power = <1202>;
++			sustainable-power = <1425>;
+ 
+ 			trips {
+ 				cpu8_alert0: trip-point0 {
+@@ -3921,7 +3921,7 @@
+ 			polling-delay = <0>;
+ 
+ 			thermal-sensors = <&tsens0 12>;
+-			sustainable-power = <1202>;
++			sustainable-power = <1425>;
+ 
+ 			trips {
+ 				cpu9_alert0: trip-point0 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 0a86fe71a66d1..4cca597a36eb6 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2316,7 +2316,7 @@
+ 			compatible = "qcom,bam-v1.7.0";
+ 			reg = <0 0x01dc4000 0 0x24000>;
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&rpmhcc 15>;
++			clocks = <&rpmhcc RPMH_CE_CLK>;
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
+@@ -2331,8 +2331,8 @@
+ 			compatible = "qcom,crypto-v5.4";
+ 			reg = <0 0x01dfa000 0 0x6000>;
+ 			clocks = <&gcc GCC_CE1_AHB_CLK>,
+-				 <&gcc GCC_CE1_AHB_CLK>,
+-				 <&rpmhcc 15>;
++				 <&gcc GCC_CE1_AXI_CLK>,
++				 <&rpmhcc RPMH_CE_CLK>;
+ 			clock-names = "iface", "bus", "core";
+ 			dmas = <&cryptobam 6>, <&cryptobam 7>;
+ 			dma-names = "rx", "tx";
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index 090dc9c4f57b5..937d17a426b66 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -50,6 +50,7 @@
+ &avb {
+ 	pinctrl-0 = <&avb_pins>;
+ 	pinctrl-names = "default";
++	phy-mode = "rgmii-rxid";
+ 	phy-handle = <&phy0>;
+ 	rx-internal-delay-ps = <1800>;
+ 	tx-internal-delay-ps = <2000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 8c821acb21ffb..da84be6f4715e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -599,7 +599,7 @@
+ 
+ 	gpu: gpu@ff300000 {
+ 		compatible = "rockchip,rk3328-mali", "arm,mali-450";
+-		reg = <0x0 0xff300000 0x0 0x40000>;
++		reg = <0x0 0xff300000 0x0 0x30000>;
+ 		interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+index d225e6a45d5cb..fdfe283b7a30d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+@@ -201,7 +201,7 @@
+ 		interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <3>;
+-		mbi-alias = <0x0 0xfd100000>;
++		mbi-alias = <0x0 0xfd410000>;
+ 		mbi-ranges = <296 24>;
+ 		msi-controller;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index e8a41d09b45f2..874cba75e9a5a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -606,10 +606,10 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		cdns,no-bar-match-nbits = <64>;
+-		vendor-id = /bits/ 16 <0x104c>;
+-		device-id = /bits/ 16 <0xb00f>;
++		vendor-id = <0x104c>;
++		device-id = <0xb00f>;
+ 		msi-map = <0x0 &gic_its 0x0 0x10000>;
+ 		dma-coherent;
+ 		ranges = <0x01000000 0x0 0x18001000  0x00 0x18001000  0x0 0x0010000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index cf3482376c1e6..08c8d1b47dcd9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -610,7 +610,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x0 0x10000>;
+@@ -636,7 +636,7 @@
+ 		clocks = <&k3_clks 239 1>;
+ 		clock-names = "fck";
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -658,7 +658,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x10000 0x10000>;
+@@ -684,7 +684,7 @@
+ 		clocks = <&k3_clks 240 1>;
+ 		clock-names = "fck";
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -706,7 +706,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x20000 0x10000>;
+@@ -732,7 +732,7 @@
+ 		clocks = <&k3_clks 241 1>;
+ 		clock-names = "fck";
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 	};
+ 
+@@ -754,7 +754,7 @@
+ 		clock-names = "fck";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		bus-range = <0x0 0xf>;
++		bus-range = <0x0 0xff>;
+ 		vendor-id = <0x104c>;
+ 		device-id = <0xb00d>;
+ 		msi-map = <0x0 &gic_its 0x30000 0x10000>;
+@@ -780,7 +780,7 @@
+ 		clocks = <&k3_clks 242 1>;
+ 		clock-names = "fck";
+ 		max-functions = /bits/ 8 <6>;
+-		max-virtual-functions = /bits/ 16 <4 4 4 4 0 0>;
++		max-virtual-functions = /bits/ 8 <4 4 4 4 0 0>;
+ 		dma-coherent;
+ 		#address-cells = <2>;
+ 		#size-cells = <2>;
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index 29f97eb3dad41..8f59bbeba7a7e 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -68,6 +68,7 @@
+ #define ESR_ELx_EC_MAX		(0x3F)
+ 
+ #define ESR_ELx_EC_SHIFT	(26)
++#define ESR_ELx_EC_WIDTH	(6)
+ #define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
+ #define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
+ 
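
For reference: the new ESR_ELx_EC_WIDTH exists so the hyp entry paths further down can switch from a bare 'lsr' to 'ubfx'. A plain right shift by 26 keeps every ESR bit above the EC field (for instance the ISS2 bits on newer CPUs), whereas ubfx extracts exactly six bits. A minimal C sketch of the same extraction, built only from the constants in this hunk:

    #include <stdint.h>

    #define ESR_ELx_EC_SHIFT  26
    #define ESR_ELx_EC_WIDTH  6
    #define ESR_ELx_EC_MASK   (((1ULL << ESR_ELx_EC_WIDTH) - 1) << ESR_ELx_EC_SHIFT)

    static inline uint64_t esr_ec(uint64_t esr)
    {
            /* Equivalent to 'ubfx x0, x0, #26, #6': exactly six bits out. */
            return (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;
    }
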
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index f09bf5c028919..16e53d2515089 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -67,9 +67,15 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+  * page table entry, taking care of 52-bit addresses.
+  */
+ #ifdef CONFIG_ARM64_PA_BITS_52
+-#define __pte_to_phys(pte)	\
+-	((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36))
+-#define __phys_to_pte_val(phys)	(((phys) | ((phys) >> 36)) & PTE_ADDR_MASK)
++static inline phys_addr_t __pte_to_phys(pte_t pte)
++{
++	return (pte_val(pte) & PTE_ADDR_LOW) |
++		((pte_val(pte) & PTE_ADDR_HIGH) << 36);
++}
++static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
++{
++	return (phys | (phys >> 36)) & PTE_ADDR_MASK;
++}
+ #else
+ #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
+ #define __phys_to_pte_val(phys)	(phys)
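
Turning these macros into static inline functions gives them real pte_t/phys_addr_t typing and evaluates the argument once; the bit arithmetic is unchanged. A self-contained sketch of that arithmetic, assuming the 64K-page 52-bit layout where PA bits 51:48 live in PTE bits 15:12 (the mask values below are illustrative, the real ones come from pgtable-hwdef.h):

    #include <stdint.h>

    #define PTE_ADDR_LOW   (((1ULL << 32) - 1) << 16)   /* PA bits 47:16 */
    #define PTE_ADDR_HIGH  (0xFULL << 12)               /* PA bits 51:48, stored low */
    #define PTE_ADDR_MASK  (PTE_ADDR_LOW | PTE_ADDR_HIGH)

    static inline uint64_t pte_to_phys(uint64_t pte)
    {
            /* PTE bits 15:12 carry PA bits 51:48; shift them back up by 36. */
            return (pte & PTE_ADDR_LOW) | ((pte & PTE_ADDR_HIGH) << 36);
    }

    static inline uint64_t phys_to_pte(uint64_t phys)
    {
            /* Fold PA bits 51:48 down into 15:12; the mask keeps both ranges. */
            return (phys | (phys >> 36)) & PTE_ADDR_MASK;
    }
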
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index b6517fd03d7b6..922355eb7eefa 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -251,7 +251,7 @@ struct task_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ void set_task_sctlr_el1(u64 sctlr);
+ 
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 92c99472d2c90..d935546e07a63 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -572,15 +572,19 @@ static const struct arm64_ftr_bits ftr_raz[] = {
+ 	ARM64_FTR_END,
+ };
+ 
+-#define ARM64_FTR_REG_OVERRIDE(id, table, ovr) {		\
++#define __ARM64_FTR_REG_OVERRIDE(id_str, id, table, ovr) {	\
+ 		.sys_id = id,					\
+ 		.reg = 	&(struct arm64_ftr_reg){		\
+-			.name = #id,				\
++			.name = id_str,				\
+ 			.override = (ovr),			\
+ 			.ftr_bits = &((table)[0]),		\
+ 	}}
+ 
+-#define ARM64_FTR_REG(id, table) ARM64_FTR_REG_OVERRIDE(id, table, &no_override)
++#define ARM64_FTR_REG_OVERRIDE(id, table, ovr)	\
++	__ARM64_FTR_REG_OVERRIDE(#id, id, table, ovr)
++
++#define ARM64_FTR_REG(id, table)		\
++	__ARM64_FTR_REG_OVERRIDE(#id, id, table, &no_override)
+ 
+ struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
+ struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;
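
The extra macro level works around a preprocessor subtlety: '#' does not expand its own argument, but an argument that first travels through another macro arrives already expanded, so the register name must be stringified at the outermost call site. A standalone illustration, all names hypothetical:

    #include <stdio.h>

    #define SYS_REG_X  0x1234                 /* stands in for a sys_reg() encoding */

    #define NAME_INNER(id)   #id              /* '#' does not expand its argument  */
    #define NAME_NESTED(id)  NAME_INNER(id)   /* ...but 'id' expands on the way in */

    int main(void)
    {
            printf("%s\n", NAME_INNER(SYS_REG_X));    /* prints "SYS_REG_X" */
            printf("%s\n", NAME_NESTED(SYS_REG_X));   /* prints "0x1234"    */
            return 0;
    }
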
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index c858b857c1ecf..46995c972ff5f 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -544,13 +544,11 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+ 	return last;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	struct stackframe frame;
+ 	unsigned long stack_page, ret = 0;
+ 	int count = 0;
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+ 
+ 	stack_page = (unsigned long)try_get_task_stack(p);
+ 	if (!stack_page)
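
This is one arch's share of a tree-wide conversion: the guard checks every architecture used to open-code move into a single core-kernel get_wchan() wrapper, which can also hold the task blocked while its stack is walked. Roughly, and simplified from the same upstream series:

    unsigned long get_wchan(struct task_struct *p)
    {
            unsigned long ret = 0;
            unsigned int state;

            if (!p || p == current)
                    return 0;

            /* Only report a wchan if the task is blocked and stays blocked. */
            raw_spin_lock_irq(&p->pi_lock);
            state = READ_ONCE(p->__state);
            smp_rmb(); /* see try_to_wake_up() */
            if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
                    ret = __get_wchan(p);
            raw_spin_unlock_irq(&p->pi_lock);

            return ret;
    }
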
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 3dba0c4f8f42b..764d1900d5aab 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -40,7 +40,8 @@ cc32-as-instr = $(call try-run,\
+ # As a result we set our own flags here.
+ 
+ # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
+-VDSO_CPPFLAGS := -DBUILD_VDSO -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
++VDSO_CPPFLAGS := -DBUILD_VDSO -D__KERNEL__ -nostdinc
++VDSO_CPPFLAGS += -isystem $(shell $(CC_COMPAT) -print-file-name=include 2>/dev/null)
+ VDSO_CPPFLAGS += $(LINUXINCLUDE)
+ 
+ # Common C and assembly flags
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index 9aa9b73475c95..b6b6801d96d5a 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -44,7 +44,7 @@
+ el1_sync:				// Guest trapped into EL2
+ 
+ 	mrs	x0, esr_el2
+-	lsr	x0, x0, #ESR_ELx_EC_SHIFT
++	ubfx	x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
+ 	cmp	x0, #ESR_ELx_EC_HVC64
+ 	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
+ 	b.ne	el1_trap
+diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
+index 4b652ffb591d4..d310d2b2c8b40 100644
+--- a/arch/arm64/kvm/hyp/nvhe/host.S
++++ b/arch/arm64/kvm/hyp/nvhe/host.S
+@@ -115,7 +115,7 @@ SYM_FUNC_END(__hyp_do_panic)
+ .L__vect_start\@:
+ 	stp	x0, x1, [sp, #-16]!
+ 	mrs	x0, esr_el2
+-	lsr	x0, x0, #ESR_ELx_EC_SHIFT
++	ubfx	x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
+ 	cmp	x0, #ESR_ELx_EC_HVC64
+ 	b.ne	__host_exit
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+index a6e874e61a40e..0bd7701ad1df5 100644
+--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
++++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+@@ -152,6 +152,7 @@ static inline void hyp_page_ref_inc(struct hyp_page *p)
+ 
+ static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+ {
++	BUG_ON(!p->refcount);
+ 	p->refcount--;
+ 	return (p->refcount == 0);
+ }
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 9ff0de1b2b93c..90d185853e341 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1499,6 +1499,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
+ 	if (ret)
+ 		__remove_pgd_mapping(swapper_pg_dir,
+ 				     __phys_to_virt(start), size);
++	else {
++		max_pfn = PFN_UP(start + size);
++		max_low_pfn = max_pfn;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 41c23f474ea63..803e7773fa869 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1136,6 +1136,11 @@ out:
+ 	return prog;
+ }
+ 
++u64 bpf_jit_alloc_exec_limit(void)
++{
++	return BPF_JIT_REGION_SIZE;
++}
++
+ void *bpf_jit_alloc_exec(unsigned long size)
+ {
+ 	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
+diff --git a/arch/csky/include/asm/processor.h b/arch/csky/include/asm/processor.h
+index 9e933021fe8e0..817dd60ff152d 100644
+--- a/arch/csky/include/asm/processor.h
++++ b/arch/csky/include/asm/processor.h
+@@ -81,7 +81,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ 
+ extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->usp)
+diff --git a/arch/csky/kernel/stacktrace.c b/arch/csky/kernel/stacktrace.c
+index 1b280ef080045..9f78f5d215117 100644
+--- a/arch/csky/kernel/stacktrace.c
++++ b/arch/csky/kernel/stacktrace.c
+@@ -111,12 +111,11 @@ static bool save_wchan(unsigned long pc, void *arg)
+ 	return false;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *task)
++unsigned long __get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ 
+-	if (likely(task && task != current && !task_is_running(task)))
+-		walk_stackframe(task, NULL, save_wchan, &pc);
++	walk_stackframe(task, NULL, save_wchan, &pc);
+ 	return pc;
+ }
+ 
+diff --git a/arch/h8300/include/asm/processor.h b/arch/h8300/include/asm/processor.h
+index a060b41b2d31c..141a23eb62b74 100644
+--- a/arch/h8300/include/asm/processor.h
++++ b/arch/h8300/include/asm/processor.h
+@@ -105,7 +105,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define	KSTK_EIP(tsk)	\
+ 	({			 \
+diff --git a/arch/h8300/kernel/process.c b/arch/h8300/kernel/process.c
+index 2ac27e4248a46..8833fa4f5d516 100644
+--- a/arch/h8300/kernel/process.c
++++ b/arch/h8300/kernel/process.c
+@@ -128,15 +128,12 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	stack_page = (unsigned long)p;
+ 	fp = ((struct pt_regs *)p->thread.ksp)->er6;
+ 	do {
+diff --git a/arch/hexagon/include/asm/processor.h b/arch/hexagon/include/asm/processor.h
+index 9f0cc99420bee..615f7e49968e6 100644
+--- a/arch/hexagon/include/asm/processor.h
++++ b/arch/hexagon/include/asm/processor.h
+@@ -64,7 +64,7 @@ struct thread_struct {
+ extern void release_thread(struct task_struct *dead_task);
+ 
+ /* Get wait channel for task P.  */
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ /*  The following stuff is pretty HEXAGON specific.  */
+ 
+diff --git a/arch/hexagon/kernel/process.c b/arch/hexagon/kernel/process.c
+index 6a6835fb42425..232dfd8956aa2 100644
+--- a/arch/hexagon/kernel/process.c
++++ b/arch/hexagon/kernel/process.c
+@@ -130,13 +130,11 @@ void flush_thread(void)
+  * is an identification of the point at which the scheduler
+  * was invoked by a blocked thread.
+  */
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+ 
+ 	stack_page = (unsigned long)task_stack_page(p);
+ 	fp = ((struct hexagon_switch_stack *)p->thread.switch_sp)->fp;
+diff --git a/arch/ia64/Kconfig.debug b/arch/ia64/Kconfig.debug
+index 40ca23bd228d6..2ce008e2d1644 100644
+--- a/arch/ia64/Kconfig.debug
++++ b/arch/ia64/Kconfig.debug
+@@ -39,7 +39,7 @@ config DISABLE_VHPT
+ 
+ config IA64_DEBUG_CMPXCHG
+ 	bool "Turn on compare-and-exchange bug checking (slow!)"
+-	depends on DEBUG_KERNEL
++	depends on DEBUG_KERNEL && PRINTK
+ 	help
+ 	  Selecting this option turns on bug checking for the IA-64
+ 	  compare-and-exchange instructions.  This is slow!  Itaniums
+diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
+index 2d8bcdc27d7f8..45365c2ef5983 100644
+--- a/arch/ia64/include/asm/processor.h
++++ b/arch/ia64/include/asm/processor.h
+@@ -330,7 +330,7 @@ struct task_struct;
+ #define release_thread(dead_task)
+ 
+ /* Get wait channel for task P.  */
+-extern unsigned long get_wchan (struct task_struct *p);
++extern unsigned long __get_wchan (struct task_struct *p);
+ 
+ /* Return instruction pointer of blocked task TSK.  */
+ #define KSTK_EIP(tsk)					\
+diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
+index 441ed04b10378..d4048518a1d7d 100644
+--- a/arch/ia64/kernel/kprobes.c
++++ b/arch/ia64/kernel/kprobes.c
+@@ -398,7 +398,8 @@ static void kretprobe_trampoline(void)
+ 
+ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+ {
+-	regs->cr_iip = __kretprobe_trampoline_handler(regs, kretprobe_trampoline, NULL);
++	regs->cr_iip = __kretprobe_trampoline_handler(regs,
++		dereference_function_descriptor(kretprobe_trampoline), NULL);
+ 	/*
+ 	 * By returning a non-zero value, we are telling
+ 	 * kprobe_handler() that we don't want the post_handler
+@@ -414,7 +415,7 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+ 	ri->fp = NULL;
+ 
+ 	/* Replace the return addr with trampoline addr */
+-	regs->b0 = ((struct fnptr *)kretprobe_trampoline)->ip;
++	regs->b0 = (unsigned long)dereference_function_descriptor(kretprobe_trampoline);
+ }
+ 
+ /* Check the instruction in the slot is break */
+@@ -902,14 +903,14 @@ static struct kprobe trampoline_p = {
+ int __init arch_init_kprobes(void)
+ {
+ 	trampoline_p.addr =
+-		(kprobe_opcode_t *)((struct fnptr *)kretprobe_trampoline)->ip;
++		dereference_function_descriptor(kretprobe_trampoline);
+ 	return register_kprobe(&trampoline_p);
+ }
+ 
+ int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+ {
+ 	if (p->addr ==
+-		(kprobe_opcode_t *)((struct fnptr *)kretprobe_trampoline)->ip)
++		dereference_function_descriptor(kretprobe_trampoline))
+ 		return 1;
+ 
+ 	return 0;
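
The kretprobe changes above all follow from ia64's function-descriptor ABI: taking a function's address yields a descriptor, not its first instruction, so comparing it directly against an unwound PC can never match. A conceptual sketch, with the layout taken from the 'struct fnptr' casts being removed:

    /* Conceptual only: descriptor layout as used by the old casts. */
    struct fnptr {
            unsigned long ip;   /* actual entry point of the function */
            unsigned long gp;   /* global pointer the callee expects  */
    };

    static unsigned long entry_of(const void *func)
    {
            /* Roughly what dereference_function_descriptor() returns. */
            return ((const struct fnptr *)func)->ip;
    }
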
+diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
+index e56d63f4abf9d..834df24a88f12 100644
+--- a/arch/ia64/kernel/process.c
++++ b/arch/ia64/kernel/process.c
+@@ -523,15 +523,12 @@ exit_thread (struct task_struct *tsk)
+ }
+ 
+ unsigned long
+-get_wchan (struct task_struct *p)
++__get_wchan (struct task_struct *p)
+ {
+ 	struct unw_frame_info info;
+ 	unsigned long ip;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	/*
+ 	 * Note: p may not be a blocked task (it could be current or
+ 	 * another process running on some other CPU.  Rather than
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index 6a07a68178856..a5222aaa19f67 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -203,6 +203,7 @@ config INIT_LCD
+ config MEMORY_RESERVE
+ 	int "Memory reservation (MiB)"
+ 	depends on (UCSIMM || UCDIMM)
++	default 0
+ 	help
+ 	  Reserve certain memory regions on 68x328 based boards.
+ 
+diff --git a/arch/m68k/include/asm/processor.h b/arch/m68k/include/asm/processor.h
+index 3750819ac5a13..bacec548cb3c6 100644
+--- a/arch/m68k/include/asm/processor.h
++++ b/arch/m68k/include/asm/processor.h
+@@ -125,7 +125,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define	KSTK_EIP(tsk)	\
+     ({			\
+diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
+index db49f90917112..d2357cba09abe 100644
+--- a/arch/m68k/kernel/process.c
++++ b/arch/m68k/kernel/process.c
+@@ -263,13 +263,11 @@ int dump_fpu (struct pt_regs *regs, struct user_m68kfp_struct *fpu)
+ }
+ EXPORT_SYMBOL(dump_fpu);
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+ 
+ 	stack_page = (unsigned long)task_stack_page(p);
+ 	fp = ((struct switch_stack *)p->thread.ksp)->a6;
+diff --git a/arch/microblaze/include/asm/processor.h b/arch/microblaze/include/asm/processor.h
+index 06c6e493590a2..7e9e92670df33 100644
+--- a/arch/microblaze/include/asm/processor.h
++++ b/arch/microblaze/include/asm/processor.h
+@@ -68,7 +68,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ /* The size allocated for kernel stacks. This _must_ be a power of two! */
+ # define KERNEL_STACK_SIZE	0x2000
+diff --git a/arch/microblaze/kernel/process.c b/arch/microblaze/kernel/process.c
+index 62aa237180b67..5e2b91c1e8ced 100644
+--- a/arch/microblaze/kernel/process.c
++++ b/arch/microblaze/kernel/process.c
+@@ -112,7 +112,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ /* TBD (used by procfs) */
+ 	return 0;
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 6dfb27d531dd7..d1fcb3e497a88 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1406,6 +1406,7 @@ config CPU_LOONGSON64
+ 	select MIPS_ASID_BITS_VARIABLE
+ 	select MIPS_PGD_C0_CONTEXT
+ 	select MIPS_L1_CACHE_SHIFT_6
++	select MIPS_FP_SUPPORT
+ 	select GPIOLIB
+ 	select SWIOTLB
+ 	select HAVE_KVM
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 653befc1b1761..0dfef0beaaaa1 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -254,7 +254,9 @@ endif
+ #
+ # Board-dependent options and extra files
+ #
++ifdef need-compiler
+ include arch/mips/Kbuild.platforms
++endif
+ 
+ ifdef CONFIG_PHYSICAL_START
+ load-y					= $(CONFIG_PHYSICAL_START)
+diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
+index 0b983800f48b7..66a8b293fd80b 100644
+--- a/arch/mips/include/asm/cmpxchg.h
++++ b/arch/mips/include/asm/cmpxchg.h
+@@ -249,6 +249,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ 	/* Load 64 bits from ptr */
+ 	"	" __SYNC(full, loongson3_war) "		\n"
+ 	"1:	lld	%L0, %3		# __cmpxchg64	\n"
++	"	.set	pop				\n"
+ 	/*
+ 	 * Split the 64 bit value we loaded into the 2 registers that hold the
+ 	 * ret variable.
+@@ -276,12 +277,14 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
+ 	"	or	%L1, %L1, $at			\n"
+ 	"	.set	at				\n"
+ #  endif
++	"	.set	push				\n"
++	"	.set	" MIPS_ISA_ARCH_LEVEL "		\n"
+ 	/* Attempt to store new at ptr */
+ 	"	scd	%L1, %2				\n"
+ 	/* If we failed, loop! */
+ 	"\t" __SC_BEQZ "%L1, 1b				\n"
+-	"	.set	pop				\n"
+ 	"2:	" __SYNC(full, loongson3_war) "		\n"
++	"	.set	pop				\n"
+ 	: "=&r"(ret),
+ 	  "=&r"(tmp),
+ 	  "=" GCC_OFF_SMALL_ASM() (*(unsigned long long *)ptr)
+diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
+index aeae2effa123d..23c67c0871b17 100644
+--- a/arch/mips/include/asm/mips-cm.h
++++ b/arch/mips/include/asm/mips-cm.h
+@@ -11,6 +11,7 @@
+ #ifndef __MIPS_ASM_MIPS_CM_H__
+ #define __MIPS_ASM_MIPS_CM_H__
+ 
++#include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/errno.h>
+ 
+@@ -153,8 +154,8 @@ GCR_ACCESSOR_RO(32, 0x030, rev)
+ #define CM_GCR_REV_MINOR			GENMASK(7, 0)
+ 
+ #define CM_ENCODE_REV(major, minor) \
+-		(((major) << __ffs(CM_GCR_REV_MAJOR)) | \
+-		 ((minor) << __ffs(CM_GCR_REV_MINOR)))
++		(FIELD_PREP(CM_GCR_REV_MAJOR, major) | \
++		 FIELD_PREP(CM_GCR_REV_MINOR, minor))
+ 
+ #define CM_REV_CM2				CM_ENCODE_REV(6, 0)
+ #define CM_REV_CM2_5				CM_ENCODE_REV(7, 0)
+@@ -362,10 +363,10 @@ static inline int mips_cm_revision(void)
+ static inline unsigned int mips_cm_max_vp_width(void)
+ {
+ 	extern int smp_num_siblings;
+-	uint32_t cfg;
+ 
+ 	if (mips_cm_revision() >= CM_REV_CM3)
+-		return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW;
++		return FIELD_GET(CM_GCR_SYS_CONFIG2_MAXVPW,
++				 read_gcr_sys_config2());
+ 
+ 	if (mips_cm_present()) {
+ 		/*
+@@ -373,8 +374,7 @@ static inline unsigned int mips_cm_max_vp_width(void)
+ 		 * number of VP(E)s, and if that ever changes then this will
+ 		 * need revisiting.
+ 		 */
+-		cfg = read_gcr_cl_config() & CM_GCR_Cx_CONFIG_PVPE;
+-		return (cfg >> __ffs(CM_GCR_Cx_CONFIG_PVPE)) + 1;
++		return FIELD_GET(CM_GCR_Cx_CONFIG_PVPE, read_gcr_cl_config()) + 1;
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_SMP))
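
These hunks replace open-coded mask-and-shift pairs with the <linux/bitfield.h> helpers, which derive the shift amount from the mask itself. Simplified stand-ins for how they behave (the real macros add compile-time checks and require constant masks):

    /* Simplified stand-ins for the <linux/bitfield.h> helpers. */
    #define GENMASK_ULL(h, l) \
            ((~0ULL << (l)) & (~0ULL >> (63 - (h))))

    #define FIELD_GET(mask, reg) \
            (((reg) & (mask)) >> __builtin_ctzll(mask))

    #define FIELD_PREP(mask, val) \
            (((unsigned long long)(val) << __builtin_ctzll(mask)) & (mask))

    /* e.g. minor = FIELD_GET(GENMASK_ULL(7, 0), read_gcr_rev()); */
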
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index 0c3550c82b726..252ed38ce8c5a 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -369,7 +369,7 @@ static inline void flush_thread(void)
+ {
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
+ 			 THREAD_SIZE - 32 - sizeof(struct pt_regs))
+diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
+index 90f1c3df1f0e4..b4f7d950c8468 100644
+--- a/arch/mips/kernel/mips-cm.c
++++ b/arch/mips/kernel/mips-cm.c
+@@ -221,8 +221,7 @@ static void mips_cm_probe_l2sync(void)
+ 	phys_addr_t addr;
+ 
+ 	/* L2-only sync was introduced with CM major revision 6 */
+-	major_rev = (read_gcr_rev() & CM_GCR_REV_MAJOR) >>
+-		__ffs(CM_GCR_REV_MAJOR);
++	major_rev = FIELD_GET(CM_GCR_REV_MAJOR, read_gcr_rev());
+ 	if (major_rev < 6)
+ 		return;
+ 
+@@ -306,13 +305,13 @@ void mips_cm_lock_other(unsigned int cluster, unsigned int core,
+ 	preempt_disable();
+ 
+ 	if (cm_rev >= CM_REV_CM3) {
+-		val = core << __ffs(CM3_GCR_Cx_OTHER_CORE);
+-		val |= vp << __ffs(CM3_GCR_Cx_OTHER_VP);
++		val = FIELD_PREP(CM3_GCR_Cx_OTHER_CORE, core) |
++		      FIELD_PREP(CM3_GCR_Cx_OTHER_VP, vp);
+ 
+ 		if (cm_rev >= CM_REV_CM3_5) {
+ 			val |= CM_GCR_Cx_OTHER_CLUSTER_EN;
+-			val |= cluster << __ffs(CM_GCR_Cx_OTHER_CLUSTER);
+-			val |= block << __ffs(CM_GCR_Cx_OTHER_BLOCK);
++			val |= FIELD_PREP(CM_GCR_Cx_OTHER_CLUSTER, cluster);
++			val |= FIELD_PREP(CM_GCR_Cx_OTHER_BLOCK, block);
+ 		} else {
+ 			WARN_ON(cluster != 0);
+ 			WARN_ON(block != CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+@@ -342,7 +341,7 @@ void mips_cm_lock_other(unsigned int cluster, unsigned int core,
+ 		spin_lock_irqsave(&per_cpu(cm_core_lock, curr_core),
+ 				  per_cpu(cm_core_lock_flags, curr_core));
+ 
+-		val = core << __ffs(CM_GCR_Cx_OTHER_CORENUM);
++		val = FIELD_PREP(CM_GCR_Cx_OTHER_CORENUM, core);
+ 	}
+ 
+ 	write_gcr_cl_other(val);
+@@ -386,8 +385,8 @@ void mips_cm_error_report(void)
+ 	cm_other = read_gcr_error_mult();
+ 
+ 	if (revision < CM_REV_CM3) { /* CM2 */
+-		cause = cm_error >> __ffs(CM_GCR_ERROR_CAUSE_ERRTYPE);
+-		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);
++		cause = FIELD_GET(CM_GCR_ERROR_CAUSE_ERRTYPE, cm_error);
++		ocause = FIELD_GET(CM_GCR_ERROR_MULT_ERR2ND, cm_other);
+ 
+ 		if (!cause)
+ 			return;
+@@ -445,8 +444,8 @@ void mips_cm_error_report(void)
+ 		ulong core_id_bits, vp_id_bits, cmd_bits, cmd_group_bits;
+ 		ulong cm3_cca_bits, mcp_bits, cm3_tr_bits, sched_bit;
+ 
+-		cause = cm_error >> __ffs64(CM3_GCR_ERROR_CAUSE_ERRTYPE);
+-		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);
++		cause = FIELD_GET(CM3_GCR_ERROR_CAUSE_ERRTYPE, cm_error);
++		ocause = FIELD_GET(CM_GCR_ERROR_MULT_ERR2ND, cm_other);
+ 
+ 		if (!cause)
+ 			return;
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 73c8e7990a973..637e6207e3500 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -511,7 +511,7 @@ static int __init frame_info_init(void)
+ 
+ 	/*
+ 	 * Without schedule() frame info, result given by
+-	 * thread_saved_pc() and get_wchan() are not reliable.
++	 * thread_saved_pc() and __get_wchan() are not reliable.
+ 	 */
+ 	if (schedule_mfi.pc_offset < 0)
+ 		printk("Can't analyze schedule() prologue at %p\n", schedule);
+@@ -652,9 +652,9 @@ unsigned long unwind_stack(struct task_struct *task, unsigned long *sp,
+ #endif
+ 
+ /*
+- * get_wchan - a maintenance nightmare^W^Wpain in the ass ...
++ * __get_wchan - a maintenance nightmare^W^Wpain in the ass ...
+  */
+-unsigned long get_wchan(struct task_struct *task)
++unsigned long __get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ #ifdef CONFIG_KALLSYMS
+@@ -662,8 +662,6 @@ unsigned long get_wchan(struct task_struct *task)
+ 	unsigned long ra = 0;
+ #endif
+ 
+-	if (!task || task == current || task_is_running(task))
+-		goto out;
+ 	if (!task_stack_page(task))
+ 		goto out;
+ 
+diff --git a/arch/mips/kernel/r2300_fpu.S b/arch/mips/kernel/r2300_fpu.S
+index 12e58053544fc..cbf6db98cfb38 100644
+--- a/arch/mips/kernel/r2300_fpu.S
++++ b/arch/mips/kernel/r2300_fpu.S
+@@ -29,8 +29,8 @@
+ #define EX2(a,b)						\
+ 9:	a,##b;							\
+ 	.section __ex_table,"a";				\
+-	PTR	9b,bad_stack;					\
+-	PTR	9b+4,bad_stack;					\
++	PTR	9b,fault;					\
++	PTR	9b+4,fault;					\
+ 	.previous
+ 
+ 	.set	mips1
+diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
+index 2afa3eef486a9..5512cd586e6e8 100644
+--- a/arch/mips/kernel/syscall.c
++++ b/arch/mips/kernel/syscall.c
+@@ -240,12 +240,3 @@ SYSCALL_DEFINE3(cachectl, char *, addr, int, nbytes, int, op)
+ {
+ 	return -ENOSYS;
+ }
+-
+-/*
+- * If we ever come here the user sp is bad.  Zap the process right away.
+- * Due to the bad stack signaling wouldn't work.
+- */
+-asmlinkage void bad_stack(void)
+-{
+-	do_exit(SIGSEGV);
+-}
+diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c
+index 63dccb2ed08b2..53fcc672a2944 100644
+--- a/arch/mips/lantiq/xway/dma.c
++++ b/arch/mips/lantiq/xway/dma.c
+@@ -11,6 +11,7 @@
+ #include <linux/export.h>
+ #include <linux/spinlock.h>
+ #include <linux/clk.h>
++#include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/of.h>
+ 
+@@ -30,6 +31,7 @@
+ #define LTQ_DMA_PCTRL		0x44
+ #define LTQ_DMA_IRNEN		0xf4
+ 
++#define DMA_ID_CHNR		GENMASK(26, 20)	/* channel number */
+ #define DMA_DESCPT		BIT(3)		/* descriptor complete irq */
+ #define DMA_TX			BIT(8)		/* TX channel direction */
+ #define DMA_CHAN_ON		BIT(0)		/* channel on / off bit */
+@@ -39,8 +41,11 @@
+ #define DMA_IRQ_ACK		0x7e		/* IRQ status register */
+ #define DMA_POLL		BIT(31)		/* turn on channel polling */
+ #define DMA_CLK_DIV4		BIT(6)		/* polling clock divider */
+-#define DMA_2W_BURST		BIT(1)		/* 2 word burst length */
+-#define DMA_MAX_CHANNEL		20		/* the soc has 20 channels */
++#define DMA_PCTRL_2W_BURST	0x1		/* 2 word burst length */
++#define DMA_PCTRL_4W_BURST	0x2		/* 4 word burst length */
++#define DMA_PCTRL_8W_BURST	0x3		/* 8 word burst length */
++#define DMA_TX_BURST_SHIFT	4		/* tx burst shift */
++#define DMA_RX_BURST_SHIFT	2		/* rx burst shift */
+ #define DMA_ETOP_ENDIANNESS	(0xf << 8) /* endianness swap etop channels */
+ #define DMA_WEIGHT	(BIT(17) | BIT(16))	/* default channel weight */
+ 
+@@ -191,7 +196,8 @@ ltq_dma_init_port(int p)
+ 		break;
+ 
+ 	case DMA_PORT_DEU:
+-		ltq_dma_w32((DMA_2W_BURST << 4) | (DMA_2W_BURST << 2),
++		ltq_dma_w32((DMA_PCTRL_2W_BURST << DMA_TX_BURST_SHIFT) |
++			(DMA_PCTRL_2W_BURST << DMA_RX_BURST_SHIFT),
+ 			LTQ_DMA_PCTRL);
+ 		break;
+ 
+@@ -206,7 +212,7 @@ ltq_dma_init(struct platform_device *pdev)
+ {
+ 	struct clk *clk;
+ 	struct resource *res;
+-	unsigned id;
++	unsigned int id, nchannels;
+ 	int i;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -222,21 +228,24 @@ ltq_dma_init(struct platform_device *pdev)
+ 	clk_enable(clk);
+ 	ltq_dma_w32_mask(0, DMA_RESET, LTQ_DMA_CTRL);
+ 
++	usleep_range(1, 10);
++
+ 	/* disable all interrupts */
+ 	ltq_dma_w32(0, LTQ_DMA_IRNEN);
+ 
+ 	/* reset/configure each channel */
+-	for (i = 0; i < DMA_MAX_CHANNEL; i++) {
++	id = ltq_dma_r32(LTQ_DMA_ID);
++	nchannels = ((id & DMA_ID_CHNR) >> 20);
++	for (i = 0; i < nchannels; i++) {
+ 		ltq_dma_w32(i, LTQ_DMA_CS);
+ 		ltq_dma_w32(DMA_CHAN_RST, LTQ_DMA_CCTRL);
+ 		ltq_dma_w32(DMA_POLL | DMA_CLK_DIV4, LTQ_DMA_CPOLL);
+ 		ltq_dma_w32_mask(DMA_CHAN_ON, 0, LTQ_DMA_CCTRL);
+ 	}
+ 
+-	id = ltq_dma_r32(LTQ_DMA_ID);
+ 	dev_info(&pdev->dev,
+ 		"Init done - hw rev: %X, ports: %d, channels: %d\n",
+-		id & 0x1f, (id >> 16) & 0xf, id >> 20);
++		id & 0x1f, (id >> 16) & 0xf, nchannels);
+ 
+ 	return 0;
+ }
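
Rather than assuming 20 channels, the driver now reads the count from bits 26:20 of the ID register before the per-channel reset loop. A small decode sketch with a made-up register value:

    #include <stdio.h>

    #define DMA_ID_CHNR  (0x7Fu << 20)   /* GENMASK(26, 20), as in the hunk */

    int main(void)
    {
            unsigned int id = 0x01651F05u;   /* hypothetical LTQ_DMA_ID readout */
            unsigned int nchannels = (id & DMA_ID_CHNR) >> 20;

            printf("hw rev: %X, ports: %u, channels: %u\n",
                   id & 0x1F, (id >> 16) & 0xFu, nchannels);
            return 0;
    }
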
+diff --git a/arch/nds32/include/asm/processor.h b/arch/nds32/include/asm/processor.h
+index b82369c7659d4..e6bfc74972bb3 100644
+--- a/arch/nds32/include/asm/processor.h
++++ b/arch/nds32/include/asm/processor.h
+@@ -83,7 +83,7 @@ extern struct task_struct *last_task_used_math;
+ /* Prepare to copy thread state - unlazy all lazy status */
+ #define prepare_to_copy(tsk)	do { } while (0)
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define cpu_relax()			barrier()
+ 
+diff --git a/arch/nds32/kernel/process.c b/arch/nds32/kernel/process.c
+index 391895b54d13c..49fab9e39cbff 100644
+--- a/arch/nds32/kernel/process.c
++++ b/arch/nds32/kernel/process.c
+@@ -233,15 +233,12 @@ int dump_fpu(struct pt_regs *regs, elf_fpregset_t * fpu)
+ 
+ EXPORT_SYMBOL(dump_fpu);
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, lr;
+ 	unsigned long stack_start, stack_end;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	if (IS_ENABLED(CONFIG_FRAME_POINTER)) {
+ 		stack_start = (unsigned long)end_of_stack(p);
+ 		stack_end = (unsigned long)task_stack_page(p) + THREAD_SIZE;
+@@ -258,5 +255,3 @@ unsigned long get_wchan(struct task_struct *p)
+ 	}
+ 	return 0;
+ }
+-
+-EXPORT_SYMBOL(get_wchan);
+diff --git a/arch/nios2/include/asm/processor.h b/arch/nios2/include/asm/processor.h
+index 94bcb86f679f5..b8125dfbcad2d 100644
+--- a/arch/nios2/include/asm/processor.h
++++ b/arch/nios2/include/asm/processor.h
+@@ -69,7 +69,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define task_pt_regs(p) \
+ 	((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
+diff --git a/arch/nios2/kernel/process.c b/arch/nios2/kernel/process.c
+index 9ff37ba2bb603..f8ea522a15880 100644
+--- a/arch/nios2/kernel/process.c
++++ b/arch/nios2/kernel/process.c
+@@ -217,15 +217,12 @@ void dump(struct pt_regs *fp)
+ 	pr_emerg("\n\n");
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	stack_page = (unsigned long)p;
+ 	fp = ((struct switch_stack *)p->thread.ksp)->fp;	/* ;dgt2 */
+ 	do {
+diff --git a/arch/openrisc/include/asm/processor.h b/arch/openrisc/include/asm/processor.h
+index ad53b31848859..aa1699c18add8 100644
+--- a/arch/openrisc/include/asm/processor.h
++++ b/arch/openrisc/include/asm/processor.h
+@@ -73,7 +73,7 @@ struct thread_struct {
+ 
+ void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
+ void release_thread(struct task_struct *);
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define cpu_relax()     barrier()
+ 
+diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
+index 1b16d97e7da7f..a82b2caaa560d 100644
+--- a/arch/openrisc/kernel/dma.c
++++ b/arch/openrisc/kernel/dma.c
+@@ -33,7 +33,7 @@ page_set_nocache(pte_t *pte, unsigned long addr,
+ 	 * Flush the page out of the TLB so that the new page flags get
+ 	 * picked up next time there's an access
+ 	 */
+-	flush_tlb_page(NULL, addr);
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ 
+ 	/* Flush page out of dcache */
+ 	for (cl = __pa(addr); cl < __pa(next); cl += cpuinfo->dcache_block_size)
+@@ -56,7 +56,7 @@ page_clear_nocache(pte_t *pte, unsigned long addr,
+ 	 * Flush the page out of the TLB so that the new page flags get
+ 	 * picked up next time there's an access
+ 	 */
+-	flush_tlb_page(NULL, addr);
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ 
+ 	return 0;
+ }
+diff --git a/arch/openrisc/kernel/process.c b/arch/openrisc/kernel/process.c
+index eb62429681fc8..eeea6d54b198c 100644
+--- a/arch/openrisc/kernel/process.c
++++ b/arch/openrisc/kernel/process.c
+@@ -265,7 +265,7 @@ void dump_elf_thread(elf_greg_t *dest, struct pt_regs* regs)
+ 	dest[35] = 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	/* TODO */
+ 
+diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
+index 415e209732a3d..ba78766cf00b5 100644
+--- a/arch/openrisc/kernel/smp.c
++++ b/arch/openrisc/kernel/smp.c
+@@ -272,7 +272,7 @@ static inline void ipi_flush_tlb_range(void *info)
+ 	local_flush_tlb_range(NULL, fd->addr1, fd->addr2);
+ }
+ 
+-static void smp_flush_tlb_range(struct cpumask *cmask, unsigned long start,
++static void smp_flush_tlb_range(const struct cpumask *cmask, unsigned long start,
+ 				unsigned long end)
+ {
+ 	unsigned int cpuid;
+@@ -320,7 +320,9 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+ void flush_tlb_range(struct vm_area_struct *vma,
+ 		     unsigned long start, unsigned long end)
+ {
+-	smp_flush_tlb_range(mm_cpumask(vma->vm_mm), start, end);
++	const struct cpumask *cmask = vma ? mm_cpumask(vma->vm_mm)
++					  : cpu_online_mask;
++	smp_flush_tlb_range(cmask, start, end);
+ }
+ 
+ /* Instruction cache invalidate - performed on each cpu */
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 43937af127b11..1f2fea3bfacdc 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -76,6 +76,8 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ 	purge_tlb_end(flags);
+ }
+ 
++extern void __update_cache(pte_t pte);
++
+ /* Certain architectures need to do special things when PTEs
+  * within a page table are directly modified.  Thus, the following
+  * hook is made available.
+@@ -83,11 +85,14 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+ #define set_pte(pteptr, pteval)			\
+ 	do {					\
+ 		*(pteptr) = (pteval);		\
+-		barrier();			\
++		mb();				\
+ 	} while(0)
+ 
+ #define set_pte_at(mm, addr, pteptr, pteval)	\
+ 	do {					\
++		if (pte_present(pteval) &&	\
++		    pte_user(pteval))		\
++			__update_cache(pteval);	\
+ 		*(pteptr) = (pteval);		\
+ 		purge_tlb_entries(mm, addr);	\
+ 	} while (0)
+@@ -303,6 +308,7 @@ extern unsigned long *empty_zero_page;
+ 
+ #define pte_none(x)     (pte_val(x) == 0)
+ #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
++#define pte_user(x)	(pte_val(x) & _PAGE_USER)
+ #define pte_clear(mm, addr, xp)  set_pte_at(mm, addr, xp, __pte(0))
+ 
+ #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
+@@ -410,7 +416,7 @@ extern void paging_init (void);
+ 
+ #define PG_dcache_dirty         PG_arch_1
+ 
+-extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
++#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep)
+ 
+ /* Encode and de-code a swap entry */
+ 
+diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
+index b5fbcd2c17808..5e5ceb5b9631f 100644
+--- a/arch/parisc/include/asm/processor.h
++++ b/arch/parisc/include/asm/processor.h
+@@ -277,7 +277,7 @@ struct mm_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)	((tsk)->thread.regs.iaoq[0])
+ #define KSTK_ESP(tsk)	((tsk)->thread.regs.gr[30])
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 86a1a63563fd5..c81ab0cb89255 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -83,9 +83,9 @@ EXPORT_SYMBOL(flush_cache_all_local);
+ #define pfn_va(pfn)	__va(PFN_PHYS(pfn))
+ 
+ void
+-update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
++__update_cache(pte_t pte)
+ {
+-	unsigned long pfn = pte_pfn(*ptep);
++	unsigned long pfn = pte_pfn(pte);
+ 	struct page *page;
+ 
+ 	/* We don't have pte special.  As a result, we can be called with
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 9f939afe6b88c..2716e58b498bb 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1834,7 +1834,7 @@ syscall_restore:
+ 	LDREG	TI_TASK-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1
+ 
+ 	/* Are we being ptraced? */
+-	ldw	TASK_FLAGS(%r1),%r19
++	LDREG	TI_FLAGS-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r19
+ 	ldi	_TIF_SYSCALL_TRACE_MASK,%r2
+ 	and,COND(=)	%r19,%r2,%r0
+ 	b,n	syscall_restore_rfi
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 184ec3c1eae44..05e89d4fa911a 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -243,15 +243,12 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
+ }
+ 
+ unsigned long
+-get_wchan(struct task_struct *p)
++__get_wchan(struct task_struct *p)
+ {
+ 	struct unwind_frame_info info;
+ 	unsigned long ip;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	/*
+ 	 * These bracket the sleeping functions..
+ 	 */
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 1405b603b91b6..cf92ece20b757 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -29,6 +29,7 @@
+ #include <linux/bitops.h>
+ #include <linux/ftrace.h>
+ #include <linux/cpu.h>
++#include <linux/kgdb.h>
+ 
+ #include <linux/atomic.h>
+ #include <asm/current.h>
+@@ -69,7 +70,10 @@ enum ipi_message_type {
+ 	IPI_CALL_FUNC,
+ 	IPI_CPU_START,
+ 	IPI_CPU_STOP,
+-	IPI_CPU_TEST
++	IPI_CPU_TEST,
++#ifdef CONFIG_KGDB
++	IPI_ENTER_KGDB,
++#endif
+ };
+ 
+ 
+@@ -167,7 +171,12 @@ ipi_interrupt(int irq, void *dev_id)
+ 			case IPI_CPU_TEST:
+ 				smp_debug(100, KERN_DEBUG "CPU%d is alive!\n", this_cpu);
+ 				break;
+-
++#ifdef CONFIG_KGDB
++			case IPI_ENTER_KGDB:
++				smp_debug(100, KERN_DEBUG "CPU%d ENTER_KGDB\n", this_cpu);
++				kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
++				break;
++#endif
+ 			default:
+ 				printk(KERN_CRIT "Unknown IPI num on CPU%d: %lu\n",
+ 					this_cpu, which);
+@@ -226,6 +235,12 @@ send_IPI_allbutself(enum ipi_message_type op)
+ 	}
+ }
+ 
++#ifdef CONFIG_KGDB
++void kgdb_roundup_cpus(void)
++{
++	send_IPI_allbutself(IPI_ENTER_KGDB);
++}
++#endif
+ 
+ inline void 
+ smp_send_stop(void)	{ send_IPI_allbutself(IPI_CPU_STOP); }
+diff --git a/arch/parisc/kernel/unwind.c b/arch/parisc/kernel/unwind.c
+index 87ae476d1c4f5..86a57fb0e6fae 100644
+--- a/arch/parisc/kernel/unwind.c
++++ b/arch/parisc/kernel/unwind.c
+@@ -21,6 +21,8 @@
+ #include <asm/ptrace.h>
+ 
+ #include <asm/unwind.h>
++#include <asm/switch_to.h>
++#include <asm/sections.h>
+ 
+ /* #define DEBUG 1 */
+ #ifdef DEBUG
+@@ -203,6 +205,11 @@ int __init unwind_init(void)
+ 	return 0;
+ }
+ 
++static bool pc_is_kernel_fn(unsigned long pc, void *fn)
++{
++	return (unsigned long)dereference_kernel_function_descriptor(fn) == pc;
++}
++
+ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int frame_size)
+ {
+ 	/*
+@@ -221,7 +228,7 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 	extern void * const _call_on_stack;
+ #endif /* CONFIG_IRQSTACKS */
+ 
+-	if (pc == (unsigned long) &handle_interruption) {
++	if (pc_is_kernel_fn(pc, handle_interruption)) {
+ 		struct pt_regs *regs = (struct pt_regs *)(info->sp - frame_size - PT_SZ_ALGN);
+ 		dbg("Unwinding through handle_interruption()\n");
+ 		info->prev_sp = regs->gr[30];
+@@ -229,13 +236,13 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &ret_from_kernel_thread ||
+-	    pc == (unsigned long) &syscall_exit) {
++	if (pc_is_kernel_fn(pc, ret_from_kernel_thread) ||
++	    pc_is_kernel_fn(pc, syscall_exit)) {
+ 		info->prev_sp = info->prev_ip = 0;
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &intr_return) {
++	if (pc_is_kernel_fn(pc, intr_return)) {
+ 		struct pt_regs *regs;
+ 
+ 		dbg("Found intr_return()\n");
+@@ -246,20 +253,20 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 		return 1;
+ 	}
+ 
+-	if (pc == (unsigned long) &_switch_to_ret) {
++	if (pc_is_kernel_fn(pc, _switch_to) ||
++	    pc_is_kernel_fn(pc, _switch_to_ret)) {
+ 		info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE;
+ 		info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+ 		return 1;
+ 	}
+ 
+ #ifdef CONFIG_IRQSTACKS
+-	if (pc == (unsigned long) &_call_on_stack) {
++	if (pc_is_kernel_fn(pc, _call_on_stack)) {
+ 		info->prev_sp = *(unsigned long *)(info->sp - FRAME_SIZE - REG_SZ);
+ 		info->prev_ip = *(unsigned long *)(info->sp - FRAME_SIZE - RP_OFFSET);
+ 		return 1;
+ 	}
+ #endif
+-
+ 	return 0;
+ }
+ 
+diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
+index 2769eb991f58d..3d208afd15bc6 100644
+--- a/arch/parisc/kernel/vmlinux.lds.S
++++ b/arch/parisc/kernel/vmlinux.lds.S
+@@ -57,6 +57,8 @@ SECTIONS
+ {
+ 	. = KERNEL_BINARY_TEXT_START;
+ 
++	_stext = .;	/* start of kernel text, includes init code & data */
++
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
+ 	MLONGCALL_DISCARD(INIT_TEXT_SECTION(8))
+@@ -80,7 +82,6 @@ SECTIONS
+ 	/* freed after init ends here */
+ 
+ 	_text = .;		/* Text and read-only data */
+-	_stext = .;
+ 	MLONGCALL_KEEP(INIT_TEXT_SECTION(8))
+ 	.text ALIGN(PAGE_SIZE) : {
+ 		TEXT_TEXT
+diff --git a/arch/parisc/mm/fixmap.c b/arch/parisc/mm/fixmap.c
+index 24426a7e1a5e5..cc15d737fda64 100644
+--- a/arch/parisc/mm/fixmap.c
++++ b/arch/parisc/mm/fixmap.c
+@@ -20,12 +20,9 @@ void notrace set_fixmap(enum fixed_addresses idx, phys_addr_t phys)
+ 	pte_t *pte;
+ 
+ 	if (pmd_none(*pmd))
+-		pmd = pmd_alloc(NULL, pud, vaddr);
+-
+-	pte = pte_offset_kernel(pmd, vaddr);
+-	if (pte_none(*pte))
+ 		pte = pte_alloc_kernel(pmd, vaddr);
+ 
++	pte = pte_offset_kernel(pmd, vaddr);
+ 	set_pte_at(&init_mm, vaddr, pte, __mk_pte(phys, PAGE_KERNEL_RWX));
+ 	flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
+ }
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 591a4e9394153..bf33f4b0de40b 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -842,9 +842,9 @@ void flush_tlb_all(void)
+ {
+ 	int do_recycle;
+ 
+-	__inc_irq_stat(irq_tlb_count);
+ 	do_recycle = 0;
+ 	spin_lock(&sid_lock);
++	__inc_irq_stat(irq_tlb_count);
+ 	if (dirty_space_ids > RECYCLE_THRESHOLD) {
+ 	    BUG_ON(recycle_inuse);  /* FIXME: Use a semaphore/wait queue here */
+ 	    get_dirty_sids(&recycle_ndirty,recycle_dirty_array);
+@@ -863,8 +863,8 @@ void flush_tlb_all(void)
+ #else
+ void flush_tlb_all(void)
+ {
+-	__inc_irq_stat(irq_tlb_count);
+ 	spin_lock(&sid_lock);
++	__inc_irq_stat(irq_tlb_count);
+ 	flush_tlb_all_local(NULL);
+ 	recycle_sids();
+ 	spin_unlock(&sid_lock);
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 663766fbf5055..d4d274bb07ffa 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -141,7 +141,7 @@ config PPC
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+ 	select ARCH_HAS_SET_MEMORY
+-	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
++	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC_BOOK3S || PPC_8xx || 40x) && !HIBERNATION
+ 	select ARCH_HAS_STRICT_MODULE_RWX	if ARCH_HAS_STRICT_KERNEL_RWX && !PPC_BOOK3S_32
+ 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAS_UACCESS_FLUSHCACHE
+@@ -153,7 +153,7 @@ config PPC
+ 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
+ 	select ARCH_STACKWALK
+ 	select ARCH_SUPPORTS_ATOMIC_RMW
+-	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC32 || PPC_BOOK3S_64
++	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC_BOOK3S || PPC_8xx || 40x
+ 	select ARCH_USE_BUILTIN_BSWAP
+ 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
+ 	select ARCH_USE_MEMTEST
+@@ -194,7 +194,7 @@ config PPC
+ 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
+ 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
+ 	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
+-	select HAVE_ARCH_KFENCE			if PPC32
++	select HAVE_ARCH_KFENCE			if PPC_BOOK3S_32 || PPC_8xx || 40x
+ 	select HAVE_ARCH_KGDB
+ 	select HAVE_ARCH_MMAP_RND_BITS
+ 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
+diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
+index f06ae00f2a65e..d6ba821a56ced 100644
+--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
+@@ -193,10 +193,12 @@ static inline pte_t pte_wrprotect(pte_t pte)
+ }
+ #endif
+ 
++#ifndef pte_mkexec
+ static inline pte_t pte_mkexec(pte_t pte)
+ {
+ 	return __pte(pte_val(pte) | _PAGE_EXEC);
+ }
++#endif
+ 
+ #define pmd_none(pmd)		(!pmd_val(pmd))
+ #define	pmd_bad(pmd)		(pmd_val(pmd) & _PMD_BAD)
+@@ -306,30 +308,29 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+ }
+ 
+ #define __HAVE_ARCH_PTEP_SET_WRPROTECT
++#ifndef ptep_set_wrprotect
+ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
+ 				      pte_t *ptep)
+ {
+-	unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
+-	unsigned long set = pte_val(pte_wrprotect(__pte(0)));
+-
+-	pte_update(mm, addr, ptep, clr, set, 0);
++	pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
+ }
++#endif
+ 
++#ifndef __ptep_set_access_flags
+ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+ 					   pte_t *ptep, pte_t entry,
+ 					   unsigned long address,
+ 					   int psize)
+ {
+-	pte_t pte_set = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(0)))));
+-	pte_t pte_clr = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(~0)))));
+-	unsigned long set = pte_val(entry) & pte_val(pte_set);
+-	unsigned long clr = ~pte_val(entry) & ~pte_val(pte_clr);
++	unsigned long set = pte_val(entry) &
++			    (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
+ 	int huge = psize > mmu_virtual_psize ? 1 : 0;
+ 
+-	pte_update(vma->vm_mm, address, ptep, clr, set, huge);
++	pte_update(vma->vm_mm, address, ptep, 0, set, huge);
+ 
+ 	flush_tlb_page(vma, address);
+ }
++#endif
+ 
+ static inline int pte_young(pte_t pte)
+ {
+diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+index fcc48d590d888..1a89ebdc3acc9 100644
+--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
++++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+@@ -136,6 +136,28 @@ static inline pte_t pte_mkhuge(pte_t pte)
+ 
+ #define pte_mkhuge pte_mkhuge
+ 
++static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, pte_t *p,
++				     unsigned long clr, unsigned long set, int huge);
++
++static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
++{
++	pte_update(mm, addr, ptep, 0, _PAGE_RO, 0);
++}
++#define ptep_set_wrprotect ptep_set_wrprotect
++
++static inline void __ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
++					   pte_t entry, unsigned long address, int psize)
++{
++	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_EXEC);
++	unsigned long clr = ~pte_val(entry) & _PAGE_RO;
++	int huge = psize > mmu_virtual_psize ? 1 : 0;
++
++	pte_update(vma->vm_mm, address, ptep, clr, set, huge);
++
++	flush_tlb_page(vma, address);
++}
++#define __ptep_set_access_flags __ptep_set_access_flags
++
+ static inline unsigned long pgd_leaf_size(pgd_t pgd)
+ {
+ 	if (pgd_val(pgd) & _PMD_PAGE_8M)
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index d081704b13fb9..9d2905a474103 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -118,11 +118,6 @@ static inline pte_t pte_wrprotect(pte_t pte)
+ 	return __pte(pte_val(pte) & ~_PAGE_RW);
+ }
+ 
+-static inline pte_t pte_mkexec(pte_t pte)
+-{
+-	return __pte(pte_val(pte) | _PAGE_EXEC);
+-}
+-
+ #define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
+ #define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)
+ 
+diff --git a/arch/powerpc/include/asm/nohash/pte-book3e.h b/arch/powerpc/include/asm/nohash/pte-book3e.h
+index 813918f407653..f798640422c2d 100644
+--- a/arch/powerpc/include/asm/nohash/pte-book3e.h
++++ b/arch/powerpc/include/asm/nohash/pte-book3e.h
+@@ -48,7 +48,7 @@
+ #define _PAGE_WRITETHRU	0x800000 /* W: cache write-through */
+ 
+ /* "Higher level" linux bit combinations */
+-#define _PAGE_EXEC		_PAGE_BAP_UX /* .. and was cache cleaned */
++#define _PAGE_EXEC		(_PAGE_BAP_SX | _PAGE_BAP_UX) /* .. and was cache cleaned */
+ #define _PAGE_RW		(_PAGE_BAP_SW | _PAGE_BAP_UW) /* User write permission */
+ #define _PAGE_KERNEL_RW		(_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY)
+ #define _PAGE_KERNEL_RO		(_PAGE_BAP_SR)
+@@ -93,11 +93,11 @@
+ /* Permission masks used to generate the __P and __S table */
+ #define PAGE_NONE	__pgprot(_PAGE_BASE)
+ #define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
+-#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
++#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_BAP_UX)
+ #define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
+-#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
++#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_BAP_UX)
+ #define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
+-#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
++#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_BAP_UX)
+ 
+ #ifndef __ASSEMBLY__
+ static inline pte_t pte_mkprivileged(pte_t pte)
+@@ -113,6 +113,16 @@ static inline pte_t pte_mkuser(pte_t pte)
+ }
+ 
+ #define pte_mkuser pte_mkuser
++
++static inline pte_t pte_mkexec(pte_t pte)
++{
++	if (pte_val(pte) & _PAGE_BAP_UR)
++		return __pte((pte_val(pte) & ~_PAGE_BAP_SX) | _PAGE_BAP_UX);
++	else
++		return __pte((pte_val(pte) & ~_PAGE_BAP_UX) | _PAGE_BAP_SX);
++}
++#define pte_mkexec pte_mkexec
++
+ #endif /* __ASSEMBLY__ */
+ 
+ #endif /* __KERNEL__ */
+diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
+index bcb7b5f917be6..b325022ffa2b0 100644
+--- a/arch/powerpc/include/asm/paravirt.h
++++ b/arch/powerpc/include/asm/paravirt.h
+@@ -97,7 +97,23 @@ static inline bool vcpu_is_preempted(int cpu)
+ 
+ #ifdef CONFIG_PPC_SPLPAR
+ 	if (!is_kvm_guest()) {
+-		int first_cpu = cpu_first_thread_sibling(smp_processor_id());
++		int first_cpu;
++
++		/*
++		 * The result of vcpu_is_preempted() is used in a
++		 * speculative way, and is always subject to invalidation
++		 * by events internal and external to Linux. While we can
++		 * be called in preemptable context (in the Linux sense),
++		 * we're not accessing per-cpu resources in a way that can
++		 * race destructively with Linux scheduler preemption and
++		 * migration, and callers can tolerate the potential for
++		 * error introduced by sampling the CPU index without
++		 * pinning the task to it. So it is permissible to use
++		 * raw_smp_processor_id() here to defeat the preempt debug
++		 * warnings that can arise from using smp_processor_id()
++		 * in arbitrary contexts.
++		 */
++		first_cpu = cpu_first_thread_sibling(raw_smp_processor_id());
+ 
+ 		/*
+ 		 * Preemption can only happen at core granularity. This CPU
+diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
+index f348e564f7dd5..e39bd0ff69f3a 100644
+--- a/arch/powerpc/include/asm/processor.h
++++ b/arch/powerpc/include/asm/processor.h
+@@ -300,7 +300,7 @@ struct thread_struct {
+ 
+ #define task_pt_regs(tsk)	((tsk)->thread.regs)
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
+ #define KSTK_ESP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->gpr[1]: 0)
+diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
+index c7022c41cc314..20328f72f9f2b 100644
+--- a/arch/powerpc/kernel/firmware.c
++++ b/arch/powerpc/kernel/firmware.c
+@@ -31,11 +31,10 @@ int __init check_kvm_guest(void)
+ 	if (!hyper_node)
+ 		return 0;
+ 
+-	if (!of_device_is_compatible(hyper_node, "linux,kvm"))
+-		return 0;
+-
+-	static_branch_enable(&kvm_guest);
++	if (of_device_is_compatible(hyper_node, "linux,kvm"))
++		static_branch_enable(&kvm_guest);
+ 
++	of_node_put(hyper_node);
+ 	return 0;
+ }
+ core_initcall(check_kvm_guest); // before kvm_guest_init()
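The of_node_put() added above follows the standard device-tree refcount
rule: every of_find_*() helper returns a node with its refcount elevated,
and the caller must drop it on every exit path. The pattern in isolation
(a sketch; do_something() is a hypothetical placeholder):

	struct device_node *np = of_find_node_by_path("/hypervisor");

	if (!np)
		return 0;

	if (of_device_is_compatible(np, "linux,kvm"))
		do_something();		/* hypothetical work */

	of_node_put(np);	/* drop the reference unconditionally */
	return 0;
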
+diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
+index e5503420b6c6d..ef8d1b1c234e7 100644
+--- a/arch/powerpc/kernel/head_booke.h
++++ b/arch/powerpc/kernel/head_booke.h
+@@ -465,12 +465,21 @@ label:
+ 	bl	do_page_fault;						      \
+ 	b	interrupt_return
+ 
++/*
++ * Instruction TLB Error interrupt handlers may call InstructionStorage
++ * directly without clearing ESR, so the ESR at this point may be left over
++ * from a prior interrupt.
++ *
++ * In any case, do_page_fault for BOOK3E does not use ESR and always expects
++ * dsisr to be 0. ESR_DST from a prior store in particular would confuse fault
++ * handling.
++ */
+ #define INSTRUCTION_STORAGE_EXCEPTION					      \
+ 	START_EXCEPTION(InstructionStorage)				      \
+-	NORMAL_EXCEPTION_PROLOG(0x400, INST_STORAGE);		      \
+-	mfspr	r5,SPRN_ESR;		/* Grab the ESR and save it */	      \
++	NORMAL_EXCEPTION_PROLOG(0x400, INST_STORAGE);			      \
++	li	r5,0;			/* Store 0 in regs->esr (dsisr) */    \
+ 	stw	r5,_ESR(r11);						      \
+-	stw	r12, _DEAR(r11);	/* Pass SRR0 as arg2 */		      \
++	stw	r12, _DEAR(r11);	/* Set regs->dear (dar) to SRR0 */    \
+ 	prepare_transfer_to_handler;					      \
+ 	bl	do_page_fault;						      \
+ 	b	interrupt_return
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index ec4e2d3635077..795bf682ec714 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -268,7 +268,7 @@ static void check_return_regs_valid(struct pt_regs *regs)
+ 	if (trap_is_scv(regs))
+ 		return;
+ 
+-	trap = regs->trap;
++	trap = TRAP(regs);
+ 	// EE in HV mode sets HSRRs like 0xea0
+ 	if (cpu_has_feature(CPU_FTR_HVMODE) && trap == INTERRUPT_EXTERNAL)
+ 		trap = 0xea0;
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 185beb2905801..247ef0b9bfa4e 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -2111,14 +2111,11 @@ int validate_sp(unsigned long sp, struct task_struct *p,
+ 
+ EXPORT_SYMBOL(validate_sp);
+ 
+-static unsigned long __get_wchan(struct task_struct *p)
++static unsigned long ___get_wchan(struct task_struct *p)
+ {
+ 	unsigned long ip, sp;
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	sp = p->thread.ksp;
+ 	if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD))
+ 		return 0;
+@@ -2137,14 +2134,14 @@ static unsigned long __get_wchan(struct task_struct *p)
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long ret;
+ 
+ 	if (!try_get_task_stack(p))
+ 		return 0;
+ 
+-	ret = __get_wchan(p);
++	ret = ___get_wchan(p);
+ 
+ 	put_task_stack(p);
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index af822f09785ff..29d3b76d71ec8 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3687,7 +3687,20 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
+ 
+ 	kvmppc_set_host_core(pcpu);
+ 
+-	guest_exit_irqoff();
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
+ 
+ 	local_irq_enable();
+ 
+@@ -4462,7 +4475,20 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ 
+ 	kvmppc_set_host_core(pcpu);
+ 
+-	guest_exit_irqoff();
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
+ 
+ 	local_irq_enable();
+ 
+diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
+index 551b30d84aeeb..0f9a25544bf95 100644
+--- a/arch/powerpc/kvm/booke.c
++++ b/arch/powerpc/kvm/booke.c
+@@ -1047,7 +1047,21 @@ int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+ 	}
+ 
+ 	trace_kvm_exit(exit_nr, vcpu);
+-	guest_exit_irqoff();
++
++	context_tracking_guest_exit();
++	if (!vtime_accounting_enabled_this_cpu()) {
++		local_irq_enable();
++		/*
++		 * Service IRQs here before vtime_account_guest_exit() so any
++		 * ticks that occurred while running the guest are accounted to
++		 * the guest. If vtime accounting is enabled, accounting uses
++		 * TB rather than ticks, so it can be done without enabling
++		 * interrupts here, which has the problem that it accounts
++		 * interrupt processing overhead to the host.
++		 */
++		local_irq_disable();
++	}
++	vtime_account_guest_exit();
+ 
+ 	local_irq_enable();
+ 
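The same exit sequence is open-coded in all three KVM paths above. A
sketch of it as one helper (the helper itself is hypothetical; the
functions it calls are the ones used in the hunks):

	static void guest_exit_accounting(void)
	{
		context_tracking_guest_exit();	/* leave guest context */

		if (!vtime_accounting_enabled_this_cpu()) {
			/*
			 * Tick-based accounting: briefly open an IRQ
			 * window so pending timer ticks are still charged
			 * to guest time before the interval closes below.
			 */
			local_irq_enable();
			local_irq_disable();
		}

		vtime_account_guest_exit();	/* close guest interval */
	}
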
+diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
+index cda17bee5afea..c3e06922468b3 100644
+--- a/arch/powerpc/lib/feature-fixups.c
++++ b/arch/powerpc/lib/feature-fixups.c
+@@ -228,6 +228,7 @@ static void do_stf_exit_barrier_fixups(enum stf_barrier_type types)
+ 
+ static bool stf_exit_reentrant = false;
+ static bool rfi_exit_reentrant = false;
++static DEFINE_MUTEX(exit_flush_lock);
+ 
+ static int __do_stf_barrier_fixups(void *data)
+ {
+@@ -253,6 +254,9 @@ void do_stf_barrier_fixups(enum stf_barrier_type types)
+ 	 * low level interrupt exit code before patching. After the patching,
+ 	 * if allowed, then flip the branch to allow fast exits.
+ 	 */
++
++	// Prevent static key update races with do_rfi_flush_fixups()
++	mutex_lock(&exit_flush_lock);
+ 	static_branch_enable(&interrupt_exit_not_reentrant);
+ 
+ 	stop_machine(__do_stf_barrier_fixups, &types, NULL);
+@@ -264,6 +268,8 @@ void do_stf_barrier_fixups(enum stf_barrier_type types)
+ 
+ 	if (stf_exit_reentrant && rfi_exit_reentrant)
+ 		static_branch_disable(&interrupt_exit_not_reentrant);
++
++	mutex_unlock(&exit_flush_lock);
+ }
+ 
+ void do_uaccess_flush_fixups(enum l1d_flush_type types)
+@@ -486,6 +492,9 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
+ 	 * without stop_machine, so this could be achieved with a broadcast
+ 	 * IPI instead, but this matches the stf sequence.
+ 	 */
++
++	// Prevent static key update races with do_stf_barrier_fixups()
++	mutex_lock(&exit_flush_lock);
+ 	static_branch_enable(&interrupt_exit_not_reentrant);
+ 
+ 	stop_machine(__do_rfi_flush_fixups, &types, NULL);
+@@ -497,6 +506,8 @@ void do_rfi_flush_fixups(enum l1d_flush_type types)
+ 
+ 	if (stf_exit_reentrant && rfi_exit_reentrant)
+ 		static_branch_disable(&interrupt_exit_not_reentrant);
++
++	mutex_unlock(&exit_flush_lock);
+ }
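Both fixup routines run the same enable/patch/conditionally-disable dance
on the shared interrupt_exit_not_reentrant static key, so without the new
mutex one path could disable the key while the other was still patching
with the slow path assumed active. The serialized shape, reduced to a
sketch with placeholder names:

	static DEFINE_MUTEX(patch_lock);

	static void do_one_fixup(void)
	{
		mutex_lock(&patch_lock);
		static_branch_enable(&not_reentrant);	/* force safe path */
		patch_exit_code();			/* stop_machine() */
		if (all_exits_reentrant())		/* both flags true? */
			static_branch_disable(&not_reentrant);
		mutex_unlock(&patch_lock);
	}
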
+ 
+ void do_barrier_nospec_fixups_range(bool enable, void *fixup_start, void *fixup_end)
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index ad198b4392224..0380efff535a1 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -20,8 +20,8 @@
+ #include <asm/machdep.h>
+ #include <asm/rtas.h>
+ #include <asm/kasan.h>
+-#include <asm/sparsemem.h>
+ #include <asm/svm.h>
++#include <asm/mmzone.h>
+ 
+ #include <mm/mmu_decl.h>
+ 
+diff --git a/arch/powerpc/mm/nohash/tlb_low_64e.S b/arch/powerpc/mm/nohash/tlb_low_64e.S
+index bf24451f3e71f..9235e720e3572 100644
+--- a/arch/powerpc/mm/nohash/tlb_low_64e.S
++++ b/arch/powerpc/mm/nohash/tlb_low_64e.S
+@@ -222,7 +222,7 @@ tlb_miss_kernel_bolted:
+ 
+ tlb_miss_fault_bolted:
+ 	/* We need to check if it was an instruction miss */
+-	andi.	r10,r11,_PAGE_EXEC|_PAGE_BAP_SX
++	andi.	r10,r11,_PAGE_BAP_UX|_PAGE_BAP_SX
+ 	bne	itlb_miss_fault_bolted
+ dtlb_miss_fault_bolted:
+ 	tlb_epilog_bolted
+@@ -239,7 +239,7 @@ itlb_miss_fault_bolted:
+ 	srdi	r15,r16,60		/* get region */
+ 	bne-	itlb_miss_fault_bolted
+ 
+-	li	r11,_PAGE_PRESENT|_PAGE_EXEC	/* Base perm */
++	li	r11,_PAGE_PRESENT|_PAGE_BAP_UX	/* Base perm */
+ 
+ 	/* We do the user/kernel test for the PID here along with the RW test
+ 	 */
+@@ -614,7 +614,7 @@ itlb_miss_fault_e6500:
+ 
+ 	/* We do the user/kernel test for the PID here along with the RW test
+ 	 */
+-	li	r11,_PAGE_PRESENT|_PAGE_EXEC	/* Base perm */
++	li	r11,_PAGE_PRESENT|_PAGE_BAP_UX	/* Base perm */
+ 	oris	r11,r11,_PAGE_ACCESSED@h
+ 
+ 	cmpldi	cr0,r15,0			/* Check for user region */
+@@ -734,7 +734,7 @@ normal_tlb_miss_done:
+ 
+ normal_tlb_miss_access_fault:
+ 	/* We need to check if it was an instruction miss */
+-	andi.	r10,r11,_PAGE_EXEC
++	andi.	r10,r11,_PAGE_BAP_UX
+ 	bne	1f
+ 	ld	r14,EX_TLB_DEAR(r12)
+ 	ld	r15,EX_TLB_ESR(r12)
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index dcf5ecca19d99..fde1ed445ca46 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -173,7 +173,7 @@ void mark_rodata_ro(void)
+ }
+ #endif
+ 
+-#ifdef CONFIG_DEBUG_PAGEALLOC
++#if defined(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) && defined(CONFIG_DEBUG_PAGEALLOC)
+ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ {
+ 	unsigned long addr = (unsigned long)page_address(page);
+diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
+index fcbf7a917c566..90ce75f0f1e2a 100644
+--- a/arch/powerpc/net/bpf_jit_comp.c
++++ b/arch/powerpc/net/bpf_jit_comp.c
+@@ -241,8 +241,8 @@ skip_codegen_passes:
+ 	fp->jited_len = alloclen;
+ 
+ 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
+-	bpf_jit_binary_lock_ro(bpf_hdr);
+ 	if (!fp->is_func || extra_pass) {
++		bpf_jit_binary_lock_ro(bpf_hdr);
+ 		bpf_prog_fill_jited_linfo(fp, addrs);
+ out_addrs:
+ 		kfree(addrs);
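Moving bpf_jit_binary_lock_ro() under the "!fp->is_func || extra_pass"
check matters because a BPF program with subprograms gets one more JIT
pass to fix up cross-program calls; locking the image earlier would make
that pass write to read-only pages. A simplified sketch of what the lock
helper does (approximated from the generic BPF header, not quoted):

	static void lock_ro_sketch(struct bpf_binary_header *hdr)
	{
		/* After this, any write to the JIT image faults. */
		set_memory_ro((unsigned long)hdr, hdr->pages);
		set_memory_x((unsigned long)hdr, hdr->pages);
	}
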
+diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
+index 93be7197d2502..564f14097f07b 100644
+--- a/arch/powerpc/perf/power10-events-list.h
++++ b/arch/powerpc/perf/power10-events-list.h
+@@ -9,10 +9,10 @@
+ /*
+  * Power10 event codes.
+  */
+-EVENT(PM_RUN_CYC,				0x600f4);
++EVENT(PM_CYC,				0x600f4);
+ EVENT(PM_DISP_STALL_CYC,			0x100f8);
+ EVENT(PM_EXEC_STALL,				0x30008);
+-EVENT(PM_RUN_INST_CMPL,				0x500fa);
++EVENT(PM_INST_CMPL,				0x500fa);
+ EVENT(PM_BR_CMPL,                               0x4d05e);
+ EVENT(PM_BR_MPRED_CMPL,                         0x400f6);
+ EVENT(PM_BR_FIN,				0x2f04a);
+@@ -50,8 +50,8 @@ EVENT(PM_DTLB_MISS,				0x300fc);
+ /* ITLB Reloaded */
+ EVENT(PM_ITLB_MISS,				0x400fc);
+ 
+-EVENT(PM_RUN_CYC_ALT,				0x0001e);
+-EVENT(PM_RUN_INST_CMPL_ALT,			0x00002);
++EVENT(PM_CYC_ALT,				0x0001e);
++EVENT(PM_INST_CMPL_ALT,				0x00002);
+ 
+ /*
+  * Memory Access Events
+diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
+index f9d64c63bb4a7..9dd75f3858372 100644
+--- a/arch/powerpc/perf/power10-pmu.c
++++ b/arch/powerpc/perf/power10-pmu.c
+@@ -91,8 +91,8 @@ extern u64 PERF_REG_EXTENDED_MASK;
+ 
+ /* Table of alternatives, sorted by column 0 */
+ static const unsigned int power10_event_alternatives[][MAX_ALT] = {
+-	{ PM_RUN_CYC_ALT,		PM_RUN_CYC },
+-	{ PM_RUN_INST_CMPL_ALT,		PM_RUN_INST_CMPL },
++	{ PM_CYC_ALT,			PM_CYC },
++	{ PM_INST_CMPL_ALT,		PM_INST_CMPL },
+ };
+ 
+ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
+@@ -118,8 +118,8 @@ static int power10_check_attr_config(struct perf_event *ev)
+ 	return 0;
+ }
+ 
+-GENERIC_EVENT_ATTR(cpu-cycles,			PM_RUN_CYC);
+-GENERIC_EVENT_ATTR(instructions,		PM_RUN_INST_CMPL);
++GENERIC_EVENT_ATTR(cpu-cycles,			PM_CYC);
++GENERIC_EVENT_ATTR(instructions,		PM_INST_CMPL);
+ GENERIC_EVENT_ATTR(branch-instructions,		PM_BR_CMPL);
+ GENERIC_EVENT_ATTR(branch-misses,		PM_BR_MPRED_CMPL);
+ GENERIC_EVENT_ATTR(cache-references,		PM_LD_REF_L1);
+@@ -148,8 +148,8 @@ CACHE_EVENT_ATTR(dTLB-load-misses,		PM_DTLB_MISS);
+ CACHE_EVENT_ATTR(iTLB-load-misses,		PM_ITLB_MISS);
+ 
+ static struct attribute *power10_events_attr_dd1[] = {
+-	GENERIC_EVENT_PTR(PM_RUN_CYC),
+-	GENERIC_EVENT_PTR(PM_RUN_INST_CMPL),
++	GENERIC_EVENT_PTR(PM_CYC),
++	GENERIC_EVENT_PTR(PM_INST_CMPL),
+ 	GENERIC_EVENT_PTR(PM_BR_CMPL),
+ 	GENERIC_EVENT_PTR(PM_BR_MPRED_CMPL),
+ 	GENERIC_EVENT_PTR(PM_LD_REF_L1),
+@@ -173,8 +173,8 @@ static struct attribute *power10_events_attr_dd1[] = {
+ };
+ 
+ static struct attribute *power10_events_attr[] = {
+-	GENERIC_EVENT_PTR(PM_RUN_CYC),
+-	GENERIC_EVENT_PTR(PM_RUN_INST_CMPL),
++	GENERIC_EVENT_PTR(PM_CYC),
++	GENERIC_EVENT_PTR(PM_INST_CMPL),
+ 	GENERIC_EVENT_PTR(PM_BR_FIN),
+ 	GENERIC_EVENT_PTR(PM_MPRED_BR_FIN),
+ 	GENERIC_EVENT_PTR(PM_LD_REF_L1),
+@@ -271,8 +271,8 @@ static const struct attribute_group *power10_pmu_attr_groups[] = {
+ };
+ 
+ static int power10_generic_events_dd1[] = {
+-	[PERF_COUNT_HW_CPU_CYCLES] =			PM_RUN_CYC,
+-	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_RUN_INST_CMPL,
++	[PERF_COUNT_HW_CPU_CYCLES] =			PM_CYC,
++	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_INST_CMPL,
+ 	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] =		PM_BR_CMPL,
+ 	[PERF_COUNT_HW_BRANCH_MISSES] =			PM_BR_MPRED_CMPL,
+ 	[PERF_COUNT_HW_CACHE_REFERENCES] =		PM_LD_REF_L1,
+@@ -280,8 +280,8 @@ static int power10_generic_events_dd1[] = {
+ };
+ 
+ static int power10_generic_events[] = {
+-	[PERF_COUNT_HW_CPU_CYCLES] =			PM_RUN_CYC,
+-	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_RUN_INST_CMPL,
++	[PERF_COUNT_HW_CPU_CYCLES] =			PM_CYC,
++	[PERF_COUNT_HW_INSTRUCTIONS] =			PM_INST_CMPL,
+ 	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] =		PM_BR_FIN,
+ 	[PERF_COUNT_HW_BRANCH_MISSES] =			PM_MPRED_BR_FIN,
+ 	[PERF_COUNT_HW_CACHE_REFERENCES] =		PM_LD_REF_L1,
+@@ -548,6 +548,24 @@ static u64 power10_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
+ 
+ #undef C
+ 
++/*
++ * Set the MMCR0[CC56RUN] bit to enable counting for
++ * PMC5 and PMC6 regardless of the state of CTRL[RUN],
++ * so that we can use counters 5 and 6 as PM_INST_CMPL and
++ * PM_CYC.
++ */
++static int power10_compute_mmcr(u64 event[], int n_ev,
++				unsigned int hwc[], struct mmcr_regs *mmcr,
++				struct perf_event *pevents[], u32 flags)
++{
++	int ret;
++
++	ret = isa207_compute_mmcr(event, n_ev, hwc, mmcr, pevents, flags);
++	if (!ret)
++		mmcr->mmcr0 |= MMCR0_C56RUN;
++	return ret;
++}
++
+ static struct power_pmu power10_pmu = {
+ 	.name			= "POWER10",
+ 	.n_counter		= MAX_PMU_COUNTERS,
+@@ -555,7 +573,7 @@ static struct power_pmu power10_pmu = {
+ 	.test_adder		= ISA207_TEST_ADDER,
+ 	.group_constraint_mask	= CNST_CACHE_PMC4_MASK,
+ 	.group_constraint_val	= CNST_CACHE_PMC4_VAL,
+-	.compute_mmcr		= isa207_compute_mmcr,
++	.compute_mmcr		= power10_compute_mmcr,
+ 	.config_bhrb		= power10_config_bhrb,
+ 	.bhrb_filter_map	= power10_bhrb_filter_map,
+ 	.get_constraint		= isa207_get_constraint,
+diff --git a/arch/powerpc/platforms/44x/fsp2.c b/arch/powerpc/platforms/44x/fsp2.c
+index b299e43f5ef94..823397c802def 100644
+--- a/arch/powerpc/platforms/44x/fsp2.c
++++ b/arch/powerpc/platforms/44x/fsp2.c
+@@ -208,6 +208,7 @@ static void node_irq_request(const char *compat, irq_handler_t errirq_handler)
+ 		if (irq == NO_IRQ) {
+ 			pr_err("device tree node %pOFn is missing an interrupt",
+ 			      np);
++			of_node_put(np);
+ 			return;
+ 		}
+ 
+@@ -215,6 +216,7 @@ static void node_irq_request(const char *compat, irq_handler_t errirq_handler)
+ 		if (rc) {
+ 			pr_err("fsp_of_probe: request_irq failed: np=%pOF rc=%d",
+ 			      np, rc);
++			of_node_put(np);
+ 			return;
+ 		}
+ 	}
+diff --git a/arch/powerpc/platforms/85xx/Makefile b/arch/powerpc/platforms/85xx/Makefile
+index d1dd0dca5ebf0..bd750edeb105b 100644
+--- a/arch/powerpc/platforms/85xx/Makefile
++++ b/arch/powerpc/platforms/85xx/Makefile
+@@ -3,7 +3,9 @@
+ # Makefile for the PowerPC 85xx linux kernel.
+ #
+ obj-$(CONFIG_SMP) += smp.o
+-obj-$(CONFIG_FSL_PMC)		  += mpc85xx_pm_ops.o
++ifneq ($(CONFIG_FSL_CORENET_RCPM),y)
++obj-$(CONFIG_SMP) += mpc85xx_pm_ops.o
++endif
+ 
+ obj-y += common.o
+ 
+diff --git a/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c b/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
+index 7c0133f558d02..4a8af80011a6f 100644
+--- a/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
++++ b/arch/powerpc/platforms/85xx/mpc85xx_pm_ops.c
+@@ -17,6 +17,7 @@
+ 
+ static struct ccsr_guts __iomem *guts;
+ 
++#ifdef CONFIG_FSL_PMC
+ static void mpc85xx_irq_mask(int cpu)
+ {
+ 
+@@ -49,6 +50,7 @@ static void mpc85xx_cpu_up_prepare(int cpu)
+ {
+ 
+ }
++#endif
+ 
+ static void mpc85xx_freeze_time_base(bool freeze)
+ {
+@@ -76,10 +78,12 @@ static const struct of_device_id mpc85xx_smp_guts_ids[] = {
+ 
+ static const struct fsl_pm_ops mpc85xx_pm_ops = {
+ 	.freeze_time_base = mpc85xx_freeze_time_base,
++#ifdef CONFIG_FSL_PMC
+ 	.irq_mask = mpc85xx_irq_mask,
+ 	.irq_unmask = mpc85xx_irq_unmask,
+ 	.cpu_die = mpc85xx_cpu_die,
+ 	.cpu_up_prepare = mpc85xx_cpu_up_prepare,
++#endif
+ };
+ 
+ int __init mpc85xx_setup_pmc(void)
+@@ -94,9 +98,8 @@ int __init mpc85xx_setup_pmc(void)
+ 			pr_err("Could not map guts node address\n");
+ 			return -ENOMEM;
+ 		}
++		qoriq_pm_ops = &mpc85xx_pm_ops;
+ 	}
+ 
+-	qoriq_pm_ops = &mpc85xx_pm_ops;
+-
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
+index c6df294054fe9..83f4a6389a282 100644
+--- a/arch/powerpc/platforms/85xx/smp.c
++++ b/arch/powerpc/platforms/85xx/smp.c
+@@ -40,7 +40,6 @@ struct epapr_spin_table {
+ 	u32	pir;
+ };
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ static u64 timebase;
+ static int tb_req;
+ static int tb_valid;
+@@ -112,6 +111,7 @@ static void mpc85xx_take_timebase(void)
+ 	local_irq_restore(flags);
+ }
+ 
++#ifdef CONFIG_HOTPLUG_CPU
+ static void smp_85xx_cpu_offline_self(void)
+ {
+ 	unsigned int cpu = smp_processor_id();
+@@ -495,21 +495,21 @@ void __init mpc85xx_smp_init(void)
+ 		smp_85xx_ops.probe = NULL;
+ 	}
+ 
+-#ifdef CONFIG_HOTPLUG_CPU
+ #ifdef CONFIG_FSL_CORENET_RCPM
++	/* Assign a value to qoriq_pm_ops on PPC_E500MC */
+ 	fsl_rcpm_init();
+-#endif
+-
+-#ifdef CONFIG_FSL_PMC
++#else
++	/* Assign a value to qoriq_pm_ops on !PPC_E500MC */
+ 	mpc85xx_setup_pmc();
+ #endif
+ 	if (qoriq_pm_ops) {
+ 		smp_85xx_ops.give_timebase = mpc85xx_give_timebase;
+ 		smp_85xx_ops.take_timebase = mpc85xx_take_timebase;
++#ifdef CONFIG_HOTPLUG_CPU
+ 		smp_85xx_ops.cpu_offline_self = smp_85xx_cpu_offline_self;
+ 		smp_85xx_ops.cpu_die = qoriq_cpu_kill;
+-	}
+ #endif
++	}
+ 	smp_ops = &smp_85xx_ops;
+ 
+ #ifdef CONFIG_KEXEC_CORE
+diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
+index 30172e52e16b7..4d82c92ddd523 100644
+--- a/arch/powerpc/platforms/book3s/vas-api.c
++++ b/arch/powerpc/platforms/book3s/vas-api.c
+@@ -303,7 +303,7 @@ static int coproc_ioc_tx_win_open(struct file *fp, unsigned long arg)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!cp_inst->coproc->vops && !cp_inst->coproc->vops->open_win) {
++	if (!cp_inst->coproc->vops || !cp_inst->coproc->vops->open_win) {
+ 		pr_err("VAS API is not registered\n");
+ 		return -EACCES;
+ 	}
+@@ -373,7 +373,7 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!cp_inst->coproc->vops && !cp_inst->coproc->vops->paste_addr) {
++	if (!cp_inst->coproc->vops || !cp_inst->coproc->vops->paste_addr) {
+ 		pr_err("%s(): VAS API is not registered\n", __func__);
+ 		return -EACCES;
+ 	}
+diff --git a/arch/powerpc/platforms/powernv/opal-prd.c b/arch/powerpc/platforms/powernv/opal-prd.c
+index a191f4c60ce71..113bdb151f687 100644
+--- a/arch/powerpc/platforms/powernv/opal-prd.c
++++ b/arch/powerpc/platforms/powernv/opal-prd.c
+@@ -369,6 +369,12 @@ static struct notifier_block opal_prd_event_nb = {
+ 	.priority	= 0,
+ };
+ 
++static struct notifier_block opal_prd_event_nb2 = {
++	.notifier_call	= opal_prd_msg_notifier,
++	.next		= NULL,
++	.priority	= 0,
++};
++
+ static int opal_prd_probe(struct platform_device *pdev)
+ {
+ 	int rc;
+@@ -390,9 +396,10 @@ static int opal_prd_probe(struct platform_device *pdev)
+ 		return rc;
+ 	}
+ 
+-	rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb);
++	rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+ 	if (rc) {
+ 		pr_err("Couldn't register PRD2 event notifier\n");
++		opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
+ 		return rc;
+ 	}
+ 
+@@ -401,6 +408,8 @@ static int opal_prd_probe(struct platform_device *pdev)
+ 		pr_err("failed to register miscdev\n");
+ 		opal_message_notifier_unregister(OPAL_MSG_PRD,
+ 				&opal_prd_event_nb);
++		opal_message_notifier_unregister(OPAL_MSG_PRD2,
++				&opal_prd_event_nb2);
+ 		return rc;
+ 	}
+ 
+@@ -411,6 +420,7 @@ static int opal_prd_remove(struct platform_device *pdev)
+ {
+ 	misc_deregister(&opal_prd_dev);
+ 	opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
++	opal_message_notifier_unregister(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+ 	return 0;
+ }
+ 
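A second notifier_block is required above because the chain linkage is
intrusive: the struct itself carries the "next" pointer, so one instance
can sit on only one notifier chain at a time, and registering it twice
would corrupt both chains. Simplified shape of the core type (paraphrased
from include/linux/notifier.h):

	struct notifier_block {
		int (*notifier_call)(struct notifier_block *nb,
				     unsigned long action, void *data);
		struct notifier_block __rcu *next;  /* intrusive linkage */
		int priority;
	};
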
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index e83e0891272d3..210a37a065fb7 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -63,6 +63,27 @@ static int mobility_rtas_call(int token, char *buf, s32 scope)
+ 
+ static int delete_dt_node(struct device_node *dn)
+ {
++	struct device_node *pdn;
++	bool is_platfac;
++
++	pdn = of_get_parent(dn);
++	is_platfac = of_node_is_type(dn, "ibm,platform-facilities") ||
++		     of_node_is_type(pdn, "ibm,platform-facilities");
++	of_node_put(pdn);
++
++	/*
++	 * The drivers that bind to nodes in the platform-facilities
++	 * hierarchy don't support node removal, and the removal directive
++	 * from firmware is always followed by an add of an equivalent
++	 * node. The capability (e.g. RNG, encryption, compression)
++	 * represented by the node is never interrupted by the migration.
++	 * So ignore changes to this part of the tree.
++	 */
++	if (is_platfac) {
++		pr_notice("ignoring remove operation for %pOFfp\n", dn);
++		return 0;
++	}
++
+ 	pr_debug("removing node %pOFfp\n", dn);
+ 	dlpar_detach_node(dn);
+ 	return 0;
+@@ -222,6 +243,19 @@ static int add_dt_node(struct device_node *parent_dn, __be32 drc_index)
+ 	if (!dn)
+ 		return -ENOENT;
+ 
++	/*
++	 * Since delete_dt_node() ignores this node type, this is the
++	 * necessary counterpart. We also know that a platform-facilities
++	 * node returned from dlpar_configure_connector() has children
++	 * attached, and dlpar_attach_node() only adds the parent, leaking
++	 * the children. So ignore these on the add side for now.
++	 */
++	if (of_node_is_type(dn, "ibm,platform-facilities")) {
++		pr_notice("ignoring add operation for %pOF\n", dn);
++		dlpar_free_cc_nodes(dn);
++		return 0;
++	}
++
+ 	rc = dlpar_attach_node(dn, parent_dn);
+ 	if (rc)
+ 		dlpar_free_cc_nodes(dn);
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index da4d7f225a409..47518116e9533 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -3274,8 +3274,7 @@ static void show_task(struct task_struct *volatile tsk)
+ 	 * appropriate for calling from xmon. This could be moved
+ 	 * to a common, generic, routine used by both.
+ 	 */
+-	state = (p_state == 0) ? 'R' :
+-		(p_state < 0) ? 'U' :
++	state = (p_state == TASK_RUNNING) ? 'R' :
+ 		(p_state & TASK_UNINTERRUPTIBLE) ? 'D' :
+ 		(p_state & TASK_STOPPED) ? 'T' :
+ 		(p_state & TASK_TRACED) ? 'C' :
+diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
+index 021ed64ee608f..086821b44def1 100644
+--- a/arch/riscv/include/asm/processor.h
++++ b/arch/riscv/include/asm/processor.h
+@@ -58,7 +58,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ 
+ static inline void wait_for_interrupt(void)
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 315db3d0229bf..0fcdc0233faca 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -128,16 +128,14 @@ static bool save_wchan(void *arg, unsigned long pc)
+ 	return true;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *task)
++unsigned long __get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ 
+-	if (likely(task && task != current && !task_is_running(task))) {
+-		if (!try_get_task_stack(task))
+-			return 0;
+-		walk_stackframe(task, NULL, save_wchan, &pc);
+-		put_task_stack(task);
+-	}
++	if (!try_get_task_stack(task))
++		return 0;
++	walk_stackframe(task, NULL, save_wchan, &pc);
++	put_task_stack(task);
+ 	return pc;
+ }
+ 
+diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
+index 5d247198c30d3..753d85bdfad07 100644
+--- a/arch/riscv/net/bpf_jit_core.c
++++ b/arch/riscv/net/bpf_jit_core.c
+@@ -167,6 +167,11 @@ out:
+ 	return prog;
+ }
+ 
++u64 bpf_jit_alloc_exec_limit(void)
++{
++	return BPF_JIT_REGION_SIZE;
++}
++
+ void *bpf_jit_alloc_exec(unsigned long size)
+ {
+ 	return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 879b8e3f609cd..f54c152bf2bf9 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -192,7 +192,7 @@ static inline void release_thread(struct task_struct *tsk) { }
+ void guarded_storage_release(struct task_struct *tsk);
+ void gs_load_bc_cb(struct pt_regs *regs);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ #define task_pt_regs(tsk) ((struct pt_regs *) \
+         (task_stack_page(tsk) + THREAD_SIZE) - 1)
+ #define KSTK_EIP(tsk)	(task_pt_regs(tsk)->psw.addr)
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index d7dc36ec0a60e..b9eb4c4192f92 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -679,8 +679,10 @@ static void cpumf_pmu_stop(struct perf_event *event, int flags)
+ 						      false);
+ 			if (cfdiag_diffctr(cpuhw, event->hw.config_base))
+ 				cfdiag_push_sample(event, cpuhw);
+-		} else
++		} else if (cpuhw->flags & PMU_F_RESERVED) {
++			/* Only update when PMU not hotplugged off */
+ 			hw_perf_event_update(event);
++		}
+ 		hwc->state |= PERF_HES_UPTODATE;
+ 	}
+ }
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 350e94d0cac23..e5dd46b1bff8c 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -181,12 +181,12 @@ void execve_tail(void)
+ 	asm volatile("sfpc %0" : : "d" (0));
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	struct unwind_state state;
+ 	unsigned long ip = 0;
+ 
+-	if (!p || p == current || task_is_running(p) || !task_stack_page(p))
++	if (!task_stack_page(p))
+ 		return 0;
+ 
+ 	if (!try_get_task_stack(p))
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index aeb0a15bcbb71..193205fb27774 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -227,7 +227,7 @@ again:
+ 	uaddr = __gmap_translate(gmap, gaddr);
+ 	if (IS_ERR_VALUE(uaddr))
+ 		goto out;
+-	vma = find_vma(gmap->mm, uaddr);
++	vma = vma_lookup(gmap->mm, uaddr);
+ 	if (!vma)
+ 		goto out;
+ 	/*
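The find_vma() -> vma_lookup() switches in this patch close a subtle gap:
find_vma() returns the first VMA that *ends* above the address, which may
also be a VMA that starts above it, i.e. the address can fall in a hole.
vma_lookup() only succeeds when the address is actually inside the VMA. A
sketch of the helper's semantics (body approximated):

	static inline struct vm_area_struct *
	vma_lookup_sketch(struct mm_struct *mm, unsigned long addr)
	{
		struct vm_area_struct *vma = find_vma(mm, addr);

		/* Reject a VMA that begins above addr (a gap hit). */
		if (vma && addr < vma->vm_start)
			return NULL;

		return vma;
	}
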
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 9928f785c6773..12dcf97571082 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -397,6 +397,8 @@ static int handle_sske(struct kvm_vcpu *vcpu)
+ 		mmap_read_unlock(current->mm);
+ 		if (rc == -EFAULT)
+ 			return kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
++		if (rc == -EAGAIN)
++			continue;
+ 		if (rc < 0)
+ 			return rc;
+ 		start += PAGE_SIZE;
+diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
+index c8841f476e913..00d272d134c24 100644
+--- a/arch/s390/kvm/pv.c
++++ b/arch/s390/kvm/pv.c
+@@ -16,18 +16,17 @@
+ 
+ int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc)
+ {
+-	int cc = 0;
++	int cc;
+ 
+-	if (kvm_s390_pv_cpu_get_handle(vcpu)) {
+-		cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu),
+-				   UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
++	if (!kvm_s390_pv_cpu_get_handle(vcpu))
++		return 0;
++
++	cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu), UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
++
++	KVM_UV_EVENT(vcpu->kvm, 3, "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
++		     vcpu->vcpu_id, *rc, *rrc);
++	WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x", *rc, *rrc);
+ 
+-		KVM_UV_EVENT(vcpu->kvm, 3,
+-			     "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
+-			     vcpu->vcpu_id, *rc, *rrc);
+-		WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x",
+-			  *rc, *rrc);
+-	}
+ 	/* Intended memory leak for something that should never happen. */
+ 	if (!cc)
+ 		free_pages(vcpu->arch.pv.stor_base,
+@@ -196,7 +195,7 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
+ 	uvcb.conf_base_stor_origin = (u64)kvm->arch.pv.stor_base;
+ 	uvcb.conf_virt_stor_origin = (u64)kvm->arch.pv.stor_var;
+ 
+-	cc = uv_call(0, (u64)&uvcb);
++	cc = uv_call_sched(0, (u64)&uvcb);
+ 	*rc = uvcb.header.rc;
+ 	*rrc = uvcb.header.rrc;
+ 	KVM_UV_EVENT(kvm, 3, "PROTVIRT CREATE VM: handle %llx len %llx rc %x rrc %x",
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 9bb2c7512cd54..9023bf3ced89b 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -673,6 +673,7 @@ EXPORT_SYMBOL_GPL(gmap_fault);
+  */
+ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
+ {
++	struct vm_area_struct *vma;
+ 	unsigned long vmaddr;
+ 	spinlock_t *ptl;
+ 	pte_t *ptep;
+@@ -682,11 +683,17 @@ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
+ 						   gaddr >> PMD_SHIFT);
+ 	if (vmaddr) {
+ 		vmaddr |= gaddr & ~PMD_MASK;
++
++		vma = vma_lookup(gmap->mm, vmaddr);
++		if (!vma || is_vm_hugetlb_page(vma))
++			return;
++
+ 		/* Get pointer to the page table entry */
+ 		ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
+-		if (likely(ptep))
++		if (likely(ptep)) {
+ 			ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
+-		pte_unmap_unlock(ptep, ptl);
++			pte_unmap_unlock(ptep, ptl);
++		}
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(__gmap_zap);
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index eec3a9d7176e3..5fb409ff78422 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -429,22 +429,36 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
+ }
+ 
+ #ifdef CONFIG_PGSTE
+-static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr)
++static int pmd_lookup(struct mm_struct *mm, unsigned long addr, pmd_t **pmdp)
+ {
++	struct vm_area_struct *vma;
+ 	pgd_t *pgd;
+ 	p4d_t *p4d;
+ 	pud_t *pud;
+-	pmd_t *pmd;
++
++	/* We need a valid VMA, otherwise this is clearly a fault. */
++	vma = vma_lookup(mm, addr);
++	if (!vma)
++		return -EFAULT;
+ 
+ 	pgd = pgd_offset(mm, addr);
+-	p4d = p4d_alloc(mm, pgd, addr);
+-	if (!p4d)
+-		return NULL;
+-	pud = pud_alloc(mm, p4d, addr);
+-	if (!pud)
+-		return NULL;
+-	pmd = pmd_alloc(mm, pud, addr);
+-	return pmd;
++	if (!pgd_present(*pgd))
++		return -ENOENT;
++
++	p4d = p4d_offset(pgd, addr);
++	if (!p4d_present(*p4d))
++		return -ENOENT;
++
++	pud = pud_offset(p4d, addr);
++	if (!pud_present(*pud))
++		return -ENOENT;
++
++	/* Large PUDs are not supported yet. */
++	if (pud_large(*pud))
++		return -EFAULT;
++
++	*pmdp = pmd_offset(pud, addr);
++	return 0;
+ }
+ #endif
+ 
+@@ -778,8 +792,7 @@ int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
+ 	pmd_t *pmdp;
+ 	pte_t *ptep;
+ 
+-	pmdp = pmd_alloc_map(mm, addr);
+-	if (unlikely(!pmdp))
++	if (pmd_lookup(mm, addr, &pmdp))
+ 		return -EFAULT;
+ 
+ 	ptl = pmd_lock(mm, pmdp);
+@@ -881,8 +894,7 @@ int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr)
+ 	pte_t *ptep;
+ 	int cc = 0;
+ 
+-	pmdp = pmd_alloc_map(mm, addr);
+-	if (unlikely(!pmdp))
++	if (pmd_lookup(mm, addr, &pmdp))
+ 		return -EFAULT;
+ 
+ 	ptl = pmd_lock(mm, pmdp);
+@@ -935,15 +947,24 @@ int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
+ 	pmd_t *pmdp;
+ 	pte_t *ptep;
+ 
+-	pmdp = pmd_alloc_map(mm, addr);
+-	if (unlikely(!pmdp))
++	/*
++	 * If we don't have a PTE table and if there is no huge page mapped,
++	 * the storage key is 0.
++	 */
++	*key = 0;
++
++	switch (pmd_lookup(mm, addr, &pmdp)) {
++	case -ENOENT:
++		return 0;
++	case 0:
++		break;
++	default:
+ 		return -EFAULT;
++	}
+ 
+ 	ptl = pmd_lock(mm, pmdp);
+ 	if (!pmd_present(*pmdp)) {
+-		/* Not yet mapped memory has a zero key */
+ 		spin_unlock(ptl);
+-		*key = 0;
+ 		return 0;
+ 	}
+ 
+@@ -988,6 +1009,7 @@ EXPORT_SYMBOL(get_guest_storage_key);
+ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
+ 			unsigned long *oldpte, unsigned long *oldpgste)
+ {
++	struct vm_area_struct *vma;
+ 	unsigned long pgstev;
+ 	spinlock_t *ptl;
+ 	pgste_t pgste;
+@@ -997,6 +1019,10 @@ int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
+ 	WARN_ON_ONCE(orc > ESSA_MAX);
+ 	if (unlikely(orc > ESSA_MAX))
+ 		return -EINVAL;
++
++	vma = vma_lookup(mm, hva);
++	if (!vma || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
+@@ -1089,10 +1115,14 @@ EXPORT_SYMBOL(pgste_perform_essa);
+ int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
+ 			unsigned long bits, unsigned long value)
+ {
++	struct vm_area_struct *vma;
+ 	spinlock_t *ptl;
+ 	pgste_t new;
+ 	pte_t *ptep;
+ 
++	vma = vma_lookup(mm, hva);
++	if (!vma || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
+@@ -1117,9 +1147,13 @@ EXPORT_SYMBOL(set_pgste_bits);
+  */
+ int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
+ {
++	struct vm_area_struct *vma;
+ 	spinlock_t *ptl;
+ 	pte_t *ptep;
+ 
++	vma = vma_lookup(mm, hva);
++	if (!vma || is_vm_hugetlb_page(vma))
++		return -EFAULT;
+ 	ptep = get_locked_pte(mm, hva, &ptl);
+ 	if (unlikely(!ptep))
+ 		return -EFAULT;
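The tri-state return of the new pmd_lookup() above is what lets
get_guest_storage_key() distinguish "nothing mapped, key is 0" from a
real fault. A condensed usage sketch mirroring the hunk:

	switch (pmd_lookup(mm, addr, &pmdp)) {
	case 0:
		break;		/* walk succeeded, *pmdp is valid */
	case -ENOENT:
		*key = 0;	/* unmapped memory has storage key 0 */
		return 0;
	default:
		return -EFAULT;	/* no VMA, or unsupported large PUD */
	}
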
+diff --git a/arch/sh/include/asm/processor_32.h b/arch/sh/include/asm/processor_32.h
+index aa92cc933889d..45240ec6b85a4 100644
+--- a/arch/sh/include/asm/processor_32.h
++++ b/arch/sh/include/asm/processor_32.h
+@@ -180,7 +180,7 @@ static inline void show_code(struct pt_regs *regs)
+ }
+ #endif
+ 
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->regs[15])
+diff --git a/arch/sh/kernel/cpu/fpu.c b/arch/sh/kernel/cpu/fpu.c
+index ae354a2931e7e..fd6db0ab19288 100644
+--- a/arch/sh/kernel/cpu/fpu.c
++++ b/arch/sh/kernel/cpu/fpu.c
+@@ -62,18 +62,20 @@ void fpu_state_restore(struct pt_regs *regs)
+ 	}
+ 
+ 	if (!tsk_used_math(tsk)) {
+-		local_irq_enable();
++		int ret;
+ 		/*
+ 		 * does a slab alloc which can sleep
+ 		 */
+-		if (init_fpu(tsk)) {
++		local_irq_enable();
++		ret = init_fpu(tsk);
++		local_irq_disable();
++		if (ret) {
+ 			/*
+ 			 * ran out of memory!
+ 			 */
+-			do_group_exit(SIGKILL);
++			force_sig(SIGKILL);
+ 			return;
+ 		}
+-		local_irq_disable();
+ 	}
+ 
+ 	grab_fpu(regs);
+diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c
+index 717de05c81f49..1c28e3cddb60d 100644
+--- a/arch/sh/kernel/process_32.c
++++ b/arch/sh/kernel/process_32.c
+@@ -182,13 +182,10 @@ __switch_to(struct task_struct *prev, struct task_struct *next)
+ 	return prev;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long pc;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	/*
+ 	 * The same comment as on the Alpha applies here, too ...
+ 	 */
+diff --git a/arch/sparc/include/asm/processor_32.h b/arch/sparc/include/asm/processor_32.h
+index b6242f7771e9e..647bf0ac7beb9 100644
+--- a/arch/sparc/include/asm/processor_32.h
++++ b/arch/sparc/include/asm/processor_32.h
+@@ -89,7 +89,7 @@ static inline void start_thread(struct pt_regs * regs, unsigned long pc,
+ /* Free all resources held by a thread. */
+ #define release_thread(tsk)		do { } while(0)
+ 
+-unsigned long get_wchan(struct task_struct *);
++unsigned long __get_wchan(struct task_struct *);
+ 
+ #define task_pt_regs(tsk) ((tsk)->thread.kregs)
+ #define KSTK_EIP(tsk)  ((tsk)->thread.kregs->pc)
+diff --git a/arch/sparc/include/asm/processor_64.h b/arch/sparc/include/asm/processor_64.h
+index 5cf145f18f36b..ae851e8fce4c9 100644
+--- a/arch/sparc/include/asm/processor_64.h
++++ b/arch/sparc/include/asm/processor_64.h
+@@ -183,7 +183,7 @@ do { \
+ /* Free all resources held by a thread. */
+ #define release_thread(tsk)		do { } while (0)
+ 
+-unsigned long get_wchan(struct task_struct *task);
++unsigned long __get_wchan(struct task_struct *task);
+ 
+ #define task_pt_regs(tsk) (task_thread_info(tsk)->kregs)
+ #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->tpc)
+diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
+index 93983d6d431de..29a2f396f8601 100644
+--- a/arch/sparc/kernel/process_32.c
++++ b/arch/sparc/kernel/process_32.c
+@@ -368,7 +368,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *task)
++unsigned long __get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc, fp, bias = 0;
+ 	unsigned long task_base = (unsigned long) task;
+@@ -376,9 +376,6 @@ unsigned long get_wchan(struct task_struct *task)
+ 	struct reg_window32 *rw;
+ 	int count = 0;
+ 
+-	if (!task || task == current || task_is_running(task))
+-		goto out;
+-
+ 	fp = task_thread_info(task)->ksp + bias;
+ 	do {
+ 		/* Bogus frame pointer? */
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index d33c58a58d4ff..fa8db86e561c7 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -666,7 +666,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ 	return 0;
+ }
+ 
+-unsigned long get_wchan(struct task_struct *task)
++unsigned long __get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc, fp, bias = 0;
+ 	struct thread_info *tp;
+@@ -674,9 +674,6 @@ unsigned long get_wchan(struct task_struct *task)
+         unsigned long ret = 0;
+ 	int count = 0; 
+ 
+-	if (!task || task == current || task_is_running(task))
+-		goto out;
+-
+ 	tp = task_thread_info(task);
+ 	bias = STACK_BIAS;
+ 	fp = task_thread_info(task)->ksp + bias;
+diff --git a/arch/um/include/asm/processor-generic.h b/arch/um/include/asm/processor-generic.h
+index b5cf0ed116d9e..579692a40a556 100644
+--- a/arch/um/include/asm/processor-generic.h
++++ b/arch/um/include/asm/processor-generic.h
+@@ -106,6 +106,6 @@ extern struct cpuinfo_um boot_cpu_data;
+ #define cache_line_size()	(boot_cpu_data.cache_alignment)
+ 
+ #define KSTK_REG(tsk, reg) get_thread_reg(reg, &tsk->thread.switch_buf)
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 457a38db368b7..82107373ac7e9 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -364,14 +364,11 @@ unsigned long arch_align_stack(unsigned long sp)
+ }
+ #endif
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long stack_page, sp, ip;
+ 	bool seen_sched = 0;
+ 
+-	if ((p == NULL) || (p == current) || task_is_running(p))
+-		return 0;
+-
+ 	stack_page = (unsigned long) task_stack_page(p);
+ 	/* Bail if the process has no kernel stack for some reason */
+ 	if (stack_page == 0)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 551eaab376f31..8f791b047b862 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1513,6 +1513,7 @@ config AMD_MEM_ENCRYPT
+ 	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+ 	select INSTRUCTION_DECODER
+ 	select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
++	select ARCH_HAS_CC_PLATFORM
+ 	help
+ 	  Say yes to enable support for the encryption of system memory.
+ 	  This requires an AMD processor that supports Secure Memory
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 0fc961bef299c..e09f4672dd382 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -866,7 +866,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		req = &subreq;
+ 
+ 		err = skcipher_walk_virt(&walk, req, false);
+-		if (err)
++		if (!walk.nbytes)
+ 			return err;
+ 	} else {
+ 		tail = 0;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 482224444a1ee..41da78eda95ea 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -243,7 +243,8 @@ static struct extra_reg intel_skl_extra_regs[] __read_mostly = {
+ 
+ static struct event_constraint intel_icl_event_constraints[] = {
+ 	FIXED_EVENT_CONSTRAINT(0x00c0, 0),	/* INST_RETIRED.ANY */
+-	FIXED_EVENT_CONSTRAINT(0x01c0, 0),	/* INST_RETIRED.PREC_DIST */
++	FIXED_EVENT_CONSTRAINT(0x01c0, 0),	/* old INST_RETIRED.PREC_DIST */
++	FIXED_EVENT_CONSTRAINT(0x0100, 0),	/* INST_RETIRED.PREC_DIST */
+ 	FIXED_EVENT_CONSTRAINT(0x003c, 1),	/* CPU_CLK_UNHALTED.CORE */
+ 	FIXED_EVENT_CONSTRAINT(0x0300, 2),	/* CPU_CLK_UNHALTED.REF */
+ 	FIXED_EVENT_CONSTRAINT(0x0400, 3),	/* SLOTS */
+@@ -288,7 +289,7 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
+ 
+ static struct event_constraint intel_spr_event_constraints[] = {
+ 	FIXED_EVENT_CONSTRAINT(0x00c0, 0),	/* INST_RETIRED.ANY */
+-	FIXED_EVENT_CONSTRAINT(0x01c0, 0),	/* INST_RETIRED.PREC_DIST */
++	FIXED_EVENT_CONSTRAINT(0x0100, 0),	/* INST_RETIRED.PREC_DIST */
+ 	FIXED_EVENT_CONSTRAINT(0x003c, 1),	/* CPU_CLK_UNHALTED.CORE */
+ 	FIXED_EVENT_CONSTRAINT(0x0300, 2),	/* CPU_CLK_UNHALTED.REF */
+ 	FIXED_EVENT_CONSTRAINT(0x0400, 3),	/* SLOTS */
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 8647713276a73..4dbb55a43dad2 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -923,7 +923,8 @@ struct event_constraint intel_skl_pebs_event_constraints[] = {
+ };
+ 
+ struct event_constraint intel_icl_pebs_event_constraints[] = {
+-	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1c0, 0x100000000ULL),	/* INST_RETIRED.PREC_DIST */
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x100000000ULL),	/* old INST_RETIRED.PREC_DIST */
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x0100, 0x100000000ULL),	/* INST_RETIRED.PREC_DIST */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),	/* SLOTS */
+ 
+ 	INTEL_PLD_CONSTRAINT(0x1cd, 0xff),			/* MEM_TRANS_RETIRED.LOAD_LATENCY */
+@@ -943,7 +944,7 @@ struct event_constraint intel_icl_pebs_event_constraints[] = {
+ };
+ 
+ struct event_constraint intel_spr_pebs_event_constraints[] = {
+-	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1c0, 0x100000000ULL),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x100, 0x100000000ULL),	/* INST_RETIRED.PREC_DIST */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),
+ 
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xc0, 0xfe),
+diff --git a/arch/x86/events/intel/uncore_discovery.h b/arch/x86/events/intel/uncore_discovery.h
+index 1d652939a01c9..abfb1e8d8598d 100644
+--- a/arch/x86/events/intel/uncore_discovery.h
++++ b/arch/x86/events/intel/uncore_discovery.h
+@@ -30,7 +30,7 @@
+ 
+ 
+ #define uncore_discovery_invalid_unit(unit)			\
+-	(!unit.table1 || !unit.ctl || !unit.table3 ||	\
++	(!unit.table1 || !unit.ctl || \
+ 	 unit.table1 == -1ULL || unit.ctl == -1ULL ||	\
+ 	 unit.table3 == -1ULL)
+ 
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 482a9931d1e65..0eca031519a30 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -451,7 +451,7 @@
+ #define ICX_M3UPI_PCI_PMON_BOX_CTL		0xa0
+ 
+ /* ICX IMC */
+-#define ICX_NUMBER_IMC_CHN			2
++#define ICX_NUMBER_IMC_CHN			3
+ #define ICX_IMC_MEM_STRIDE			0x4
+ 
+ DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
+@@ -5051,8 +5051,10 @@ static struct event_constraint icx_uncore_iio_constraints[] = {
+ 	UNCORE_EVENT_CONSTRAINT(0x02, 0x3),
+ 	UNCORE_EVENT_CONSTRAINT(0x03, 0x3),
+ 	UNCORE_EVENT_CONSTRAINT(0x83, 0x3),
++	UNCORE_EVENT_CONSTRAINT(0x88, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
+ 	UNCORE_EVENT_CONSTRAINT(0xc5, 0xc),
++	UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -5437,7 +5439,7 @@ static struct intel_uncore_ops icx_uncore_mmio_ops = {
+ static struct intel_uncore_type icx_uncore_imc = {
+ 	.name		= "imc",
+ 	.num_counters   = 4,
+-	.num_boxes	= 8,
++	.num_boxes	= 12,
+ 	.perf_ctr_bits	= 48,
+ 	.fixed_ctr_bits	= 48,
+ 	.fixed_ctr	= SNR_IMC_MMIO_PMON_FIXED_CTR,
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 6952e219cba36..d7e1eac3802f4 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -160,7 +160,6 @@ void set_hv_tscchange_cb(void (*cb)(void))
+ 	struct hv_reenlightenment_control re_ctrl = {
+ 		.vector = HYPERV_REENLIGHTENMENT_VECTOR,
+ 		.enabled = 1,
+-		.target_vp = hv_vp_index[smp_processor_id()]
+ 	};
+ 	struct hv_tsc_emulation_control emu_ctrl = {.enabled = 1};
+ 
+@@ -174,8 +173,12 @@ void set_hv_tscchange_cb(void (*cb)(void))
+ 	/* Make sure callback is registered before we write to MSRs */
+ 	wmb();
+ 
++	re_ctrl.target_vp = hv_vp_index[get_cpu()];
++
+ 	wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
+ 	wrmsrl(HV_X64_MSR_TSC_EMULATION_CONTROL, *((u64 *)&emu_ctrl));
++
++	put_cpu();
+ }
+ EXPORT_SYMBOL_GPL(set_hv_tscchange_cb);
+ 
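The get_cpu()/put_cpu() pair above is the general fix for per-CPU indexed
reads in preemptible context: without it, the task can migrate between
reading hv_vp_index[smp_processor_id()] and using the value. The pattern
in isolation (a sketch; the table name is from the hunk, the consumer is
a hypothetical placeholder):

	unsigned int cpu = get_cpu();	/* disables preemption */
	u32 vp = hv_vp_index[cpu];	/* index is now stable */

	consume_vp_index(vp);		/* hypothetical consumer */
	put_cpu();			/* re-enables preemption */
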
+diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
+index 3d52b094850a9..dd5ea1bdf04c5 100644
+--- a/arch/x86/include/asm/cpu_entry_area.h
++++ b/arch/x86/include/asm/cpu_entry_area.h
+@@ -10,6 +10,12 @@
+ 
+ #ifdef CONFIG_X86_64
+ 
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
++#else
++#define VC_EXCEPTION_STKSZ	0
++#endif
++
+ /* Macro to enforce the same ordering and stack sizes */
+ #define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
+ 	char	DF_stack_guard[guardsize];			\
+@@ -28,7 +34,7 @@
+ 
+ /* The exception stacks' physical storage. No guard pages required */
+ struct exception_stacks {
+-	ESTACKS_MEMBERS(0, 0)
++	ESTACKS_MEMBERS(0, VC_EXCEPTION_STKSZ)
+ };
+ 
+ /* The effective cpu entry area mapping with guard pages. */
+diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
+index 91d7182ad2d6e..4ec3613551e3b 100644
+--- a/arch/x86/include/asm/insn-eval.h
++++ b/arch/x86/include/asm/insn-eval.h
+@@ -21,6 +21,7 @@ int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs);
+ int insn_get_modrm_reg_off(struct insn *insn, struct pt_regs *regs);
+ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
+ int insn_get_code_seg_params(struct pt_regs *regs);
++int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip);
+ int insn_fetch_from_user(struct pt_regs *regs,
+ 			 unsigned char buf[MAX_INSN_SIZE]);
+ int insn_fetch_from_user_inatomic(struct pt_regs *regs,
+diff --git a/arch/x86/include/asm/irq_stack.h b/arch/x86/include/asm/irq_stack.h
+index 562854c608082..8d55bd11848cb 100644
+--- a/arch/x86/include/asm/irq_stack.h
++++ b/arch/x86/include/asm/irq_stack.h
+@@ -77,11 +77,11 @@
+  *     Function calls can clobber anything except the callee-saved
+  *     registers. Tell the compiler.
+  */
+-#define call_on_irqstack(func, asm_call, argconstr...)			\
++#define call_on_stack(stack, func, asm_call, argconstr...)		\
+ {									\
+ 	register void *tos asm("r11");					\
+ 									\
+-	tos = ((void *)__this_cpu_read(hardirq_stack_ptr));		\
++	tos = ((void *)(stack));					\
+ 									\
+ 	asm_inline volatile(						\
+ 	"movq	%%rsp, (%[tos])				\n"		\
+@@ -98,6 +98,25 @@
+ 	);								\
+ }
+ 
++#define ASM_CALL_ARG0							\
++	"call %P[__func]				\n"
++
++#define ASM_CALL_ARG1							\
++	"movq	%[arg1], %%rdi				\n"		\
++	ASM_CALL_ARG0
++
++#define ASM_CALL_ARG2							\
++	"movq	%[arg2], %%rsi				\n"		\
++	ASM_CALL_ARG1
++
++#define ASM_CALL_ARG3							\
++	"movq	%[arg3], %%rdx				\n"		\
++	ASM_CALL_ARG2
++
++#define call_on_irqstack(func, asm_call, argconstr...)			\
++	call_on_stack(__this_cpu_read(hardirq_stack_ptr),		\
++		      func, asm_call, argconstr)
++
+ /* Macros to assert type correctness for run_*_on_irqstack macros */
+ #define assert_function_type(func, proto)				\
+ 	static_assert(__builtin_types_compatible_p(typeof(&func), proto))
+@@ -147,8 +166,7 @@
+  */
+ #define ASM_CALL_SYSVEC							\
+ 	"call irq_enter_rcu				\n"		\
+-	"movq	%[arg1], %%rdi				\n"		\
+-	"call %P[__func]				\n"		\
++	ASM_CALL_ARG1							\
+ 	"call irq_exit_rcu				\n"
+ 
+ #define SYSVEC_CONSTRAINTS	, [arg1] "r" (regs)
+@@ -168,12 +186,10 @@
+  */
+ #define ASM_CALL_IRQ							\
+ 	"call irq_enter_rcu				\n"		\
+-	"movq	%[arg1], %%rdi				\n"		\
+-	"movl	%[arg2], %%esi				\n"		\
+-	"call %P[__func]				\n"		\
++	ASM_CALL_ARG2							\
+ 	"call irq_exit_rcu				\n"
+ 
+-#define IRQ_CONSTRAINTS	, [arg1] "r" (regs), [arg2] "r" (vector)
++#define IRQ_CONSTRAINTS	, [arg1] "r" (regs), [arg2] "r" ((unsigned long)vector)
+ 
+ #define run_irq_on_irqstack_cond(func, regs, vector)			\
+ {									\
+@@ -185,9 +201,6 @@
+ 			      IRQ_CONSTRAINTS, regs, vector);		\
+ }
+ 
+-#define ASM_CALL_SOFTIRQ						\
+-	"call %P[__func]				\n"
+-
+ /*
+  * Macro to invoke __do_softirq on the irq stack. This is only called from
+  * task context when bottom halves are about to be reenabled and soft
+@@ -197,7 +210,7 @@
+ #define do_softirq_own_stack()						\
+ {									\
+ 	__this_cpu_write(hardirq_stack_inuse, true);			\
+-	call_on_irqstack(__do_softirq, ASM_CALL_SOFTIRQ);		\
++	call_on_irqstack(__do_softirq, ASM_CALL_ARG0);			\
+ 	__this_cpu_write(hardirq_stack_inuse, false);			\
+ }
+ 
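The new ASM_CALL_ARGn helpers above compose by prepending one movq per
argument, so each level fills the next SysV argument register (rdi, rsi,
rdx). The two-argument expansion used by ASM_CALL_IRQ, written out for
illustration:

	movq %[arg2], %%rsi
	movq %[arg1], %%rdi
	call %P[__func]

The (unsigned long)vector cast in IRQ_CONSTRAINTS goes with this: arg2 is
now moved with a 64-bit movq instead of the old 32-bit movl, so the
constraint must supply a full 64-bit value.
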
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 41f7ee07271e1..5f929d02b5b04 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -745,7 +745,7 @@ struct kvm_vcpu_arch {
+ 		u8 preempted;
+ 		u64 msr_val;
+ 		u64 last_steal;
+-		struct gfn_to_pfn_cache cache;
++		struct gfn_to_hva_cache cache;
+ 	} st;
+ 
+ 	u64 l1_tsc_offset;
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 9c80c68d75b54..3fb9f5ebefa42 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -13,6 +13,7 @@
+ #ifndef __ASSEMBLY__
+ 
+ #include <linux/init.h>
++#include <linux/cc_platform.h>
+ 
+ #include <asm/bootparam.h>
+ 
+diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
+index a8d4ad8565681..e9e2c3ba59239 100644
+--- a/arch/x86/include/asm/page_64_types.h
++++ b/arch/x86/include/asm/page_64_types.h
+@@ -15,7 +15,7 @@
+ #define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
+ #define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
+ 
+-#define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
++#define EXCEPTION_STACK_ORDER (1 + KASAN_STACK_ORDER)
+ #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
+ 
+ #define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER)
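Quick arithmetic for the change above (a worked example, assuming 4 KiB
pages and KASAN disabled, i.e. KASAN_STACK_ORDER == 0):

	EXCEPTION_STKSZ (before) = PAGE_SIZE << 0 = 4096 bytes
	EXCEPTION_STKSZ (after)  = PAGE_SIZE << 1 = 8192 bytes

With KASAN enabled, KASAN_STACK_ORDER adds another left shift on top.
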
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index f3020c54e2cb3..6f9ed2e800f21 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -516,6 +516,7 @@ struct thread_struct {
+ 	 */
+ 	unsigned long		iopl_emul;
+ 
++	unsigned int		iopl_warn:1;
+ 	unsigned int		sig_on_uaccess_err:1;
+ 
+ 	/*
+@@ -587,7 +588,7 @@ static inline void load_sp0(unsigned long sp0)
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long get_wchan(struct task_struct *p);
++unsigned long __get_wchan(struct task_struct *p);
+ 
+ /*
+  * Generic CPUID function
+diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
+index f248eb2ac2d4a..3881b5333eb81 100644
+--- a/arch/x86/include/asm/stacktrace.h
++++ b/arch/x86/include/asm/stacktrace.h
+@@ -38,6 +38,16 @@ int get_stack_info(unsigned long *stack, struct task_struct *task,
+ bool get_stack_info_noinstr(unsigned long *stack, struct task_struct *task,
+ 			    struct stack_info *info);
+ 
++static __always_inline
++bool get_stack_guard_info(unsigned long *stack, struct stack_info *info)
++{
++	/* make sure it's not in the stack proper */
++	if (get_stack_info_noinstr(stack, current, info))
++		return false;
++	/* but if it is in the page below it, we hit a guard */
++	return get_stack_info_noinstr((void *)stack + PAGE_SIZE, current, info);
++}
++
+ const char *stack_type_name(enum stack_type type);
+ 
+ static inline bool on_stack(struct stack_info *info, void *addr, size_t len)
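get_stack_guard_info() above identifies a guard-page hit: the address must
not be on any known stack, while the address one page up must be, which is
exactly the signature of overflowing downward into a guard page. An
illustrative caller (the handler name matches the traps.h change below;
the surrounding fault-handler context is assumed):

	struct stack_info info;

	if (get_stack_guard_info((unsigned long *)fault_addr, &info))
		handle_stack_overflow(regs, fault_addr, &info);
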
+diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
+index 7f7200021bd13..6221be7cafc3b 100644
+--- a/arch/x86/include/asm/traps.h
++++ b/arch/x86/include/asm/traps.h
+@@ -40,9 +40,9 @@ void math_emulate(struct math_emu_info *);
+ bool fault_in_kernel_space(unsigned long address);
+ 
+ #ifdef CONFIG_VMAP_STACK
+-void __noreturn handle_stack_overflow(const char *message,
+-				      struct pt_regs *regs,
+-				      unsigned long fault_address);
++void __noreturn handle_stack_overflow(struct pt_regs *regs,
++				      unsigned long fault_address,
++				      struct stack_info *info);
+ #endif
+ 
+ #endif /* _ASM_X86_TRAPS_H */
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index 3e625c61f008f..d21b28f162810 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -21,6 +21,7 @@ CFLAGS_REMOVE_ftrace.o = -pg
+ CFLAGS_REMOVE_early_printk.o = -pg
+ CFLAGS_REMOVE_head64.o = -pg
+ CFLAGS_REMOVE_sev.o = -pg
++CFLAGS_REMOVE_cc_platform.o = -pg
+ endif
+ 
+ KASAN_SANITIZE_head$(BITS).o				:= n
+@@ -29,6 +30,7 @@ KASAN_SANITIZE_dumpstack_$(BITS).o			:= n
+ KASAN_SANITIZE_stacktrace.o				:= n
+ KASAN_SANITIZE_paravirt.o				:= n
+ KASAN_SANITIZE_sev.o					:= n
++KASAN_SANITIZE_cc_platform.o				:= n
+ 
+ # With some compiler versions the generated code results in boot hangs, caused
+ # by several compilation units. To be safe, disable all instrumentation.
+@@ -47,6 +49,7 @@ endif
+ KCOV_INSTRUMENT		:= n
+ 
+ CFLAGS_head$(BITS).o	+= -fno-stack-protector
++CFLAGS_cc_platform.o	+= -fno-stack-protector
+ 
+ CFLAGS_irq.o := -I $(srctree)/$(src)/../include/asm/trace
+ 
+@@ -150,6 +153,9 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
+ obj-$(CONFIG_UNWINDER_GUESS)		+= unwind_guess.o
+ 
+ obj-$(CONFIG_AMD_MEM_ENCRYPT)		+= sev.o
++
++obj-$(CONFIG_ARCH_HAS_CC_PLATFORM)	+= cc_platform.o
++
+ ###
+ # 64 bit specific files
+ ifeq ($(CONFIG_X86_64),y)
+diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
+new file mode 100644
+index 0000000000000..03bb2f343ddb7
+--- /dev/null
++++ b/arch/x86/kernel/cc_platform.c
+@@ -0,0 +1,69 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Confidential Computing Platform Capability checks
++ *
++ * Copyright (C) 2021 Advanced Micro Devices, Inc.
++ *
++ * Author: Tom Lendacky <thomas.lendacky@amd.com>
++ */
++
++#include <linux/export.h>
++#include <linux/cc_platform.h>
++#include <linux/mem_encrypt.h>
++
++#include <asm/processor.h>
++
++static bool __maybe_unused intel_cc_platform_has(enum cc_attr attr)
++{
++#ifdef CONFIG_INTEL_TDX_GUEST
++	return false;
++#else
++	return false;
++#endif
++}
++
++/*
++ * SME and SEV are very similar but they are not the same, so there are
++ * times that the kernel will need to distinguish between SME and SEV. The
++ * cc_platform_has() function is used for this.  When a distinction isn't
++ * needed, the CC_ATTR_MEM_ENCRYPT attribute can be used.
++ *
++ * The trampoline code is a good example for this requirement.  Before
++ * paging is activated, SME will access all memory as decrypted, but SEV
++ * will access all memory as encrypted.  So, when APs are being brought
++ * up under SME the trampoline area cannot be encrypted, whereas under SEV
++ * the trampoline area must be encrypted.
++ */
++static bool amd_cc_platform_has(enum cc_attr attr)
++{
++#ifdef CONFIG_AMD_MEM_ENCRYPT
++	switch (attr) {
++	case CC_ATTR_MEM_ENCRYPT:
++		return sme_me_mask;
++
++	case CC_ATTR_HOST_MEM_ENCRYPT:
++		return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
++
++	case CC_ATTR_GUEST_MEM_ENCRYPT:
++		return sev_status & MSR_AMD64_SEV_ENABLED;
++
++	case CC_ATTR_GUEST_STATE_ENCRYPT:
++		return sev_status & MSR_AMD64_SEV_ES_ENABLED;
++
++	default:
++		return false;
++	}
++#else
++	return false;
++#endif
++}
++
++
++bool cc_platform_has(enum cc_attr attr)
++{
++	if (sme_me_mask)
++		return amd_cc_platform_has(attr);
++
++	return false;
++}
++EXPORT_SYMBOL_GPL(cc_platform_has);
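+
[Annotation: the comment block above is the heart of this new file: cc_platform_has() is for callers that must distinguish SME from SEV, CC_ATTR_MEM_ENCRYPT for those that don't care. A minimal sketch of the trampoline case the comment mentions, using only cc_platform_has() and the CC_ATTR_* values handled in amd_cc_platform_has(); the mark_trampoline_*() helpers are illustrative stand-ins, not real kernel functions:

	#include <linux/cc_platform.h>

	static void setup_ap_trampoline(void)
	{
		if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
			/* SME: APs start with paging off and read memory
			 * as decrypted, so keep the trampoline decrypted. */
			mark_trampoline_decrypted();
		else if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
			/* SEV: every guest access is encrypted, so the
			 * trampoline must stay encrypted. */
			mark_trampoline_encrypted();
	}
]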
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index b7c003013d414..90bd46d1ce435 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -989,6 +989,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_IRPERF) &&
+ 	    !cpu_has_amd_erratum(c, amd_erratum_1054))
+ 		msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
++
++	check_null_seg_clears_base(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 340caa7aebfba..b16c8149bb9eb 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1390,9 +1390,8 @@ void __init early_cpu_init(void)
+ 	early_identify_cpu(&boot_cpu_data);
+ }
+ 
+-static void detect_null_seg_behavior(struct cpuinfo_x86 *c)
++static bool detect_null_seg_behavior(void)
+ {
+-#ifdef CONFIG_X86_64
+ 	/*
+ 	 * Empirically, writing zero to a segment selector on AMD does
+ 	 * not clear the base, whereas writing zero to a segment
+@@ -1413,10 +1412,43 @@ static void detect_null_seg_behavior(struct cpuinfo_x86 *c)
+ 	wrmsrl(MSR_FS_BASE, 1);
+ 	loadsegment(fs, 0);
+ 	rdmsrl(MSR_FS_BASE, tmp);
+-	if (tmp != 0)
+-		set_cpu_bug(c, X86_BUG_NULL_SEG);
+ 	wrmsrl(MSR_FS_BASE, old_base);
+-#endif
++	return tmp == 0;
++}
++
++void check_null_seg_clears_base(struct cpuinfo_x86 *c)
++{
++	/* BUG_NULL_SEG is only relevant with 64bit userspace */
++	if (!IS_ENABLED(CONFIG_X86_64))
++		return;
++
++	/* Zen3 CPUs advertise Null Selector Clears Base in CPUID. */
++	if (c->extended_cpuid_level >= 0x80000021 &&
++	    cpuid_eax(0x80000021) & BIT(6))
++		return;
++
++	/*
++	 * CPUID bit above wasn't set. If this kernel is still running
++	 * as a HV guest, then the HV has decided not to advertise
++	 * that CPUID bit for whatever reason. For example, one
++	 * member of the migration pool might be vulnerable, which
++	 * means the bug is present: set the BUG flag and return.
++	 */
++	if (cpu_has(c, X86_FEATURE_HYPERVISOR)) {
++		set_cpu_bug(c, X86_BUG_NULL_SEG);
++		return;
++	}
++
++	/*
++	 * Zen2 CPUs also have this behaviour, but no CPUID bit.
++	 * 0x18 is the respective family for Hygon.
++	 */
++	if ((c->x86 == 0x17 || c->x86 == 0x18) &&
++	    detect_null_seg_behavior())
++		return;
++
++	/* All the remaining ones are affected */
++	set_cpu_bug(c, X86_BUG_NULL_SEG);
+ }
+ 
+ static void generic_identify(struct cpuinfo_x86 *c)
+@@ -1452,8 +1484,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
+ 
+ 	get_model_name(c); /* Default name */
+ 
+-	detect_null_seg_behavior(c);
+-
+ 	/*
+ 	 * ESPFIX is a strange bug.  All real CPUs have it.  Paravirt
+ 	 * systems that run Linux at CPL > 0 may or may not have the
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 95521302630d4..ee6f23f7587d4 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -75,6 +75,7 @@ extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology(struct cpuinfo_x86 *c);
+ extern int detect_ht_early(struct cpuinfo_x86 *c);
+ extern void detect_ht(struct cpuinfo_x86 *c);
++extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
+ 
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 6d50136f7ab98..3fcdda4c1e114 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -335,6 +335,8 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 	/* Hygon CPUs don't reset SS attributes on SYSRET, Xen does. */
+ 	if (!cpu_has(c, X86_FEATURE_XENPV))
+ 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
++
++	check_null_seg_clears_base(c);
+ }
+ 
+ static void cpu_detect_tlb_hygon(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index acfd5d9f93c68..bb9a46a804bf2 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -547,12 +547,13 @@ bool intel_filter_mce(struct mce *m)
+ {
+ 	struct cpuinfo_x86 *c = &boot_cpu_data;
+ 
+-	/* MCE errata HSD131, HSM142, HSW131, BDM48, and HSM142 */
++	/* MCE errata HSD131, HSM142, HSW131, BDM48, HSM142 and SKX37 */
+ 	if ((c->x86 == 6) &&
+ 	    ((c->x86_model == INTEL_FAM6_HASWELL) ||
+ 	     (c->x86_model == INTEL_FAM6_HASWELL_L) ||
+ 	     (c->x86_model == INTEL_FAM6_BROADWELL) ||
+-	     (c->x86_model == INTEL_FAM6_HASWELL_G)) &&
++	     (c->x86_model == INTEL_FAM6_HASWELL_G) ||
++	     (c->x86_model == INTEL_FAM6_SKYLAKE_X)) &&
+ 	    (m->bank == 0) &&
+ 	    ((m->status & 0xa0000000ffffffff) == 0x80000000000f0005))
+ 		return true;
+diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
+index 5601b95944fae..6c5defd6569a3 100644
+--- a/arch/x86/kernel/dumpstack_64.c
++++ b/arch/x86/kernel/dumpstack_64.c
+@@ -32,9 +32,15 @@ const char *stack_type_name(enum stack_type type)
+ {
+ 	BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+ 
++	if (type == STACK_TYPE_TASK)
++		return "TASK";
++
+ 	if (type == STACK_TYPE_IRQ)
+ 		return "IRQ";
+ 
++	if (type == STACK_TYPE_SOFTIRQ)
++		return "SOFTIRQ";
++
+ 	if (type == STACK_TYPE_ENTRY) {
+ 		/*
+ 		 * On 64-bit, we have a generic entry stack that we
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index e28f6a5d14f1b..766ffe3ba3137 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -291,8 +291,10 @@ void kvm_set_posted_intr_wakeup_handler(void (*handler)(void))
+ {
+ 	if (handler)
+ 		kvm_posted_intr_wakeup_handler = handler;
+-	else
++	else {
+ 		kvm_posted_intr_wakeup_handler = dummy_handler;
++		synchronize_rcu();
++	}
+ }
+ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wakeup_handler);
+ 
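[Annotation: the synchronize_rcu() added above is what makes restoring the dummy handler safe on unload: it guarantees no CPU is still executing the old wakeup handler when the call returns. A sketch of the teardown ordering this buys the caller (kvm_teardown_example() is illustrative, not from this patch):

	static void kvm_teardown_example(void)
	{
		/* Swaps dummy_handler back in and, with the change above,
		 * waits for any in-flight wakeup interrupt handlers. */
		kvm_set_posted_intr_wakeup_handler(NULL);

		/* Only now is it safe to free module text and data that
		 * the old handler may have referenced. */
	}
]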
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 1d9463e3096b6..2fe1810e922a9 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -43,6 +43,7 @@
+ #include <asm/io_bitmap.h>
+ #include <asm/proto.h>
+ #include <asm/frame.h>
++#include <asm/unwind.h>
+ 
+ #include "process.h"
+ 
+@@ -132,6 +133,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+ 	p->thread.io_bitmap = NULL;
++	p->thread.iopl_warn = 0;
+ 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
+ 
+ #ifdef CONFIG_X86_64
+@@ -942,60 +944,22 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+  * because the task might wake up and we might look at a stack
+  * changing under us.
+  */
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+-	unsigned long start, bottom, top, sp, fp, ip, ret = 0;
+-	int count = 0;
++	struct unwind_state state;
++	unsigned long addr = 0;
+ 
+-	if (p == current || task_is_running(p))
+-		return 0;
+-
+-	if (!try_get_task_stack(p))
+-		return 0;
+-
+-	start = (unsigned long)task_stack_page(p);
+-	if (!start)
+-		goto out;
+-
+-	/*
+-	 * Layout of the stack page:
+-	 *
+-	 * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
+-	 * PADDING
+-	 * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
+-	 * stack
+-	 * ----------- bottom = start
+-	 *
+-	 * The tasks stack pointer points at the location where the
+-	 * framepointer is stored. The data on the stack is:
+-	 * ... IP FP ... IP FP
+-	 *
+-	 * We need to read FP and IP, so we need to adjust the upper
+-	 * bound by another unsigned long.
+-	 */
+-	top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
+-	top -= 2 * sizeof(unsigned long);
+-	bottom = start;
+-
+-	sp = READ_ONCE(p->thread.sp);
+-	if (sp < bottom || sp > top)
+-		goto out;
+-
+-	fp = READ_ONCE_NOCHECK(((struct inactive_task_frame *)sp)->bp);
+-	do {
+-		if (fp < bottom || fp > top)
+-			goto out;
+-		ip = READ_ONCE_NOCHECK(*(unsigned long *)(fp + sizeof(unsigned long)));
+-		if (!in_sched_functions(ip)) {
+-			ret = ip;
+-			goto out;
+-		}
+-		fp = READ_ONCE_NOCHECK(*(unsigned long *)fp);
+-	} while (count++ < 16 && !task_is_running(p));
++	for (unwind_start(&state, p, NULL, NULL); !unwind_done(&state);
++	     unwind_next_frame(&state)) {
++		addr = unwind_get_return_address(&state);
++		if (!addr)
++			break;
++		if (in_sched_functions(addr))
++			continue;
++		break;
++	}
+ 
+-out:
+-	put_task_stack(p);
+-	return ret;
++	return addr;
+ }
+ 
+ long do_arch_prctl_common(struct task_struct *task, int option,
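+
[Annotation: the get_wchan() -> __get_wchan() rename pairs with a core scheduler change in the same upstream series that adds a common get_wchan() wrapper, so each architecture keeps only the stack walk. A paraphrased sketch of that wrapper, shown for context only (details may differ from the actual scheduler code):

	unsigned long get_wchan(struct task_struct *p)
	{
		unsigned long ip = 0;
		unsigned int state;

		if (!p || p == current)
			return 0;

		/* Only walk the stack while the task is pinned blocked. */
		raw_spin_lock_irq(&p->pi_lock);
		state = READ_ONCE(p->__state);
		smp_rmb(); /* see try_to_wake_up() */
		if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
			ip = __get_wchan(p);
		raw_spin_unlock_irq(&p->pi_lock);

		return ip;
	}
]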
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index a6895e440bc35..88401675dabb0 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
+ struct sev_es_runtime_data {
+ 	struct ghcb ghcb_page;
+ 
+-	/* Physical storage for the per-CPU IST stack of the #VC handler */
+-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
+-
+-	/*
+-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
+-	 * The fall-back stack is used when it is not safe to switch back to the
+-	 * interrupted stack in the #VC entry code.
+-	 */
+-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
+-
+ 	/*
+ 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
+ 	 * It is needed when an NMI happens while the #VC handler uses the real
+@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
+ /* Needed in vc_early_forward_exception */
+ void do_early_exception(struct pt_regs *regs, int trapnr);
+ 
+-static void __init setup_vc_stacks(int cpu)
+-{
+-	struct sev_es_runtime_data *data;
+-	struct cpu_entry_area *cea;
+-	unsigned long vaddr;
+-	phys_addr_t pa;
+-
+-	data = per_cpu(runtime_data, cpu);
+-	cea  = get_cpu_entry_area(cpu);
+-
+-	/* Map #VC IST stack */
+-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
+-	pa    = __pa(data->ist_stack);
+-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+-
+-	/* Map VC fall-back stack */
+-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
+-	pa    = __pa(data->fallback_stack);
+-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
+-}
+-
+ static __always_inline bool on_vc_stack(struct pt_regs *regs)
+ {
+ 	unsigned long sp = regs->sp;
+@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
+ 	for_each_possible_cpu(cpu) {
+ 		alloc_runtime_data(cpu);
+ 		init_ghcb(cpu);
+-		setup_vc_stacks(cpu);
+ 	}
+ 
+ 	sev_es_setup_play_dead();
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index a58800973aed3..5b1984d468227 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -313,17 +313,19 @@ out:
+ }
+ 
+ #ifdef CONFIG_VMAP_STACK
+-__visible void __noreturn handle_stack_overflow(const char *message,
+-						struct pt_regs *regs,
+-						unsigned long fault_address)
++__visible void __noreturn handle_stack_overflow(struct pt_regs *regs,
++						unsigned long fault_address,
++						struct stack_info *info)
+ {
+-	printk(KERN_EMERG "BUG: stack guard page was hit at %p (stack is %p..%p)\n",
+-		 (void *)fault_address, current->stack,
+-		 (char *)current->stack + THREAD_SIZE - 1);
+-	die(message, regs, 0);
++	const char *name = stack_type_name(info->type);
++
++	printk(KERN_EMERG "BUG: %s stack guard page was hit at %p (stack is %p..%p)\n",
++	       name, (void *)fault_address, info->begin, info->end);
++
++	die("stack guard page", regs, 0);
+ 
+ 	/* Be absolutely certain we don't return. */
+-	panic("%s", message);
++	panic("%s stack guard hit", name);
+ }
+ #endif
+ 
+@@ -353,6 +355,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
+ 
+ #ifdef CONFIG_VMAP_STACK
+ 	unsigned long address = read_cr2();
++	struct stack_info info;
+ #endif
+ 
+ #ifdef CONFIG_X86_ESPFIX64
+@@ -455,10 +458,8 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
+ 	 * stack even if the actual trigger for the double fault was
+ 	 * something else.
+ 	 */
+-	if ((unsigned long)task_stack_page(tsk) - 1 - address < PAGE_SIZE) {
+-		handle_stack_overflow("kernel stack overflow (double-fault)",
+-				      regs, address);
+-	}
++	if (get_stack_guard_info((void *)address, &info))
++		handle_stack_overflow(regs, address, &info);
+ #endif
+ 
+ 	pr_emerg("PANIC: double fault, error_code: 0x%lx\n", error_code);
+@@ -528,6 +529,36 @@ static enum kernel_gp_hint get_kernel_gp_address(struct pt_regs *regs,
+ 
+ #define GPFSTR "general protection fault"
+ 
++static bool fixup_iopl_exception(struct pt_regs *regs)
++{
++	struct thread_struct *t = &current->thread;
++	unsigned char byte;
++	unsigned long ip;
++
++	if (!IS_ENABLED(CONFIG_X86_IOPL_IOPERM) || t->iopl_emul != 3)
++		return false;
++
++	if (insn_get_effective_ip(regs, &ip))
++		return false;
++
++	if (get_user(byte, (const char __user *)ip))
++		return false;
++
++	if (byte != 0xfa && byte != 0xfb)
++		return false;
++
++	if (!t->iopl_warn && printk_ratelimit()) {
++		pr_err("%s[%d] attempts to use CLI/STI, pretending it's a NOP, ip:%lx",
++		       current->comm, task_pid_nr(current), ip);
++		print_vma_addr(KERN_CONT " in ", ip);
++		pr_cont("\n");
++		t->iopl_warn = 1;
++	}
++
++	regs->ip += 1;
++	return true;
++}
++
+ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
+ {
+ 	char desc[sizeof(GPFSTR) + 50 + 2*sizeof(unsigned long) + 1] = GPFSTR;
+@@ -553,6 +584,9 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
+ 	tsk = current;
+ 
+ 	if (user_mode(regs)) {
++		if (fixup_iopl_exception(regs))
++			goto exit;
++
+ 		tsk->thread.error_code = error_code;
+ 		tsk->thread.trap_nr = X86_TRAP_GP;
+ 
+@@ -709,7 +743,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
+ 	stack = (unsigned long *)sp;
+ 
+ 	if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY ||
+-	    info.type >= STACK_TYPE_EXCEPTION_LAST)
++	    info.type > STACK_TYPE_EXCEPTION_LAST)
+ 		sp = __this_cpu_ist_top_va(VC2);
+ 
+ sync:
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 751aa85a30012..f666fd79d8ad6 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -232,6 +232,25 @@ u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu)
+ 	return rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
+ }
+ 
++static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
++                        int nent)
++{
++    int r;
++
++    r = kvm_check_cpuid(e2, nent);
++    if (r)
++        return r;
++
++    kvfree(vcpu->arch.cpuid_entries);
++    vcpu->arch.cpuid_entries = e2;
++    vcpu->arch.cpuid_nent = nent;
++
++    kvm_update_cpuid_runtime(vcpu);
++    kvm_vcpu_after_set_cpuid(vcpu);
++
++    return 0;
++}
++
+ /* when an old userspace process fills a new kernel module */
+ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
+ 			     struct kvm_cpuid *cpuid,
+@@ -268,18 +287,9 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
+ 		e2[i].padding[2] = 0;
+ 	}
+ 
+-	r = kvm_check_cpuid(e2, cpuid->nent);
+-	if (r) {
++	r = kvm_set_cpuid(vcpu, e2, cpuid->nent);
++	if (r)
+ 		kvfree(e2);
+-		goto out_free_cpuid;
+-	}
+-
+-	kvfree(vcpu->arch.cpuid_entries);
+-	vcpu->arch.cpuid_entries = e2;
+-	vcpu->arch.cpuid_nent = cpuid->nent;
+-
+-	kvm_update_cpuid_runtime(vcpu);
+-	kvm_vcpu_after_set_cpuid(vcpu);
+ 
+ out_free_cpuid:
+ 	kvfree(e);
+@@ -303,20 +313,11 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
+ 			return PTR_ERR(e2);
+ 	}
+ 
+-	r = kvm_check_cpuid(e2, cpuid->nent);
+-	if (r) {
++	r = kvm_set_cpuid(vcpu, e2, cpuid->nent);
++	if (r)
+ 		kvfree(e2);
+-		return r;
+-	}
+ 
+-	kvfree(vcpu->arch.cpuid_entries);
+-	vcpu->arch.cpuid_entries = e2;
+-	vcpu->arch.cpuid_nent = cpuid->nent;
+-
+-	kvm_update_cpuid_runtime(vcpu);
+-	kvm_vcpu_after_set_cpuid(vcpu);
+-
+-	return 0;
++	return r;
+ }
+ 
+ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
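+
[Annotation: the new kvm_set_cpuid() takes ownership of e2 on success (the old entries are kvfree'd and replaced), so both ioctl paths now free e2 only when it is rejected. The resulting calling convention as a sketch; build_entries() is an illustrative stand-in for the copy-in logic:

	struct kvm_cpuid_entry2 *e2 = build_entries(cpuid); /* illustrative */
	int r;

	r = kvm_set_cpuid(vcpu, e2, cpuid->nent);
	if (r)
		kvfree(e2);	/* rejected: still owned by the caller */
	/* on success e2 is owned by vcpu->arch.cpuid_entries; don't free it */
]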
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index ce30503f5438f..0931b1ee1fc3c 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -523,29 +523,6 @@ static int nested_vmx_check_tpr_shadow_controls(struct kvm_vcpu *vcpu,
+ 	return 0;
+ }
+ 
+-/*
+- * Check if MSR is intercepted for L01 MSR bitmap.
+- */
+-static bool msr_write_intercepted_l01(struct kvm_vcpu *vcpu, u32 msr)
+-{
+-	unsigned long *msr_bitmap;
+-	int f = sizeof(unsigned long);
+-
+-	if (!cpu_has_vmx_msr_bitmap())
+-		return true;
+-
+-	msr_bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
+-
+-	if (msr <= 0x1fff) {
+-		return !!test_bit(msr, msr_bitmap + 0x800 / f);
+-	} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
+-		msr &= 0x1fff;
+-		return !!test_bit(msr, msr_bitmap + 0xc00 / f);
+-	}
+-
+-	return true;
+-}
+-
+ /*
+  * If a msr is allowed by L0, we should check whether it is allowed by L1.
+  * The corresponding bit will be cleared unless both of L0 and L1 allow it.
+@@ -599,6 +576,34 @@ static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap)
+ 	}
+ }
+ 
++#define BUILD_NVMX_MSR_INTERCEPT_HELPER(rw)					\
++static inline									\
++void nested_vmx_set_msr_##rw##_intercept(struct vcpu_vmx *vmx,			\
++					 unsigned long *msr_bitmap_l1,		\
++					 unsigned long *msr_bitmap_l0, u32 msr)	\
++{										\
++	if (vmx_test_msr_bitmap_##rw(vmx->vmcs01.msr_bitmap, msr) ||		\
++	    vmx_test_msr_bitmap_##rw(msr_bitmap_l1, msr))			\
++		vmx_set_msr_bitmap_##rw(msr_bitmap_l0, msr);			\
++	else									\
++		vmx_clear_msr_bitmap_##rw(msr_bitmap_l0, msr);			\
++}
++BUILD_NVMX_MSR_INTERCEPT_HELPER(read)
++BUILD_NVMX_MSR_INTERCEPT_HELPER(write)
++
++static inline void nested_vmx_set_intercept_for_msr(struct vcpu_vmx *vmx,
++						    unsigned long *msr_bitmap_l1,
++						    unsigned long *msr_bitmap_l0,
++						    u32 msr, int types)
++{
++	if (types & MSR_TYPE_R)
++		nested_vmx_set_msr_read_intercept(vmx, msr_bitmap_l1,
++						  msr_bitmap_l0, msr);
++	if (types & MSR_TYPE_W)
++		nested_vmx_set_msr_write_intercept(vmx, msr_bitmap_l1,
++						   msr_bitmap_l0, msr);
++}
++
+ /*
+  * Merge L0's and L1's MSR bitmap, return false to indicate that
+  * we do not use the hardware.
+@@ -606,10 +611,11 @@ static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap)
+ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 						 struct vmcs12 *vmcs12)
+ {
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	int msr;
+ 	unsigned long *msr_bitmap_l1;
+-	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
+-	struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map;
++	unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
++	struct kvm_host_map *map = &vmx->nested.msr_bitmap_map;
+ 
+ 	/* Nothing to do if the MSR bitmap is not in use.  */
+ 	if (!cpu_has_vmx_msr_bitmap() ||
+@@ -660,44 +666,27 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 		}
+ 	}
+ 
+-	/* KVM unconditionally exposes the FS/GS base MSRs to L1. */
++	/*
++	 * Always check vmcs01's bitmap to honor userspace MSR filters and any
++	 * other runtime changes to vmcs01's bitmap, e.g. dynamic pass-through.
++	 */
+ #ifdef CONFIG_X86_64
+-	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+-					     MSR_FS_BASE, MSR_TYPE_RW);
++	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
++					 MSR_FS_BASE, MSR_TYPE_RW);
+ 
+-	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+-					     MSR_GS_BASE, MSR_TYPE_RW);
++	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
++					 MSR_GS_BASE, MSR_TYPE_RW);
+ 
+-	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+-					     MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
++					 MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+ #endif
++	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
++					 MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
+ 
+-	/*
+-	 * Checking the L0->L1 bitmap is trying to verify two things:
+-	 *
+-	 * 1. L0 gave a permission to L1 to actually passthrough the MSR. This
+-	 *    ensures that we do not accidentally generate an L02 MSR bitmap
+-	 *    from the L12 MSR bitmap that is too permissive.
+-	 * 2. That L1 or L2s have actually used the MSR. This avoids
+-	 *    unnecessarily merging of the bitmap if the MSR is unused. This
+-	 *    works properly because we only update the L01 MSR bitmap lazily.
+-	 *    So even if L0 should pass L1 these MSRs, the L01 bitmap is only
+-	 *    updated to reflect this when L1 (or its L2s) actually write to
+-	 *    the MSR.
+-	 */
+-	if (!msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL))
+-		nested_vmx_disable_intercept_for_msr(
+-					msr_bitmap_l1, msr_bitmap_l0,
+-					MSR_IA32_SPEC_CTRL,
+-					MSR_TYPE_R | MSR_TYPE_W);
+-
+-	if (!msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD))
+-		nested_vmx_disable_intercept_for_msr(
+-					msr_bitmap_l1, msr_bitmap_l0,
+-					MSR_IA32_PRED_CMD,
+-					MSR_TYPE_W);
++	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
++					 MSR_IA32_PRED_CMD, MSR_TYPE_W);
+ 
+-	kvm_vcpu_unmap(vcpu, &to_vmx(vcpu)->nested.msr_bitmap_map, false);
++	kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
+ 
+ 	return true;
+ }
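+
[Annotation: BUILD_NVMX_MSR_INTERCEPT_HELPER() encodes a single rule: the merged vmcs02 bitmap intercepts an MSR if either L0's vmcs01 bitmap or L1's bitmap intercepts it, which is how userspace MSR filters applied to vmcs01 now carry over into nested runs. Expanded by hand for the write case:

	if (vmx_test_msr_bitmap_write(vmx->vmcs01.msr_bitmap, msr) ||
	    vmx_test_msr_bitmap_write(msr_bitmap_l1, msr))
		vmx_set_msr_bitmap_write(msr_bitmap_l0, msr);	/* intercept */
	else
		vmx_clear_msr_bitmap_write(msr_bitmap_l0, msr);	/* pass through */
]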
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 3cb2f4739e324..26993681d8a1f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -770,24 +770,13 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
+ /*
+  * Check if MSR is intercepted for currently loaded MSR bitmap.
+  */
+-static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
++static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
+ {
+-	unsigned long *msr_bitmap;
+-	int f = sizeof(unsigned long);
+-
+-	if (!cpu_has_vmx_msr_bitmap())
++	if (!(exec_controls_get(vmx) & CPU_BASED_USE_MSR_BITMAPS))
+ 		return true;
+ 
+-	msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
+-
+-	if (msr <= 0x1fff) {
+-		return !!test_bit(msr, msr_bitmap + 0x800 / f);
+-	} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
+-		msr &= 0x1fff;
+-		return !!test_bit(msr, msr_bitmap + 0xc00 / f);
+-	}
+-
+-	return true;
++	return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap,
++					 MSR_IA32_SPEC_CTRL);
+ }
+ 
+ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+@@ -3673,46 +3662,6 @@ void free_vpid(int vpid)
+ 	spin_unlock(&vmx_vpid_lock);
+ }
+ 
+-static void vmx_clear_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
+-{
+-	int f = sizeof(unsigned long);
+-
+-	if (msr <= 0x1fff)
+-		__clear_bit(msr, msr_bitmap + 0x000 / f);
+-	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+-		__clear_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
+-}
+-
+-static void vmx_clear_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
+-{
+-	int f = sizeof(unsigned long);
+-
+-	if (msr <= 0x1fff)
+-		__clear_bit(msr, msr_bitmap + 0x800 / f);
+-	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+-		__clear_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
+-}
+-
+-static void vmx_set_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
+-{
+-	int f = sizeof(unsigned long);
+-
+-	if (msr <= 0x1fff)
+-		__set_bit(msr, msr_bitmap + 0x000 / f);
+-	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+-		__set_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
+-}
+-
+-static void vmx_set_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
+-{
+-	int f = sizeof(unsigned long);
+-
+-	if (msr <= 0x1fff)
+-		__set_bit(msr, msr_bitmap + 0x800 / f);
+-	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+-		__set_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
+-}
+-
+ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -6685,7 +6634,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
+ 	 * save it.
+ 	 */
+-	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
++	if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
+ 		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ 
+ 	x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
+@@ -7517,6 +7466,8 @@ static void vmx_migrate_timers(struct kvm_vcpu *vcpu)
+ 
+ static void hardware_unsetup(void)
+ {
++	kvm_set_posted_intr_wakeup_handler(NULL);
++
+ 	if (nested)
+ 		nested_vmx_hardware_unsetup();
+ 
+@@ -7844,8 +7795,6 @@ static __init int hardware_setup(void)
+ 		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
+ 	}
+ 
+-	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
+-
+ 	kvm_mce_cap_supported |= MCG_LMCE_P;
+ 
+ 	if (pt_mode != PT_MODE_SYSTEM && pt_mode != PT_MODE_HOST_GUEST)
+@@ -7869,6 +7818,9 @@ static __init int hardware_setup(void)
+ 	r = alloc_kvm_area();
+ 	if (r)
+ 		nested_vmx_hardware_unsetup();
++
++	kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
++
+ 	return r;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 17a1cb4b059df..88d2e939aa360 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -406,6 +406,69 @@ static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
+ 
+ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+ 
++static inline bool vmx_test_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		return test_bit(msr, msr_bitmap + 0x000 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		return test_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
++	return true;
++}
++
++static inline bool vmx_test_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		return test_bit(msr, msr_bitmap + 0x800 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		return test_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
++	return true;
++}
++
++static inline void vmx_clear_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		__clear_bit(msr, msr_bitmap + 0x000 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		__clear_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
++}
++
++static inline void vmx_clear_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		__clear_bit(msr, msr_bitmap + 0x800 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		__clear_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
++}
++
++static inline void vmx_set_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		__set_bit(msr, msr_bitmap + 0x000 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		__set_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
++}
++
++static inline void vmx_set_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
++{
++	int f = sizeof(unsigned long);
++
++	if (msr <= 0x1fff)
++		__set_bit(msr, msr_bitmap + 0x800 / f);
++	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
++		__set_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
++}
++
++
+ static inline u8 vmx_get_rvi(void)
+ {
+ 	return vmcs_read16(GUEST_INTR_STATUS) & 0xff;
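+
[Annotation: the 0x000/0x400/0x800/0xc00 offsets in these helpers follow the hardware MSR-bitmap layout, one 4 KiB page holding four 1 KiB bitmaps (per the Intel SDM):

	/*   0x000-0x3ff  read  bitmap, MSRs 0x00000000-0x00001fff
	 *   0x400-0x7ff  read  bitmap, MSRs 0xc0000000-0xc0001fff
	 *   0x800-0xbff  write bitmap, MSRs 0x00000000-0x00001fff
	 *   0xc00-0xfff  write bitmap, MSRs 0xc0000000-0xc0001fff
	 *
	 * A set bit means the access causes a VM exit; MSRs outside the
	 * two ranges always exit, hence the "return true" fallbacks. */
]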
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 6aea38dfb0bb0..285e865931436 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3190,8 +3190,11 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
+ 
+ static void record_steal_time(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_host_map map;
+-	struct kvm_steal_time *st;
++	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
++	struct kvm_steal_time __user *st;
++	struct kvm_memslots *slots;
++	u64 steal;
++	u32 version;
+ 
+ 	if (kvm_xen_msr_enabled(vcpu->kvm)) {
+ 		kvm_xen_runstate_set_running(vcpu);
+@@ -3201,47 +3204,86 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
+ 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ 		return;
+ 
+-	/* -EAGAIN is returned in atomic context so we can just return. */
+-	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT,
+-			&map, &vcpu->arch.st.cache, false))
++	if (WARN_ON_ONCE(current->mm != vcpu->kvm->mm))
+ 		return;
+ 
+-	st = map.hva +
+-		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
++	slots = kvm_memslots(vcpu->kvm);
++
++	if (unlikely(slots->generation != ghc->generation ||
++		     kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
++		gfn_t gfn = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
++
++		/* We rely on the fact that it fits in a single page. */
++		BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
++
++		if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gfn, sizeof(*st)) ||
++		    kvm_is_error_hva(ghc->hva) || !ghc->memslot)
++			return;
++	}
+ 
++	st = (struct kvm_steal_time __user *)ghc->hva;
+ 	/*
+ 	 * Doing a TLB flush here, on the guest's behalf, can avoid
+ 	 * expensive IPIs.
+ 	 */
+ 	if (guest_pv_has(vcpu, KVM_FEATURE_PV_TLB_FLUSH)) {
+-		u8 st_preempted = xchg(&st->preempted, 0);
++		u8 st_preempted = 0;
++		int err = -EFAULT;
++
++		if (!user_access_begin(st, sizeof(*st)))
++			return;
++
++		asm volatile("1: xchgb %0, %2\n"
++			     "xor %1, %1\n"
++			     "2:\n"
++			     _ASM_EXTABLE_UA(1b, 2b)
++			     : "+r" (st_preempted),
++			       "+&r" (err)
++			     : "m" (st->preempted));
++		if (err)
++			goto out;
++
++		user_access_end();
++
++		vcpu->arch.st.preempted = 0;
+ 
+ 		trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
+ 				       st_preempted & KVM_VCPU_FLUSH_TLB);
+ 		if (st_preempted & KVM_VCPU_FLUSH_TLB)
+ 			kvm_vcpu_flush_tlb_guest(vcpu);
++
++		if (!user_access_begin(st, sizeof(*st)))
++			goto dirty;
+ 	} else {
+-		st->preempted = 0;
+-	}
++		if (!user_access_begin(st, sizeof(*st)))
++			return;
+ 
+-	vcpu->arch.st.preempted = 0;
++		unsafe_put_user(0, &st->preempted, out);
++		vcpu->arch.st.preempted = 0;
++	}
+ 
+-	if (st->version & 1)
+-		st->version += 1;  /* first time write, random junk */
++	unsafe_get_user(version, &st->version, out);
++	if (version & 1)
++		version += 1;  /* first time write, random junk */
+ 
+-	st->version += 1;
++	version += 1;
++	unsafe_put_user(version, &st->version, out);
+ 
+ 	smp_wmb();
+ 
+-	st->steal += current->sched_info.run_delay -
++	unsafe_get_user(steal, &st->steal, out);
++	steal += current->sched_info.run_delay -
+ 		vcpu->arch.st.last_steal;
+ 	vcpu->arch.st.last_steal = current->sched_info.run_delay;
++	unsafe_put_user(steal, &st->steal, out);
+ 
+-	smp_wmb();
+-
+-	st->version += 1;
++	version += 1;
++	unsafe_put_user(version, &st->version, out);
+ 
+-	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, false);
++ out:
++	user_access_end();
++ dirty:
++	mark_page_dirty_in_slot(vcpu->kvm, ghc->memslot, gpa_to_gfn(ghc->gpa));
+ }
+ 
+ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+@@ -4280,8 +4322,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_host_map map;
+-	struct kvm_steal_time *st;
++	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
++	struct kvm_steal_time __user *st;
++	struct kvm_memslots *slots;
++	static const u8 preempted = KVM_VCPU_PREEMPTED;
+ 
+ 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ 		return;
+@@ -4289,16 +4333,23 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ 	if (vcpu->arch.st.preempted)
+ 		return;
+ 
+-	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
+-			&vcpu->arch.st.cache, true))
++	/* This happens on process exit */
++	if (unlikely(current->mm != vcpu->kvm->mm))
++		return;
++
++	slots = kvm_memslots(vcpu->kvm);
++
++	if (unlikely(slots->generation != ghc->generation ||
++		     kvm_is_error_hva(ghc->hva) || !ghc->memslot))
+ 		return;
+ 
+-	st = map.hva +
+-		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
++	st = (struct kvm_steal_time __user *)ghc->hva;
++	BUILD_BUG_ON(sizeof(st->preempted) != sizeof(preempted));
+ 
+-	st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
++	if (!copy_to_user_nofault(&st->preempted, &preempted, sizeof(preempted)))
++		vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
+ 
+-	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
++	mark_page_dirty_in_slot(vcpu->kvm, ghc->memslot, gpa_to_gfn(ghc->gpa));
+ }
+ 
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+@@ -10812,11 +10863,8 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+-	struct gfn_to_pfn_cache *cache = &vcpu->arch.st.cache;
+ 	int idx;
+ 
+-	kvm_release_pfn(cache->pfn, cache->dirty, cache);
+-
+ 	kvmclock_reset(vcpu);
+ 
+ 	static_call(kvm_x86_vcpu_free)(vcpu);
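+
[Annotation: record_steal_time() above keeps the pvclock-style version protocol: the version is bumped to an odd value before the data is updated and back to even afterwards, with smp_wmb() ordering the stores. A sketch of the guest-side read loop that pairs with it (field names as in struct kvm_steal_time; illustrative, not from this patch):

	u32 v1, v2;
	u64 steal;

	do {
		v1 = READ_ONCE(st->version);
		smp_rmb();			/* pairs with host smp_wmb() */
		steal = READ_ONCE(st->steal);
		smp_rmb();
		v2 = READ_ONCE(st->version);
	} while ((v1 & 1) || v1 != v2);		/* odd = update in flight */
]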
+diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
+index a1d24fdc07cf0..eb3ccffb9b9dc 100644
+--- a/arch/x86/lib/insn-eval.c
++++ b/arch/x86/lib/insn-eval.c
+@@ -1417,7 +1417,7 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
+ 	}
+ }
+ 
+-static int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip)
++int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip)
+ {
+ 	unsigned long seg_base = 0;
+ 
+diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c
+index c565def611e24..55e371cc69fd5 100644
+--- a/arch/x86/lib/insn.c
++++ b/arch/x86/lib/insn.c
+@@ -13,6 +13,7 @@
+ #endif
+ #include <asm/inat.h> /*__ignore_sync_check__ */
+ #include <asm/insn.h> /* __ignore_sync_check__ */
++#include <asm/unaligned.h> /* __ignore_sync_check__ */
+ 
+ #include <linux/errno.h>
+ #include <linux/kconfig.h>
+@@ -37,10 +38,10 @@
+ 	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
+ 
+ #define __get_next(t, insn)	\
+-	({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
++	({ t r = get_unaligned((t *)(insn)->next_byte); (insn)->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ 
+ #define __peek_nbyte_next(t, insn, n)	\
+-	({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
++	({ t r = get_unaligned((t *)(insn)->next_byte + n); leXX_to_cpu(t, r); })
+ 
+ #define get_next(t, insn)	\
+ 	({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
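+
[Annotation: insn->next_byte carries no alignment guarantee, so the old memcpy() form was the portable-but-verbose spelling of an unaligned load; get_unaligned() states the same thing directly and still compiles to a single load where the architecture permits. What __get_next(u32, insn) now does for a 4-byte immediate, open-coded as a sketch:

	u32 imm = get_unaligned((u32 *)insn->next_byte); /* any alignment OK */
	insn->next_byte += sizeof(u32);
	imm = le32_to_cpu(imm);		/* the leXX_to_cpu(u32, ...) step */
]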
+diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
+index f5e1e60c9095f..6c2f1b76a0b61 100644
+--- a/arch/x86/mm/cpu_entry_area.c
++++ b/arch/x86/mm/cpu_entry_area.c
+@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
+ 	cea_map_stack(NMI);
+ 	cea_map_stack(DB);
+ 	cea_map_stack(MCE);
++
++	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
++		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
++			cea_map_stack(VC);
++			cea_map_stack(VC2);
++		}
++	}
+ }
+ #else
+ static inline void percpu_setup_exception_stacks(unsigned int cpu)
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 84a2c8c4af735..4bfed53e210ec 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -32,6 +32,7 @@
+ #include <asm/pgtable_areas.h>		/* VMALLOC_START, ...		*/
+ #include <asm/kvm_para.h>		/* kvm_handle_async_pf		*/
+ #include <asm/vdso.h>			/* fixup_vdso_exception()	*/
++#include <asm/irq_stack.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <asm/trace/exceptions.h>
+@@ -631,6 +632,9 @@ static noinline void
+ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
+ 		unsigned long address)
+ {
++#ifdef CONFIG_VMAP_STACK
++	struct stack_info info;
++#endif
+ 	unsigned long flags;
+ 	int sig;
+ 
+@@ -649,9 +653,7 @@ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
+ 	 * that we're in vmalloc space to avoid this.
+ 	 */
+ 	if (is_vmalloc_addr((void *)address) &&
+-	    (((unsigned long)current->stack - 1 - address < PAGE_SIZE) ||
+-	     address - ((unsigned long)current->stack + THREAD_SIZE) < PAGE_SIZE)) {
+-		unsigned long stack = __this_cpu_ist_top_va(DF) - sizeof(void *);
++	    get_stack_guard_info((void *)address, &info)) {
+ 		/*
+ 		 * We're likely to be running with very little stack space
+ 		 * left.  It's plausible that we'd hit this condition but
+@@ -662,13 +664,11 @@ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
+ 		 * and then double-fault, though, because we're likely to
+ 		 * break the console driver and lose most of the stack dump.
+ 		 */
+-		asm volatile ("movq %[stack], %%rsp\n\t"
+-			      "call handle_stack_overflow\n\t"
+-			      "1: jmp 1b"
+-			      : ASM_CALL_CONSTRAINT
+-			      : "D" ("kernel stack overflow (page fault)"),
+-				"S" (regs), "d" (address),
+-				[stack] "rm" (stack));
++		call_on_stack(__this_cpu_ist_top_va(DF) - sizeof(void*),
++			      handle_stack_overflow,
++			      ASM_CALL_ARG3,
++			      , [arg1] "r" (regs), [arg2] "r" (address), [arg3] "r" (&info));
++
+ 		unreachable();
+ 	}
+ #endif
+diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
+index ff08dc4636347..e29b1418d00c7 100644
+--- a/arch/x86/mm/mem_encrypt.c
++++ b/arch/x86/mm/mem_encrypt.c
+@@ -20,6 +20,7 @@
+ #include <linux/bitops.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/virtio_config.h>
++#include <linux/cc_platform.h>
+ 
+ #include <asm/tlbflush.h>
+ #include <asm/fixmap.h>
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 470b202084306..700ce8fdea87c 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -27,6 +27,15 @@
+ #undef CONFIG_PARAVIRT_XXL
+ #undef CONFIG_PARAVIRT_SPINLOCKS
+ 
++/*
++ * This code runs before CPU feature bits are set. By default, the
++ * pgtable_l5_enabled() function uses bit X86_FEATURE_LA57 to determine if
++ * 5-level paging is active, so that won't work here. USE_EARLY_PGTABLE_L5
++ * is provided to handle this situation and, instead, use a variable that
++ * has been set by the early boot code.
++ */
++#define USE_EARLY_PGTABLE_L5
++
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/mem_encrypt.h>
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index 7f63aca6a0d34..ad15fbc572838 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -215,7 +215,7 @@ struct mm_struct;
+ /* Free all resources held by a thread. */
+ #define release_thread(thread) do { } while(0)
+ 
+-extern unsigned long get_wchan(struct task_struct *p);
++extern unsigned long __get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->areg[1])
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 0601653406123..47f933fed8700 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -298,15 +298,12 @@ int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
+  * These bracket the sleeping functions..
+  */
+ 
+-unsigned long get_wchan(struct task_struct *p)
++unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long sp, pc;
+ 	unsigned long stack_page = (unsigned long) task_stack_page(p);
+ 	int count = 0;
+ 
+-	if (!p || p == current || task_is_running(p))
+-		return 0;
+-
+ 	sp = p->thread.sp;
+ 	pc = MAKE_PC_FROM_RA(p->thread.ra, p->thread.sp);
+ 
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 8e4dcf6036f60..8d9041e0f4bec 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -634,6 +634,14 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
+ 
+ 	q = bdev->bd_disk->queue;
+ 
++	/*
++	 * blkcg_deactivate_policy() requires the queue to be frozen; grab
++	 * q_usage_counter to prevent racing with blkcg_deactivate_policy().
++	 */
++	ret = blk_queue_enter(q, 0);
++	if (ret)
++		return ret;
++
+ 	rcu_read_lock();
+ 	spin_lock_irq(&q->queue_lock);
+ 
+@@ -703,6 +711,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
+ 			goto success;
+ 	}
+ success:
++	blk_queue_exit(q);
+ 	ctx->bdev = bdev;
+ 	ctx->blkg = blkg;
+ 	ctx->body = input;
+@@ -715,6 +724,7 @@ fail_unlock:
+ 	rcu_read_unlock();
+ fail:
+ 	blkdev_put_no_open(bdev);
++	blk_queue_exit(q);
+ 	/*
+ 	 * If queue was bypassing, we should retry.  Do so after a
+ 	 * short msleep().  It isn't strictly necessary but queue
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9c64f0025a562..5a5dbb3d08759 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -756,7 +756,6 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
+ 	/* this request will be re-inserted to io scheduler queue */
+ 	blk_mq_sched_requeue_request(rq);
+ 
+-	BUG_ON(!list_empty(&rq->queuelist));
+ 	blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
+ }
+ EXPORT_SYMBOL(blk_mq_requeue_request);
+@@ -1318,6 +1317,7 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 	int errors, queued;
+ 	blk_status_t ret = BLK_STS_OK;
+ 	LIST_HEAD(zone_list);
++	bool needs_resource = false;
+ 
+ 	if (list_empty(list))
+ 		return false;
+@@ -1363,6 +1363,8 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 			queued++;
+ 			break;
+ 		case BLK_STS_RESOURCE:
++			needs_resource = true;
++			fallthrough;
+ 		case BLK_STS_DEV_RESOURCE:
+ 			blk_mq_handle_dev_resource(rq, list);
+ 			goto out;
+@@ -1373,6 +1375,7 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ 			 * accept.
+ 			 */
+ 			blk_mq_handle_zone_resource(rq, &zone_list);
++			needs_resource = true;
+ 			break;
+ 		default:
+ 			errors++;
+@@ -1399,7 +1402,6 @@ out:
+ 		/* For non-shared tags, the RESTART check will suffice */
+ 		bool no_tag = prep == PREP_DISPATCH_NO_TAG &&
+ 			(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED);
+-		bool no_budget_avail = prep == PREP_DISPATCH_NO_BUDGET;
+ 
+ 		if (nr_budgets)
+ 			blk_mq_release_budgets(q, list);
+@@ -1440,14 +1442,16 @@ out:
+ 		 * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
+ 		 * bit is set, run queue after a delay to avoid IO stalls
+ 		 * that could otherwise occur if the queue is idle.  We'll do
+-		 * similar if we couldn't get budget and SCHED_RESTART is set.
++		 * similar if we couldn't get budget or couldn't lock a zone
++		 * and SCHED_RESTART is set.
+ 		 */
+ 		needs_restart = blk_mq_sched_needs_restart(hctx);
++		if (prep == PREP_DISPATCH_NO_BUDGET)
++			needs_resource = true;
+ 		if (!needs_restart ||
+ 		    (no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
+ 			blk_mq_run_hw_queue(hctx, true);
+-		else if (needs_restart && (ret == BLK_STS_RESOURCE ||
+-					   no_budget_avail))
++		else if (needs_restart && needs_resource)
+ 			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
+ 
+ 		blk_mq_update_dispatch_busy(hctx, true);
+@@ -2136,14 +2140,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+ }
+ 
+ /*
+- * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
++ * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
+  * queues. This is important for md arrays to benefit from merging
+  * requests.
+  */
+ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
+ {
+ 	if (plug->multiple_queues)
+-		return BLK_MAX_REQUEST_COUNT * 4;
++		return BLK_MAX_REQUEST_COUNT * 2;
+ 	return BLK_MAX_REQUEST_COUNT;
+ }
+ 
+diff --git a/block/blk.h b/block/blk.h
+index f10cc9b2c27f7..1af7c13ccc708 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -181,6 +181,12 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
+ void blk_account_io_start(struct request *req);
+ void blk_account_io_done(struct request *req, u64 now);
+ 
++/*
++ * Plug flush limits
++ */
++#define BLK_MAX_REQUEST_COUNT	32
++#define BLK_PLUG_FLUSH_SIZE	(128 * 1024)
++
+ /*
+  * Internal elevator interface
+  */
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 64b772c5d1c9b..46129f49a38c3 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -233,12 +233,12 @@ config CRYPTO_DH
+ 
+ config CRYPTO_ECC
+ 	tristate
++	select CRYPTO_RNG_DEFAULT
+ 
+ config CRYPTO_ECDH
+ 	tristate "ECDH algorithm"
+ 	select CRYPTO_ECC
+ 	select CRYPTO_KPP
+-	select CRYPTO_RNG_DEFAULT
+ 	help
+ 	  Generic implementation of the ECDH algorithm
+ 
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index d569c7ed6c800..9d10b846ccf73 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -78,12 +78,14 @@ static void pcrypt_aead_enc(struct padata_priv *padata)
+ {
+ 	struct pcrypt_request *preq = pcrypt_padata_request(padata);
+ 	struct aead_request *req = pcrypt_request_ctx(preq);
++	int ret;
+ 
+-	padata->info = crypto_aead_encrypt(req);
++	ret = crypto_aead_encrypt(req);
+ 
+-	if (padata->info == -EINPROGRESS)
++	if (ret == -EINPROGRESS)
+ 		return;
+ 
++	padata->info = ret;
+ 	padata_do_serial(padata);
+ }
+ 
+@@ -123,12 +125,14 @@ static void pcrypt_aead_dec(struct padata_priv *padata)
+ {
+ 	struct pcrypt_request *preq = pcrypt_padata_request(padata);
+ 	struct aead_request *req = pcrypt_request_ctx(preq);
++	int ret;
+ 
+-	padata->info = crypto_aead_decrypt(req);
++	ret = crypto_aead_decrypt(req);
+ 
+-	if (padata->info == -EINPROGRESS)
++	if (ret == -EINPROGRESS)
+ 		return;
+ 
++	padata->info = ret;
+ 	padata_do_serial(padata);
+ }
+ 
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 6863e57b088d5..54cf01020b435 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1333,7 +1333,7 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
+ 
+ 			if (bs > XBUFSIZE * PAGE_SIZE) {
+ 				pr_err("template (%u) too big for buffer (%lu)\n",
+-				       *b_size, XBUFSIZE * PAGE_SIZE);
++				       bs, XBUFSIZE * PAGE_SIZE);
+ 				goto out;
+ 			}
+ 
+@@ -1386,8 +1386,7 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
+ 				memset(cur->xbuf[p], 0xff, k);
+ 
+ 				skcipher_request_set_crypt(cur->req, cur->sg,
+-							   cur->sg, *b_size,
+-							   iv);
++							   cur->sg, bs, iv);
+ 			}
+ 
+ 			if (secs) {
+diff --git a/drivers/acpi/ac.c b/drivers/acpi/ac.c
+index b0cb662233f1a..81aff651a0d49 100644
+--- a/drivers/acpi/ac.c
++++ b/drivers/acpi/ac.c
+@@ -61,6 +61,7 @@ static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
+ 
+ static int ac_sleep_before_get_state_ms;
+ static int ac_check_pmic = 1;
++static int ac_only;
+ 
+ static struct acpi_driver acpi_ac_driver = {
+ 	.name = "ac",
+@@ -93,6 +94,11 @@ static int acpi_ac_get_state(struct acpi_ac *ac)
+ 	if (!ac)
+ 		return -EINVAL;
+ 
++	if (ac_only) {
++		ac->state = 1;
++		return 0;
++	}
++
+ 	status = acpi_evaluate_integer(ac->device->handle, "_PSR", NULL,
+ 				       &ac->state);
+ 	if (ACPI_FAILURE(status)) {
+@@ -200,6 +206,12 @@ static int __init ac_do_not_check_pmic_quirk(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++static int __init ac_only_quirk(const struct dmi_system_id *d)
++{
++	ac_only = 1;
++	return 0;
++}
++
+ /* Please keep this list alphabetically sorted */
+ static const struct dmi_system_id ac_dmi_table[]  __initconst = {
+ 	{
+@@ -209,6 +221,13 @@ static const struct dmi_system_id ac_dmi_table[]  __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+ 		},
+ 	},
++	{
++		/* Kodlix GK45 returning incorrect state */
++		.callback = ac_only_quirk,
++		.matches = {
++			DMI_MATCH(DMI_PRODUCT_NAME, "GK45"),
++		},
++	},
+ 	{
+ 		/* Lenovo Ideapad Miix 320, AXP288 PMIC, separate fuel-gauge */
+ 		.callback = ac_do_not_check_pmic_quirk,
+diff --git a/drivers/acpi/acpica/acglobal.h b/drivers/acpi/acpica/acglobal.h
+index d41b810e367c4..4366d36ef1198 100644
+--- a/drivers/acpi/acpica/acglobal.h
++++ b/drivers/acpi/acpica/acglobal.h
+@@ -226,6 +226,8 @@ extern struct acpi_bit_register_info
+     acpi_gbl_bit_register_info[ACPI_NUM_BITREG];
+ ACPI_GLOBAL(u8, acpi_gbl_sleep_type_a);
+ ACPI_GLOBAL(u8, acpi_gbl_sleep_type_b);
++ACPI_GLOBAL(u8, acpi_gbl_sleep_type_a_s0);
++ACPI_GLOBAL(u8, acpi_gbl_sleep_type_b_s0);
+ 
+ /*****************************************************************************
+  *
+diff --git a/drivers/acpi/acpica/hwesleep.c b/drivers/acpi/acpica/hwesleep.c
+index 803402aefaeb6..808fdf54aeebf 100644
+--- a/drivers/acpi/acpica/hwesleep.c
++++ b/drivers/acpi/acpica/hwesleep.c
+@@ -147,17 +147,13 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
+ 
+ acpi_status acpi_hw_extended_wake_prep(u8 sleep_state)
+ {
+-	acpi_status status;
+ 	u8 sleep_type_value;
+ 
+ 	ACPI_FUNCTION_TRACE(hw_extended_wake_prep);
+ 
+-	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
+-					  &acpi_gbl_sleep_type_a,
+-					  &acpi_gbl_sleep_type_b);
+-	if (ACPI_SUCCESS(status)) {
++	if (acpi_gbl_sleep_type_a_s0 != ACPI_SLEEP_TYPE_INVALID) {
+ 		sleep_type_value =
+-		    ((acpi_gbl_sleep_type_a << ACPI_X_SLEEP_TYPE_POSITION) &
++		    ((acpi_gbl_sleep_type_a_s0 << ACPI_X_SLEEP_TYPE_POSITION) &
+ 		     ACPI_X_SLEEP_TYPE_MASK);
+ 
+ 		(void)acpi_write((u64)(sleep_type_value | ACPI_X_SLEEP_ENABLE),
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index 14baa13bf8482..34a3825f25d37 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -179,7 +179,7 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 
+ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ {
+-	acpi_status status;
++	acpi_status status = AE_OK;
+ 	struct acpi_bit_register_info *sleep_type_reg_info;
+ 	struct acpi_bit_register_info *sleep_enable_reg_info;
+ 	u32 pm1a_control;
+@@ -192,10 +192,7 @@ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ 	 * This is unclear from the ACPI Spec, but it is required
+ 	 * by some machines.
+ 	 */
+-	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
+-					  &acpi_gbl_sleep_type_a,
+-					  &acpi_gbl_sleep_type_b);
+-	if (ACPI_SUCCESS(status)) {
++	if (acpi_gbl_sleep_type_a_s0 != ACPI_SLEEP_TYPE_INVALID) {
+ 		sleep_type_reg_info =
+ 		    acpi_hw_get_bit_register_info(ACPI_BITREG_SLEEP_TYPE);
+ 		sleep_enable_reg_info =
+@@ -216,9 +213,9 @@ acpi_status acpi_hw_legacy_wake_prep(u8 sleep_state)
+ 
+ 			/* Insert the SLP_TYP bits */
+ 
+-			pm1a_control |= (acpi_gbl_sleep_type_a <<
++			pm1a_control |= (acpi_gbl_sleep_type_a_s0 <<
+ 					 sleep_type_reg_info->bit_position);
+-			pm1b_control |= (acpi_gbl_sleep_type_b <<
++			pm1b_control |= (acpi_gbl_sleep_type_b_s0 <<
+ 					 sleep_type_reg_info->bit_position);
+ 
+ 			/* Write the control registers and ignore any errors */
+diff --git a/drivers/acpi/acpica/hwxfsleep.c b/drivers/acpi/acpica/hwxfsleep.c
+index 89b12afed564e..e4cde23a29061 100644
+--- a/drivers/acpi/acpica/hwxfsleep.c
++++ b/drivers/acpi/acpica/hwxfsleep.c
+@@ -217,6 +217,13 @@ acpi_status acpi_enter_sleep_state_prep(u8 sleep_state)
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
++	status = acpi_get_sleep_type_data(ACPI_STATE_S0,
++					  &acpi_gbl_sleep_type_a_s0,
++					  &acpi_gbl_sleep_type_b_s0);
++	if (ACPI_FAILURE(status)) {
++		acpi_gbl_sleep_type_a_s0 = ACPI_SLEEP_TYPE_INVALID;
++	}
++
+ 	/* Execute the _PTS method (Prepare To Sleep) */
+ 
+ 	arg_list.count = 1;
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index dae91f906cea9..8afa85d6eb6a7 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -169,7 +169,7 @@ static int acpi_battery_is_charged(struct acpi_battery *battery)
+ 		return 1;
+ 
+ 	/* fallback to using design values for broken batteries */
+-	if (battery->design_capacity == battery->capacity_now)
++	if (battery->design_capacity <= battery->capacity_now)
+ 		return 1;
+ 
+ 	/* we don't do any sort of metric based on percentages */
+diff --git a/drivers/acpi/glue.c b/drivers/acpi/glue.c
+index fce3f3bba714a..3fd1713f1f626 100644
+--- a/drivers/acpi/glue.c
++++ b/drivers/acpi/glue.c
+@@ -363,3 +363,28 @@ int acpi_platform_notify(struct device *dev, enum kobject_action action)
+ 	}
+ 	return 0;
+ }
++
++int acpi_dev_turn_off_if_unused(struct device *dev, void *not_used)
++{
++	struct acpi_device *adev = to_acpi_device(dev);
++
++	/*
++	 * Skip device objects with device IDs, because they may be in use even
++	 * if they are not companions of any physical device objects.
++	 */
++	if (adev->pnp.type.hardware_id)
++		return 0;
++
++	mutex_lock(&adev->physical_node_lock);
++
++	/*
++	 * Device objects without device IDs are not in use if they have no
++	 * corresponding physical device objects.
++	 */
++	if (list_empty(&adev->physical_node_list))
++		acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
++
++	mutex_unlock(&adev->physical_node_lock);
++
++	return 0;
++}
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index d91b560e88674..8fbdc172864b0 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -117,6 +117,7 @@ bool acpi_device_is_battery(struct acpi_device *adev);
+ bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+ 					const struct device *dev);
+ int acpi_bus_register_early_device(int type);
++int acpi_dev_turn_off_if_unused(struct device *dev, void *not_used);
+ 
+ /* --------------------------------------------------------------------------
+                      Device Matching and Notification
+diff --git a/drivers/acpi/pmic/intel_pmic.c b/drivers/acpi/pmic/intel_pmic.c
+index a371f273f99dd..9cde299eba880 100644
+--- a/drivers/acpi/pmic/intel_pmic.c
++++ b/drivers/acpi/pmic/intel_pmic.c
+@@ -211,31 +211,36 @@ static acpi_status intel_pmic_regs_handler(u32 function,
+ 		void *handler_context, void *region_context)
+ {
+ 	struct intel_pmic_opregion *opregion = region_context;
+-	int result = 0;
++	int result = -EINVAL;
++
++	if (function == ACPI_WRITE) {
++		switch (address) {
++		case 0:
++			return AE_OK;
++		case 1:
++			opregion->ctx.addr |= (*value64 & 0xff) << 8;
++			return AE_OK;
++		case 2:
++			opregion->ctx.addr |= *value64 & 0xff;
++			return AE_OK;
++		case 3:
++			opregion->ctx.val = *value64 & 0xff;
++			return AE_OK;
++		case 4:
++			if (*value64) {
++				result = regmap_write(opregion->regmap, opregion->ctx.addr,
++						      opregion->ctx.val);
++			} else {
++				result = regmap_read(opregion->regmap, opregion->ctx.addr,
++						     &opregion->ctx.val);
++			}
++			opregion->ctx.addr = 0;
++		}
++	}
+ 
+-	switch (address) {
+-	case 0:
+-		return AE_OK;
+-	case 1:
+-		opregion->ctx.addr |= (*value64 & 0xff) << 8;
+-		return AE_OK;
+-	case 2:
+-		opregion->ctx.addr |= *value64 & 0xff;
++	if (function == ACPI_READ && address == 3) {
++		*value64 = opregion->ctx.val;
+ 		return AE_OK;
+-	case 3:
+-		opregion->ctx.val = *value64 & 0xff;
+-		return AE_OK;
+-	case 4:
+-		if (*value64) {
+-			result = regmap_write(opregion->regmap, opregion->ctx.addr,
+-					      opregion->ctx.val);
+-		} else {
+-			result = regmap_read(opregion->regmap, opregion->ctx.addr,
+-					     &opregion->ctx.val);
+-			if (result == 0)
+-				*value64 = opregion->ctx.val;
+-		}
+-		memset(&opregion->ctx, 0x00, sizeof(opregion->ctx));
+ 	}
+ 
+ 	if (result < 0) {
+diff --git a/drivers/acpi/power.c b/drivers/acpi/power.c
+index eba7785047cad..36c554a1cfbf4 100644
+--- a/drivers/acpi/power.c
++++ b/drivers/acpi/power.c
+@@ -53,7 +53,6 @@ struct acpi_power_resource {
+ 	u32 order;
+ 	unsigned int ref_count;
+ 	u8 state;
+-	bool wakeup_enabled;
+ 	struct mutex resource_lock;
+ 	struct list_head dependents;
+ };
+@@ -606,20 +605,19 @@ int acpi_power_wakeup_list_init(struct list_head *list, int *system_level_p)
+ 
+ 	list_for_each_entry(entry, list, node) {
+ 		struct acpi_power_resource *resource = entry->resource;
+-		int result;
+ 		u8 state;
+ 
+ 		mutex_lock(&resource->resource_lock);
+ 
+-		result = acpi_power_get_state(resource, &state);
+-		if (result) {
+-			mutex_unlock(&resource->resource_lock);
+-			return result;
+-		}
+-		if (state == ACPI_POWER_RESOURCE_STATE_ON) {
+-			resource->ref_count++;
+-			resource->wakeup_enabled = true;
+-		}
++		/*
++		 * Make sure that the power resource state and its reference
++		 * counter value are consistent with each other.
++		 */
++		if (!resource->ref_count &&
++		    !acpi_power_get_state(resource, &state) &&
++		    state == ACPI_POWER_RESOURCE_STATE_ON)
++			__acpi_power_off(resource);
++
+ 		if (system_level > resource->system_level)
+ 			system_level = resource->system_level;
+ 
+@@ -702,7 +700,6 @@ int acpi_device_sleep_wake(struct acpi_device *dev,
+  */
+ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
+ {
+-	struct acpi_power_resource_entry *entry;
+ 	int err = 0;
+ 
+ 	if (!dev || !dev->wakeup.flags.valid)
+@@ -713,26 +710,13 @@ int acpi_enable_wakeup_device_power(struct acpi_device *dev, int sleep_state)
+ 	if (dev->wakeup.prepare_count++)
+ 		goto out;
+ 
+-	list_for_each_entry(entry, &dev->wakeup.resources, node) {
+-		struct acpi_power_resource *resource = entry->resource;
+-
+-		mutex_lock(&resource->resource_lock);
+-
+-		if (!resource->wakeup_enabled) {
+-			err = acpi_power_on_unlocked(resource);
+-			if (!err)
+-				resource->wakeup_enabled = true;
+-		}
+-
+-		mutex_unlock(&resource->resource_lock);
+-
+-		if (err) {
+-			dev_err(&dev->dev,
+-				"Cannot turn wakeup power resources on\n");
+-			dev->wakeup.flags.valid = 0;
+-			goto out;
+-		}
++	err = acpi_power_on_list(&dev->wakeup.resources);
++	if (err) {
++		dev_err(&dev->dev, "Cannot turn on wakeup power resources\n");
++		dev->wakeup.flags.valid = 0;
++		goto out;
+ 	}
++
+ 	/*
+ 	 * Passing 3 as the third argument below means the device may be
+ 	 * put into arbitrary power state afterward.
+@@ -762,39 +746,31 @@ int acpi_disable_wakeup_device_power(struct acpi_device *dev)
+ 
+ 	mutex_lock(&acpi_device_lock);
+ 
+-	if (--dev->wakeup.prepare_count > 0)
++	/* Do nothing if wakeup power has not been enabled for this device. */
++	if (dev->wakeup.prepare_count <= 0)
+ 		goto out;
+ 
+-	/*
+-	 * Executing the code below even if prepare_count is already zero when
+-	 * the function is called may be useful, for example for initialisation.
+-	 */
+-	if (dev->wakeup.prepare_count < 0)
+-		dev->wakeup.prepare_count = 0;
++	if (--dev->wakeup.prepare_count > 0)
++		goto out;
+ 
+ 	err = acpi_device_sleep_wake(dev, 0, 0, 0);
+ 	if (err)
+ 		goto out;
+ 
++	/*
++	 * All of the power resources in the list need to be turned off even if
++	 * there are errors.
++	 */
+ 	list_for_each_entry(entry, &dev->wakeup.resources, node) {
+-		struct acpi_power_resource *resource = entry->resource;
+-
+-		mutex_lock(&resource->resource_lock);
+-
+-		if (resource->wakeup_enabled) {
+-			err = acpi_power_off_unlocked(resource);
+-			if (!err)
+-				resource->wakeup_enabled = false;
+-		}
+-
+-		mutex_unlock(&resource->resource_lock);
++		int ret;
+ 
+-		if (err) {
+-			dev_err(&dev->dev,
+-				"Cannot turn wakeup power resources off\n");
+-			dev->wakeup.flags.valid = 0;
+-			break;
+-		}
++		ret = acpi_power_off(entry->resource);
++		if (ret && !err)
++			err = ret;
++	}
++	if (err) {
++		dev_err(&dev->dev, "Cannot turn off wakeup power resources\n");
++		dev->wakeup.flags.valid = 0;
+ 	}
+ 
+  out:
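
The acpi/power.c changes above drop the per-resource wakeup_enabled flag in favour of the shared refcounting helpers, and the disable path now keeps walking the list after a failure so every resource is still turned off, recording only the first error. A tiny compilable sketch of that first-error-wins loop (the simulated failure value is arbitrary; this is illustration, not part of the patch):

#include <stdio.h>

static int power_off(int id)
{
	return id == 1 ? -5 : 0;	/* resource 1 fails (simulated -EIO) */
}

int main(void)
{
	int err = 0;

	for (int id = 0; id < 3; id++) {
		int ret = power_off(id);	/* always attempted */

		if (ret && !err)
			err = ret;		/* keep only the first error */
	}
	printf("first error: %d\n", err);	/* -5 */
	return err ? 1 : 0;
}
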
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index ee78a210c6068..3c25ce8c95ba1 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -16,6 +16,7 @@
+ #include <linux/ioport.h>
+ #include <linux/slab.h>
+ #include <linux/irq.h>
++#include <linux/dmi.h>
+ 
+ #ifdef CONFIG_X86
+ #define valid_IRQ(i) (((i) != 0) && ((i) != 2))
+@@ -380,9 +381,58 @@ unsigned int acpi_dev_get_irq_type(int triggering, int polarity)
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_get_irq_type);
+ 
++static const struct dmi_system_id medion_laptop[] = {
++	{
++		.ident = "MEDION P15651",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
++			DMI_MATCH(DMI_BOARD_NAME, "M15T"),
++		},
++	},
++	{
++		.ident = "MEDION S17405",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
++			DMI_MATCH(DMI_BOARD_NAME, "M17T"),
++		},
++	},
++	{ }
++};
++
++struct irq_override_cmp {
++	const struct dmi_system_id *system;
++	unsigned char irq;
++	unsigned char triggering;
++	unsigned char polarity;
++	unsigned char shareable;
++};
++
++static const struct irq_override_cmp skip_override_table[] = {
++	{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 },
++};
++
++static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
++				  u8 shareable)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) {
++		const struct irq_override_cmp *entry = &skip_override_table[i];
++
++		if (dmi_check_system(entry->system) &&
++		    entry->irq == gsi &&
++		    entry->triggering == triggering &&
++		    entry->polarity == polarity &&
++		    entry->shareable == shareable)
++			return false;
++	}
++
++	return true;
++}
++
+ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 				     u8 triggering, u8 polarity, u8 shareable,
+-				     bool legacy)
++				     bool check_override)
+ {
+ 	int irq, p, t;
+ 
+@@ -401,7 +451,9 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	 * using extended IRQ descriptors we take the IRQ configuration
+ 	 * from _CRS directly.
+ 	 */
+-	if (legacy && !acpi_get_override_irq(gsi, &t, &p)) {
++	if (check_override &&
++	    acpi_dev_irq_override(gsi, triggering, polarity, shareable) &&
++	    !acpi_get_override_irq(gsi, &t, &p)) {
+ 		u8 trig = t ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE;
+ 		u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH;
+ 
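
The resource.c hunk adds a quirk table: on DMI-matched machines, a given GSI with a specific trigger/polarity/shareable combination skips the legacy IRQ override and keeps the _CRS settings. The mechanism is a linear scan over match entries, returning "skip" on the first full match. Below is a self-contained userspace C sketch of that shape; the board string and entry values are invented stand-ins for the DMI match and it is not part of the patch:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a DMI match: here just a board name string. */
struct override_quirk {
	const char *board;	/* machine this entry applies to */
	unsigned char irq;	/* GSI the quirk matches */
	unsigned char level;	/* 1 = level-sensitive, 0 = edge */
	unsigned char active_low;
};

static const struct override_quirk skip_table[] = {
	{ "M15T", 1, 1, 1 },	/* example entry, values assumed */
};

/* Return true when the normal override should run, false to skip it. */
static bool irq_override_allowed(const char *board, unsigned char irq,
				 unsigned char level, unsigned char active_low)
{
	for (size_t i = 0; i < sizeof(skip_table) / sizeof(skip_table[0]); i++) {
		const struct override_quirk *q = &skip_table[i];

		if (!strcmp(q->board, board) && q->irq == irq &&
		    q->level == level && q->active_low == active_low)
			return false;	/* quirk hit: keep _CRS settings */
	}
	return true;
}

int main(void)
{
	printf("%d\n", irq_override_allowed("M15T", 1, 1, 1));	/* 0: skipped */
	printf("%d\n", irq_override_allowed("OTHER", 1, 1, 1));	/* 1 */
	return 0;
}
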
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index b24513ec3fae1..ae9464091f1b1 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -2560,6 +2560,12 @@ int __init acpi_scan_init(void)
+ 		}
+ 	}
+ 
++	/*
++	 * Make sure that power management resources are not blocked by ACPI
++	 * device objects with no users.
++	 */
++	bus_for_each_dev(&acpi_bus_type, NULL, NULL, acpi_dev_turn_off_if_unused);
++
+ 	acpi_turn_off_unused_power_resources();
+ 
+ 	acpi_scan_initialized = true;
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 8916163d508e0..8acf99b88b21e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2004,7 +2004,7 @@ unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 
+ retry:
+ 	ata_tf_init(dev, &tf);
+-	if (dev->dma_mode && ata_id_has_read_log_dma_ext(dev->id) &&
++	if (ata_dma_enabled(dev) && ata_id_has_read_log_dma_ext(dev->id) &&
+ 	    !(dev->horkage & ATA_HORKAGE_NO_DMA_LOG)) {
+ 		tf.command = ATA_CMD_READ_LOG_DMA_EXT;
+ 		tf.protocol = ATA_PROT_DMA;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index bb3637762985a..9347354258941 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -93,6 +93,12 @@ static const unsigned long ata_eh_identify_timeouts[] = {
+ 	ULONG_MAX,
+ };
+ 
++static const unsigned long ata_eh_revalidate_timeouts[] = {
++	15000,	/* Some drives are slow to read log pages when waking-up */
++	15000,  /* combined time till here is enough even for media access */
++	ULONG_MAX,
++};
++
+ static const unsigned long ata_eh_flush_timeouts[] = {
+ 	15000,	/* be generous with flush */
+ 	15000,  /* ditto */
+@@ -129,6 +135,8 @@ static const struct ata_eh_cmd_timeout_ent
+ ata_eh_cmd_timeout_table[ATA_EH_CMD_TIMEOUT_TABLE_SIZE] = {
+ 	{ .commands = CMDS(ATA_CMD_ID_ATA, ATA_CMD_ID_ATAPI),
+ 	  .timeouts = ata_eh_identify_timeouts, },
++	{ .commands = CMDS(ATA_CMD_READ_LOG_EXT, ATA_CMD_READ_LOG_DMA_EXT),
++	  .timeouts = ata_eh_revalidate_timeouts, },
+ 	{ .commands = CMDS(ATA_CMD_READ_NATIVE_MAX, ATA_CMD_READ_NATIVE_MAX_EXT),
+ 	  .timeouts = ata_eh_other_timeouts, },
+ 	{ .commands = CMDS(ATA_CMD_SET_MAX, ATA_CMD_SET_MAX_EXT),
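
The libata-eh.c hunk gives READ LOG (DMA) EXT its own escalating timeout sequence so drives that are slow to serve log pages after waking get generous retries. The mechanism is a table mapping command opcodes to a sentinel-terminated list of per-attempt timeouts; here is a compilable sketch of such a lookup (opcodes and values are illustrative, not the kernel's tables):

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

#define END ULONG_MAX	/* sentinel terminating a timeout sequence */

/* Escalating per-attempt timeouts in milliseconds (values assumed). */
static const unsigned long identify_timeouts[]   = { 5000, 10000, END };
static const unsigned long revalidate_timeouts[] = { 15000, 15000, END };

struct cmd_timeouts {
	unsigned char cmd;		/* opcode this row covers */
	const unsigned long *timeouts;	/* sentinel-terminated list */
};

static const struct cmd_timeouts table[] = {
	{ 0xEC, identify_timeouts },	/* IDENTIFY DEVICE */
	{ 0x2F, revalidate_timeouts },	/* READ LOG EXT */
};

/* Timeout for the Nth retry of a command, or 0 when out of retries. */
static unsigned long retry_timeout(unsigned char cmd, int attempt)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].cmd != cmd)
			continue;
		for (int n = 0; table[i].timeouts[n] != END; n++)
			if (n == attempt)
				return table[i].timeouts[n];
		return 0;	/* sequence exhausted */
	}
	return 0;
}

int main(void)
{
	printf("%lu\n", retry_timeout(0x2F, 0));	/* 15000 */
	printf("%lu\n", retry_timeout(0x2F, 2));	/* 0: no third retry */
	return 0;
}
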
+diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
+index 1e69cc6d21a0d..ed58083499907 100644
+--- a/drivers/auxdisplay/ht16k33.c
++++ b/drivers/auxdisplay/ht16k33.c
+@@ -219,6 +219,15 @@ static const struct backlight_ops ht16k33_bl_ops = {
+ 	.check_fb	= ht16k33_bl_check_fb,
+ };
+ 
++/*
++ * Blank events will be passed to the actual device handling the backlight when
++ * we return zero here.
++ */
++static int ht16k33_blank(int blank, struct fb_info *info)
++{
++	return 0;
++}
++
+ static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
+ {
+ 	struct ht16k33_priv *priv = info->par;
+@@ -231,6 +240,7 @@ static const struct fb_ops ht16k33_fb_ops = {
+ 	.owner = THIS_MODULE,
+ 	.fb_read = fb_sys_read,
+ 	.fb_write = fb_sys_write,
++	.fb_blank = ht16k33_blank,
+ 	.fb_fillrect = sys_fillrect,
+ 	.fb_copyarea = sys_copyarea,
+ 	.fb_imageblit = sys_imageblit,
+@@ -413,6 +423,33 @@ static int ht16k33_probe(struct i2c_client *client,
+ 	if (err)
+ 		return err;
+ 
++	/* Backlight */
++	memset(&bl_props, 0, sizeof(struct backlight_properties));
++	bl_props.type = BACKLIGHT_RAW;
++	bl_props.max_brightness = MAX_BRIGHTNESS;
++
++	bl = devm_backlight_device_register(&client->dev, DRIVER_NAME"-bl",
++					    &client->dev, priv,
++					    &ht16k33_bl_ops, &bl_props);
++	if (IS_ERR(bl)) {
++		dev_err(&client->dev, "failed to register backlight\n");
++		return PTR_ERR(bl);
++	}
++
++	err = of_property_read_u32(node, "default-brightness-level",
++				   &dft_brightness);
++	if (err) {
++		dft_brightness = MAX_BRIGHTNESS;
++	} else if (dft_brightness > MAX_BRIGHTNESS) {
++		dev_warn(&client->dev,
++			 "invalid default brightness level: %u, using %u\n",
++			 dft_brightness, MAX_BRIGHTNESS);
++		dft_brightness = MAX_BRIGHTNESS;
++	}
++
++	bl->props.brightness = dft_brightness;
++	ht16k33_bl_update_status(bl);
++
+ 	/* Framebuffer (2 bytes per column) */
+ 	BUILD_BUG_ON(PAGE_SIZE < HT16K33_FB_SIZE);
+ 	fbdev->buffer = (unsigned char *) get_zeroed_page(GFP_KERNEL);
+@@ -445,6 +482,7 @@ static int ht16k33_probe(struct i2c_client *client,
+ 	fbdev->info->screen_size = HT16K33_FB_SIZE;
+ 	fbdev->info->fix = ht16k33_fb_fix;
+ 	fbdev->info->var = ht16k33_fb_var;
++	fbdev->info->bl_dev = bl;
+ 	fbdev->info->pseudo_palette = NULL;
+ 	fbdev->info->flags = FBINFO_FLAG_DEFAULT;
+ 	fbdev->info->par = priv;
+@@ -460,34 +498,6 @@ static int ht16k33_probe(struct i2c_client *client,
+ 			goto err_fbdev_unregister;
+ 	}
+ 
+-	/* Backlight */
+-	memset(&bl_props, 0, sizeof(struct backlight_properties));
+-	bl_props.type = BACKLIGHT_RAW;
+-	bl_props.max_brightness = MAX_BRIGHTNESS;
+-
+-	bl = devm_backlight_device_register(&client->dev, DRIVER_NAME"-bl",
+-					    &client->dev, priv,
+-					    &ht16k33_bl_ops, &bl_props);
+-	if (IS_ERR(bl)) {
+-		dev_err(&client->dev, "failed to register backlight\n");
+-		err = PTR_ERR(bl);
+-		goto err_fbdev_unregister;
+-	}
+-
+-	err = of_property_read_u32(node, "default-brightness-level",
+-				   &dft_brightness);
+-	if (err) {
+-		dft_brightness = MAX_BRIGHTNESS;
+-	} else if (dft_brightness > MAX_BRIGHTNESS) {
+-		dev_warn(&client->dev,
+-			 "invalid default brightness level: %u, using %u\n",
+-			 dft_brightness, MAX_BRIGHTNESS);
+-		dft_brightness = MAX_BRIGHTNESS;
+-	}
+-
+-	bl->props.brightness = dft_brightness;
+-	ht16k33_bl_update_status(bl);
+-
+ 	ht16k33_fb_queue(priv);
+ 	return 0;
+ 
+diff --git a/drivers/auxdisplay/img-ascii-lcd.c b/drivers/auxdisplay/img-ascii-lcd.c
+index 1cce409ce5cac..e33ce0151cdfd 100644
+--- a/drivers/auxdisplay/img-ascii-lcd.c
++++ b/drivers/auxdisplay/img-ascii-lcd.c
+@@ -280,6 +280,16 @@ static int img_ascii_lcd_display(struct img_ascii_lcd_ctx *ctx,
+ 	if (msg[count - 1] == '\n')
+ 		count--;
+ 
++	if (!count) {
++		/* clear the LCD */
++		devm_kfree(&ctx->pdev->dev, ctx->message);
++		ctx->message = NULL;
++		ctx->message_len = 0;
++		memset(ctx->curr, ' ', ctx->cfg->num_chars);
++		ctx->cfg->update(ctx);
++		return 0;
++	}
++
+ 	new_msg = devm_kmalloc(&ctx->pdev->dev, count + 1, GFP_KERNEL);
+ 	if (!new_msg)
+ 		return -ENOMEM;
+diff --git a/drivers/base/component.c b/drivers/base/component.c
+index 5e79299f6c3ff..870485cbbb87c 100644
+--- a/drivers/base/component.c
++++ b/drivers/base/component.c
+@@ -246,7 +246,7 @@ static int try_to_bring_up_master(struct master *master,
+ 		return 0;
+ 	}
+ 
+-	if (!devres_open_group(master->parent, NULL, GFP_KERNEL))
++	if (!devres_open_group(master->parent, master, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+ 	/* Found all components */
+@@ -258,6 +258,7 @@ static int try_to_bring_up_master(struct master *master,
+ 		return ret;
+ 	}
+ 
++	devres_close_group(master->parent, NULL);
+ 	master->bound = true;
+ 	return 1;
+ }
+@@ -282,7 +283,7 @@ static void take_down_master(struct master *master)
+ {
+ 	if (master->bound) {
+ 		master->ops->unbind(master->parent);
+-		devres_release_group(master->parent, NULL);
++		devres_release_group(master->parent, master);
+ 		master->bound = false;
+ 	}
+ }
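
In component.c the devres group is now opened and released with the master pointer as its identifier instead of NULL, so take_down_master() frees exactly the group opened in try_to_bring_up_master() rather than whichever anonymous group happens to be open. A loose userspace analogy, keying resources to a group id and releasing only matching entries (all names invented, not the devres API):

#include <stdio.h>
#include <stdlib.h>

struct res {
	void *key;		/* group identifier, analogous to the devres id */
	char name[16];
	struct res *next;
};

static struct res *head;

static void res_add(void *key, const char *name)
{
	struct res *r = malloc(sizeof(*r));

	if (!r)
		return;
	r->key = key;
	snprintf(r->name, sizeof(r->name), "%s", name);
	r->next = head;
	head = r;
}

/* Free only the resources belonging to @key, keeping the rest intact. */
static void res_release_group(void *key)
{
	struct res **p = &head;

	while (*p) {
		if ((*p)->key == key) {
			struct res *dead = *p;

			*p = dead->next;
			printf("released %s\n", dead->name);
			free(dead);
		} else {
			p = &(*p)->next;
		}
	}
}

int main(void)
{
	int master_a, master_b;		/* addresses serve as group keys */

	res_add(&master_a, "irq");
	res_add(&master_b, "regmap");
	res_release_group(&master_a);	/* frees "irq" only */
	res_release_group(&master_b);	/* frees "regmap" */
	return 0;
}
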
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index f150ebebb3068..7033d0c33a026 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -809,9 +809,7 @@ struct device_link *device_link_add(struct device *consumer,
+ 		     dev_bus_name(supplier), dev_name(supplier),
+ 		     dev_bus_name(consumer), dev_name(consumer));
+ 	if (device_register(&link->link_dev)) {
+-		put_device(consumer);
+-		put_device(supplier);
+-		kfree(link);
++		put_device(&link->link_dev);
+ 		link = NULL;
+ 		goto out;
+ 	}
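
The core.c hunk fixes error handling after a failed device_register(): once registration has been attempted, the object must be dropped with put_device() so its release callback performs the cleanup, instead of kfree()ing it by hand. A much-simplified sketch of that "put, don't free" rule (the refcount and release hook are minimal stand-ins for the kobject machinery):

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refs;
	void (*release)(struct obj *);
};

static void obj_release(struct obj *o)
{
	printf("released\n");
	free(o);		/* the release path owns all cleanup */
}

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		o->release(o);
}

static int obj_register(struct obj *o)
{
	(void)o;
	return -1;		/* simulate device_register() failing */
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	o->refs = 1;
	o->release = obj_release;

	if (obj_register(o)) {
		obj_put(o);	/* not free(): the release hook runs instead */
		return 1;
	}
	return 0;
}
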
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index cbea78e79f3df..a7fdd86fad057 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -711,6 +711,7 @@ static void dpm_noirq_resume_devices(pm_message_t state)
+ 		dev = to_device(dpm_noirq_list.next);
+ 		get_device(dev);
+ 		list_move_tail(&dev->power.entry, &dpm_late_early_list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		if (!is_async(dev)) {
+@@ -725,8 +726,9 @@ static void dpm_noirq_resume_devices(pm_message_t state)
+ 			}
+ 		}
+ 
+-		mutex_lock(&dpm_list_mtx);
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -852,6 +854,7 @@ void dpm_resume_early(pm_message_t state)
+ 		dev = to_device(dpm_late_early_list.next);
+ 		get_device(dev);
+ 		list_move_tail(&dev->power.entry, &dpm_suspended_list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		if (!is_async(dev)) {
+@@ -865,8 +868,10 @@ void dpm_resume_early(pm_message_t state)
+ 				pm_dev_err(dev, state, " early", error);
+ 			}
+ 		}
+-		mutex_lock(&dpm_list_mtx);
++
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -1029,7 +1034,12 @@ void dpm_resume(pm_message_t state)
+ 		}
+ 		if (!list_empty(&dev->power.entry))
+ 			list_move_tail(&dev->power.entry, &dpm_prepared_list);
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	async_synchronize_full();
+@@ -1051,7 +1061,7 @@ static void device_complete(struct device *dev, pm_message_t state)
+ 	const char *info = NULL;
+ 
+ 	if (dev->power.syscore)
+-		return;
++		goto out;
+ 
+ 	device_lock(dev);
+ 
+@@ -1081,6 +1091,7 @@ static void device_complete(struct device *dev, pm_message_t state)
+ 
+ 	device_unlock(dev);
+ 
++out:
+ 	pm_runtime_put(dev);
+ }
+ 
+@@ -1106,14 +1117,16 @@ void dpm_complete(pm_message_t state)
+ 		get_device(dev);
+ 		dev->power.is_prepared = false;
+ 		list_move(&dev->power.entry, &list);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		trace_device_pm_callback_start(dev, "", state.event);
+ 		device_complete(dev, state);
+ 		trace_device_pm_callback_end(dev, 0);
+ 
+-		mutex_lock(&dpm_list_mtx);
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	list_splice(&list, &dpm_list);
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1298,17 +1311,21 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
+ 		error = device_suspend_noirq(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (error) {
+ 			pm_dev_err(dev, state, " noirq", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+-		}
+-		if (!list_empty(&dev->power.entry))
++		} else if (!list_empty(&dev->power.entry)) {
+ 			list_move(&dev->power.entry, &dpm_noirq_list);
++		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+ 
+-		if (async_error)
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1475,23 +1492,28 @@ int dpm_suspend_late(pm_message_t state)
+ 		struct device *dev = to_device(dpm_suspended_list.prev);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		error = device_suspend_late(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (!list_empty(&dev->power.entry))
+ 			list_move(&dev->power.entry, &dpm_late_early_list);
+ 
+ 		if (error) {
+ 			pm_dev_err(dev, state, " late", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+ 		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+ 
+-		if (async_error)
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1751,21 +1773,27 @@ int dpm_suspend(pm_message_t state)
+ 		struct device *dev = to_device(dpm_prepared_list.prev);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		error = device_suspend(dev);
+ 
+ 		mutex_lock(&dpm_list_mtx);
++
+ 		if (error) {
+ 			pm_dev_err(dev, state, "", error);
+ 			dpm_save_failed_dev(dev_name(dev));
+-			put_device(dev);
+-			break;
+-		}
+-		if (!list_empty(&dev->power.entry))
++		} else if (!list_empty(&dev->power.entry)) {
+ 			list_move(&dev->power.entry, &dpm_suspended_list);
++		}
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
+-		if (async_error)
++
++		mutex_lock(&dpm_list_mtx);
++
++		if (error || async_error)
+ 			break;
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+@@ -1794,9 +1822,6 @@ static int device_prepare(struct device *dev, pm_message_t state)
+ 	int (*callback)(struct device *) = NULL;
+ 	int ret = 0;
+ 
+-	if (dev->power.syscore)
+-		return 0;
+-
+ 	/*
+ 	 * If a device's parent goes into runtime suspend at the wrong time,
+ 	 * it won't be possible to resume the device.  To prevent this we
+@@ -1805,6 +1830,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
+ 	 */
+ 	pm_runtime_get_noresume(dev);
+ 
++	if (dev->power.syscore)
++		return 0;
++
+ 	device_lock(dev);
+ 
+ 	dev->power.wakeup_path = false;
+@@ -1882,6 +1910,7 @@ int dpm_prepare(pm_message_t state)
+ 		struct device *dev = to_device(dpm_list.next);
+ 
+ 		get_device(dev);
++
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+ 		trace_device_pm_callback_start(dev, "", state.event);
+@@ -1889,21 +1918,23 @@ int dpm_prepare(pm_message_t state)
+ 		trace_device_pm_callback_end(dev, error);
+ 
+ 		mutex_lock(&dpm_list_mtx);
+-		if (error) {
+-			if (error == -EAGAIN) {
+-				put_device(dev);
+-				error = 0;
+-				continue;
+-			}
++
++		if (!error) {
++			dev->power.is_prepared = true;
++			if (!list_empty(&dev->power.entry))
++				list_move_tail(&dev->power.entry, &dpm_prepared_list);
++		} else if (error == -EAGAIN) {
++			error = 0;
++		} else {
+ 			dev_info(dev, "not prepared for power transition: code %d\n",
+ 				 error);
+-			put_device(dev);
+-			break;
+ 		}
+-		dev->power.is_prepared = true;
+-		if (!list_empty(&dev->power.entry))
+-			list_move_tail(&dev->power.entry, &dpm_prepared_list);
++
++		mutex_unlock(&dpm_list_mtx);
++
+ 		put_device(dev);
++
++		mutex_lock(&dpm_list_mtx);
+ 	}
+ 	mutex_unlock(&dpm_list_mtx);
+ 	trace_suspend_resume(TPS("dpm_prepare"), state.event, false);
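
Nearly every hunk in drivers/base/power/main.c above applies one pattern: take a reference while dpm_list_mtx is held, drop the mutex before running callbacks and before the final put (both of which may sleep or re-enter the list code), then re-acquire the mutex to advance the walk. A compilable userspace sketch of that discipline using a pthread mutex; a plain int stands in for the kernel's atomic kref and it is illustrative only:

#include <pthread.h>
#include <stdio.h>

struct node {
	int id;
	int refs;		/* an atomic kref in real kernel code */
	struct node *next;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node n2 = { 2, 1, NULL };
static struct node n1 = { 1, 1, &n2 };
static struct node *list = &n1;

static void slow_callback(struct node *n)
{
	printf("processing node %d\n", n->id);	/* may block; lock dropped */
}

int main(void)
{
	pthread_mutex_lock(&list_lock);
	for (struct node *n = list; n; ) {
		n->refs++;			/* pin the node under the lock */
		pthread_mutex_unlock(&list_lock);

		slow_callback(n);		/* runs without the list lock */
		n->refs--;			/* the "put" also happens unlocked */

		pthread_mutex_lock(&list_lock);
		n = n->next;			/* advance under the lock again */
	}
	pthread_mutex_unlock(&list_lock);
	return 0;
}
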
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index a093644ac39fb..aab48b292a3bb 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -298,6 +298,7 @@ static struct atari_floppy_struct {
+ 				   disk change detection) */
+ 	int flags;		/* flags */
+ 	struct gendisk *disk[NUM_DISK_MINORS];
++	bool registered[NUM_DISK_MINORS];
+ 	int ref;
+ 	int type;
+ 	struct blk_mq_tag_set tag_set;
+@@ -456,10 +457,20 @@ static DEFINE_TIMER(fd_timer, check_change);
+ 	
+ static void fd_end_request_cur(blk_status_t err)
+ {
++	DPRINT(("fd_end_request_cur(), bytes %d of %d\n",
++		blk_rq_cur_bytes(fd_request),
++		blk_rq_bytes(fd_request)));
++
+ 	if (!blk_update_request(fd_request, err,
+ 				blk_rq_cur_bytes(fd_request))) {
++		DPRINT(("calling __blk_mq_end_request()\n"));
+ 		__blk_mq_end_request(fd_request, err);
+ 		fd_request = NULL;
++	} else {
++		/* requeue rest of request */
++		DPRINT(("calling blk_mq_requeue_request()\n"));
++		blk_mq_requeue_request(fd_request, true);
++		fd_request = NULL;
+ 	}
+ }
+ 
+@@ -653,9 +664,6 @@ static inline void copy_buffer(void *from, void *to)
+ 		*p2++ = *p1++;
+ }
+ 
+-  
+-  
+-
+ /* General Interrupt Handling */
+ 
+ static void (*FloppyIRQHandler)( int status ) = NULL;
+@@ -700,12 +708,21 @@ static void fd_error( void )
+ 	if (fd_request->error_count >= MAX_ERRORS) {
+ 		printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
+ 		fd_end_request_cur(BLK_STS_IOERR);
++		finish_fdc();
++		return;
+ 	}
+ 	else if (fd_request->error_count == RECALIBRATE_ERRORS) {
+ 		printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
+ 		if (SelectedDrive != -1)
+ 			SUD.track = -1;
+ 	}
++	/* need to re-run request to recalibrate */
++	atari_disable_irq( IRQ_MFP_FDC );
++
++	setup_req_params( SelectedDrive );
++	do_fd_action( SelectedDrive );
++
++	atari_enable_irq( IRQ_MFP_FDC );
+ }
+ 
+ 
+@@ -732,8 +749,10 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	if (type) {
+ 		type--;
+ 		if (type >= NUM_DISK_MINORS ||
+-		    minor2disktype[type].drive_types > DriveType)
++		    minor2disktype[type].drive_types > DriveType) {
++			finish_fdc();
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	q = unit[drive].disk[type]->queue;
+@@ -751,6 +770,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	}
+ 
+ 	if (!UDT || desc->track >= UDT->blocks/UDT->spt/2 || desc->head >= 2) {
++		finish_fdc();
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+@@ -791,6 +811,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 
+ 	wait_for_completion(&format_wait);
+ 
++	finish_fdc();
+ 	ret = FormatError ? -EIO : 0;
+ out:
+ 	blk_mq_unquiesce_queue(q);
+@@ -825,6 +846,7 @@ static void do_fd_action( int drive )
+ 		    else {
+ 			/* all sectors finished */
+ 			fd_end_request_cur(BLK_STS_OK);
++			finish_fdc();
+ 			return;
+ 		    }
+ 		}
+@@ -1229,6 +1251,7 @@ static void fd_rwsec_done1(int status)
+ 	else {
+ 		/* all sectors finished */
+ 		fd_end_request_cur(BLK_STS_OK);
++		finish_fdc();
+ 	}
+ 	return;
+   
+@@ -1350,7 +1373,7 @@ static void fd_times_out(struct timer_list *unused)
+ 
+ static void finish_fdc( void )
+ {
+-	if (!NeedSeek) {
++	if (!NeedSeek || !stdma_is_locked_by(floppy_irq)) {
+ 		finish_fdc_done( 0 );
+ 	}
+ 	else {
+@@ -1385,7 +1408,8 @@ static void finish_fdc_done( int dummy )
+ 	start_motor_off_timer();
+ 
+ 	local_irq_save(flags);
+-	stdma_release();
++	if (stdma_is_locked_by(floppy_irq))
++		stdma_release();
+ 	local_irq_restore(flags);
+ 
+ 	DPRINT(("finish_fdc() finished\n"));
+@@ -1475,15 +1499,6 @@ static void setup_req_params( int drive )
+ 			ReqTrack, ReqSector, (unsigned long)ReqData ));
+ }
+ 
+-static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx)
+-{
+-	spin_lock_irq(&ataflop_lock);
+-	atari_disable_irq(IRQ_MFP_FDC);
+-	finish_fdc();
+-	atari_enable_irq(IRQ_MFP_FDC);
+-	spin_unlock_irq(&ataflop_lock);
+-}
+-
+ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				     const struct blk_mq_queue_data *bd)
+ {
+@@ -1491,6 +1506,10 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	int drive = floppy - unit;
+ 	int type = floppy->type;
+ 
++	DPRINT(("Queue request: drive %d type %d sectors %d of %d last %d\n",
++		drive, type, blk_rq_cur_sectors(bd->rq),
++		blk_rq_sectors(bd->rq), bd->last));
++
+ 	spin_lock_irq(&ataflop_lock);
+ 	if (fd_request) {
+ 		spin_unlock_irq(&ataflop_lock);
+@@ -1511,6 +1530,7 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		/* drive not connected */
+ 		printk(KERN_ERR "Unknown Device: fd%d\n", drive );
+ 		fd_end_request_cur(BLK_STS_IOERR);
++		stdma_release();
+ 		goto out;
+ 	}
+ 		
+@@ -1527,11 +1547,13 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		if (--type >= NUM_DISK_MINORS) {
+ 			printk(KERN_WARNING "fd%d: invalid disk format", drive );
+ 			fd_end_request_cur(BLK_STS_IOERR);
++			stdma_release();
+ 			goto out;
+ 		}
+ 		if (minor2disktype[type].drive_types > DriveType)  {
+ 			printk(KERN_WARNING "fd%d: unsupported disk format", drive );
+ 			fd_end_request_cur(BLK_STS_IOERR);
++			stdma_release();
+ 			goto out;
+ 		}
+ 		type = minor2disktype[type].index;
+@@ -1550,8 +1572,6 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 	setup_req_params( drive );
+ 	do_fd_action( drive );
+ 
+-	if (bd->last)
+-		finish_fdc();
+ 	atari_enable_irq( IRQ_MFP_FDC );
+ 
+ out:
+@@ -1634,6 +1654,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
+ 		/* what if type > 0 here? Overwrite specified entry ? */
+ 		if (type) {
+ 		        /* refuse to re-set a predefined type for now */
++			finish_fdc();
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1701,8 +1722,10 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
+ 
+ 		/* sanity check */
+ 		if (setprm.track != dtp->blocks/dtp->spt/2 ||
+-		    setprm.head != 2)
++		    setprm.head != 2) {
++			finish_fdc();
+ 			return -EINVAL;
++		}
+ 
+ 		UDT = dtp;
+ 		set_capacity(disk, UDT->blocks);
+@@ -1962,7 +1985,6 @@ static const struct block_device_operations floppy_fops = {
+ 
+ static const struct blk_mq_ops ataflop_mq_ops = {
+ 	.queue_rq = ataflop_queue_rq,
+-	.commit_rqs = ataflop_commit_rqs,
+ };
+ 
+ static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
+@@ -1986,8 +2008,6 @@ static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
+ 	return 0;
+ }
+ 
+-static DEFINE_MUTEX(ataflop_probe_lock);
+-
+ static void ataflop_probe(dev_t dev)
+ {
+ 	int drive = MINOR(dev) & 3;
+@@ -1998,12 +2018,46 @@ static void ataflop_probe(dev_t dev)
+ 
+ 	if (drive >= FD_MAX_UNITS || type >= NUM_DISK_MINORS)
+ 		return;
+-	mutex_lock(&ataflop_probe_lock);
+ 	if (!unit[drive].disk[type]) {
+-		if (ataflop_alloc_disk(drive, type) == 0)
++		if (ataflop_alloc_disk(drive, type) == 0) {
+ 			add_disk(unit[drive].disk[type]);
++			unit[drive].registered[type] = true;
++		}
++	}
++}
++
++static void atari_floppy_cleanup(void)
++{
++	int i;
++	int type;
++
++	for (i = 0; i < FD_MAX_UNITS; i++) {
++		for (type = 0; type < NUM_DISK_MINORS; type++) {
++			if (!unit[i].disk[type])
++				continue;
++			del_gendisk(unit[i].disk[type]);
++			blk_cleanup_queue(unit[i].disk[type]->queue);
++			put_disk(unit[i].disk[type]);
++		}
++		blk_mq_free_tag_set(&unit[i].tag_set);
++	}
++
++	del_timer_sync(&fd_timer);
++	atari_stram_free(DMABuffer);
++}
++
++static void atari_cleanup_floppy_disk(struct atari_floppy_struct *fs)
++{
++	int type;
++
++	for (type = 0; type < NUM_DISK_MINORS; type++) {
++		if (!fs->disk[type])
++			continue;
++		if (fs->registered[type])
++			del_gendisk(fs->disk[type]);
++		blk_cleanup_disk(fs->disk[type]);
+ 	}
+-	mutex_unlock(&ataflop_probe_lock);
++	blk_mq_free_tag_set(&fs->tag_set);
+ }
+ 
+ static int __init atari_floppy_init (void)
+@@ -2015,11 +2069,6 @@ static int __init atari_floppy_init (void)
+ 		/* Amiga, Mac, ... don't have Atari-compatible floppy :-) */
+ 		return -ENODEV;
+ 
+-	mutex_lock(&ataflop_probe_lock);
+-	ret = __register_blkdev(FLOPPY_MAJOR, "fd", ataflop_probe);
+-	if (ret)
+-		goto out_unlock;
+-
+ 	for (i = 0; i < FD_MAX_UNITS; i++) {
+ 		memset(&unit[i].tag_set, 0, sizeof(unit[i].tag_set));
+ 		unit[i].tag_set.ops = &ataflop_mq_ops;
+@@ -2065,6 +2114,7 @@ static int __init atari_floppy_init (void)
+ 		unit[i].track = -1;
+ 		unit[i].flags = 0;
+ 		add_disk(unit[i].disk[0]);
++		unit[i].registered[0] = true;
+ 	}
+ 
+ 	printk(KERN_INFO "Atari floppy driver: max. %cD, %strack buffering\n",
+@@ -2072,18 +2122,17 @@ static int __init atari_floppy_init (void)
+ 	       UseTrackbuffer ? "" : "no ");
+ 	config_types();
+ 
+-	return 0;
++	ret = __register_blkdev(FLOPPY_MAJOR, "fd", ataflop_probe);
++	if (ret) {
++		printk(KERN_ERR "atari_floppy_init: cannot register block device\n");
++		atari_floppy_cleanup();
++	}
++	return ret;
+ 
+ err:
+-	while (--i >= 0) {
+-		blk_cleanup_queue(unit[i].disk[0]->queue);
+-		put_disk(unit[i].disk[0]);
+-		blk_mq_free_tag_set(&unit[i].tag_set);
+-	}
++	while (--i >= 0)
++		atari_cleanup_floppy_disk(&unit[i]);
+ 
+-	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-out_unlock:
+-	mutex_unlock(&ataflop_probe_lock);
+ 	return ret;
+ }
+ 
+@@ -2128,22 +2177,8 @@ __setup("floppy=", atari_floppy_setup);
+ 
+ static void __exit atari_floppy_exit(void)
+ {
+-	int i, type;
+-
+-	for (i = 0; i < FD_MAX_UNITS; i++) {
+-		for (type = 0; type < NUM_DISK_MINORS; type++) {
+-			if (!unit[i].disk[type])
+-				continue;
+-			del_gendisk(unit[i].disk[type]);
+-			blk_cleanup_queue(unit[i].disk[type]->queue);
+-			put_disk(unit[i].disk[type]);
+-		}
+-		blk_mq_free_tag_set(&unit[i].tag_set);
+-	}
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-
+-	del_timer_sync(&fd_timer);
+-	atari_stram_free( DMABuffer );
++	atari_floppy_cleanup();
+ }
+ 
+ module_init(atari_floppy_init)
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index fef79ea52e3ed..fb2aafabfebc1 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4478,6 +4478,7 @@ static const struct blk_mq_ops floppy_mq_ops = {
+ };
+ 
+ static struct platform_device floppy_device[N_DRIVE];
++static bool registered[N_DRIVE];
+ 
+ static bool floppy_available(int drive)
+ {
+@@ -4693,6 +4694,8 @@ static int __init do_floppy_init(void)
+ 		if (err)
+ 			goto out_remove_drives;
+ 
++		registered[drive] = true;
++
+ 		device_add_disk(&floppy_device[drive].dev, disks[drive][0],
+ 				NULL);
+ 	}
+@@ -4703,7 +4706,8 @@ out_remove_drives:
+ 	while (drive--) {
+ 		if (floppy_available(drive)) {
+ 			del_gendisk(disks[drive][0]);
+-			platform_device_unregister(&floppy_device[drive]);
++			if (registered[drive])
++				platform_device_unregister(&floppy_device[drive]);
+ 		}
+ 	}
+ out_release_dma:
+@@ -4946,7 +4950,8 @@ static void __exit floppy_module_exit(void)
+ 				if (disks[drive][i])
+ 					del_gendisk(disks[drive][i]);
+ 			}
+-			platform_device_unregister(&floppy_device[drive]);
++			if (registered[drive])
++				platform_device_unregister(&floppy_device[drive]);
+ 		}
+ 		for (i = 0; i < ARRAY_SIZE(floppy_type); i++) {
+ 			if (disks[drive][i])
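
Both floppy drivers above grow a registered[] flag array so teardown only unregisters devices whose registration actually succeeded. A toy sketch of that bookkeeping (the failure on unit 2 is simulated):

#include <stdbool.h>
#include <stdio.h>

#define N 4

static bool registered[N];

static int do_register(int i)
{
	if (i == 2)
		return -1;		/* simulate a failure on unit 2 */
	registered[i] = true;
	printf("registered %d\n", i);
	return 0;
}

static void cleanup(void)
{
	for (int i = 0; i < N; i++)
		if (registered[i])	/* skip units that never registered */
			printf("unregistered %d\n", i);
}

int main(void)
{
	for (int i = 0; i < N; i++)
		if (do_register(i))
			break;
	cleanup();			/* tears down units 0 and 1 only */
	return 0;
}
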
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 99ab58b877f8c..f72ff515ee51b 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -122,11 +122,11 @@ struct nbd_device {
+ 	struct work_struct remove_work;
+ 
+ 	struct list_head list;
+-	struct task_struct *task_recv;
+ 	struct task_struct *task_setup;
+ 
+ 	struct completion *destroy_complete;
+ 	unsigned long flags;
++	pid_t pid; /* pid of nbd-client, if attached */
+ 
+ 	char *backend;
+ };
+@@ -218,7 +218,7 @@ static ssize_t pid_show(struct device *dev,
+ 	struct gendisk *disk = dev_to_disk(dev);
+ 	struct nbd_device *nbd = (struct nbd_device *)disk->private_data;
+ 
+-	return sprintf(buf, "%d\n", task_pid_nr(nbd->task_recv));
++	return sprintf(buf, "%d\n", nbd->pid);
+ }
+ 
+ static const struct device_attribute pid_attr = {
+@@ -362,7 +362,7 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+ 	nbd->config->bytesize = bytesize;
+ 	nbd->config->blksize_bits = __ffs(blksize);
+ 
+-	if (!nbd->task_recv)
++	if (!nbd->pid)
+ 		return 0;
+ 
+ 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
+@@ -1274,7 +1274,7 @@ static void nbd_config_put(struct nbd_device *nbd)
+ 		if (test_and_clear_bit(NBD_RT_HAS_PID_FILE,
+ 				       &config->runtime_flags))
+ 			device_remove_file(disk_to_dev(nbd->disk), &pid_attr);
+-		nbd->task_recv = NULL;
++		nbd->pid = 0;
+ 		if (test_and_clear_bit(NBD_RT_HAS_BACKEND_FILE,
+ 				       &config->runtime_flags)) {
+ 			device_remove_file(disk_to_dev(nbd->disk), &backend_attr);
+@@ -1315,7 +1315,7 @@ static int nbd_start_device(struct nbd_device *nbd)
+ 	int num_connections = config->num_connections;
+ 	int error = 0, i;
+ 
+-	if (nbd->task_recv)
++	if (nbd->pid)
+ 		return -EBUSY;
+ 	if (!config->socks)
+ 		return -EINVAL;
+@@ -1334,7 +1334,7 @@ static int nbd_start_device(struct nbd_device *nbd)
+ 	}
+ 
+ 	blk_mq_update_nr_hw_queues(&nbd->tag_set, config->num_connections);
+-	nbd->task_recv = current;
++	nbd->pid = task_pid_nr(current);
+ 
+ 	nbd_parse_flags(nbd);
+ 
+@@ -1590,8 +1590,8 @@ static int nbd_dbg_tasks_show(struct seq_file *s, void *unused)
+ {
+ 	struct nbd_device *nbd = s->private;
+ 
+-	if (nbd->task_recv)
+-		seq_printf(s, "recv: %d\n", task_pid_nr(nbd->task_recv));
++	if (nbd->pid)
++		seq_printf(s, "recv: %d\n", nbd->pid);
+ 
+ 	return 0;
+ }
+@@ -1777,11 +1777,11 @@ static int nbd_dev_add(int index)
+ 	disk->major = NBD_MAJOR;
+ 
+ 	/* Too big first_minor can cause duplicate creation of
+-	 * sysfs files/links, since first_minor will be truncated to
+-	 * byte in __device_add_disk().
++	 * sysfs files/links, since index << part_shift might overflow, or
++	 * MKDEV() expect that the max bits of first_minor is 20.
+ 	 */
+ 	disk->first_minor = index << part_shift;
+-	if (disk->first_minor > 0xff) {
++	if (disk->first_minor < index || disk->first_minor > MINORMASK) {
+ 		err = -EINVAL;
+ 		goto out_free_idr;
+ 	}
+@@ -2177,7 +2177,7 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
+ 	mutex_lock(&nbd->config_lock);
+ 	config = nbd->config;
+ 	if (!test_bit(NBD_RT_BOUND, &config->runtime_flags) ||
+-	    !nbd->task_recv) {
++	    !nbd->pid) {
+ 		dev_err(nbd_to_dev(nbd),
+ 			"not configured, cannot reconfigure\n");
+ 		ret = -EINVAL;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index fcaf2750f68f7..6383c81ac5b37 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -910,7 +910,7 @@ static ssize_t read_block_state(struct file *file, char __user *buf,
+ 			zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.',
+ 			zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.');
+ 
+-		if (count < copied) {
++		if (count <= copied) {
+ 			zram_slot_unlock(zram, index);
+ 			break;
+ 		}
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index e9d91d7c0db48..9ba22b13b4fa0 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -158,8 +158,10 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	int err;
+ 
+ 	hlen = sizeof(*hdr) + wmt_params->dlen;
+-	if (hlen > 255)
+-		return -EINVAL;
++	if (hlen > 255) {
++		err = -EINVAL;
++		goto err_free_skb;
++	}
+ 
+ 	hdr = (struct mtk_wmt_hdr *)&wc;
+ 	hdr->dir = 1;
+@@ -173,7 +175,7 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	err = __hci_cmd_send(hdev, 0xfc6f, hlen, &wc);
+ 	if (err < 0) {
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return err;
++		goto err_free_skb;
+ 	}
+ 
+ 	/* The vendor specific WMT commands are all answered by a vendor
+@@ -190,13 +192,14 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	if (err == -EINTR) {
+ 		bt_dev_err(hdev, "Execution of wmt command interrupted");
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return err;
++		goto err_free_skb;
+ 	}
+ 
+ 	if (err) {
+ 		bt_dev_err(hdev, "Execution of wmt command timed out");
+ 		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+-		return -ETIMEDOUT;
++		err = -ETIMEDOUT;
++		goto err_free_skb;
+ 	}
+ 
+ 	/* Parse and handle the return WMT event */
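
The btmtkuart hunk converts early returns in mtk_hci_wmt_sync() into jumps to a single err_free_skb label so the buffer allocated earlier is released on every failure path. A minimal compilable illustration of that single-exit cleanup style (buffer size and error values are arbitrary):

#include <stdio.h>
#include <stdlib.h>

static int do_transfer(size_t len)
{
	int err = 0;
	char *buf = malloc(64);	/* resource every path must release */

	if (!buf)
		return -1;

	if (len > 255) {	/* was an early 'return' before the fix */
		err = -2;
		goto err_free;
	}

	printf("sent %zu bytes\n", len);

err_free:
	free(buf);		/* reached from success and failure alike */
	return err;
}

int main(void)
{
	printf("%d\n", do_transfer(16));	/* 0 */
	printf("%d\n", do_transfer(300));	/* -2, buf still freed */
	return 0;
}
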
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 418ada474a85d..dd149cffe5e5b 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -17,6 +17,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/slab.h>
+ #include <linux/sys_soc.h>
++#include <linux/timekeeping.h>
+ #include <linux/iopoll.h>
+ 
+ #include <linux/platform_data/ti-sysc.h>
+@@ -223,37 +224,77 @@ static u32 sysc_read_sysstatus(struct sysc *ddata)
+ 	return sysc_read(ddata, offset);
+ }
+ 
+-/* Poll on reset status */
+-static int sysc_wait_softreset(struct sysc *ddata)
++static int sysc_poll_reset_sysstatus(struct sysc *ddata)
+ {
+-	u32 sysc_mask, syss_done, rstval;
+-	int syss_offset, error = 0;
+-
+-	if (ddata->cap->regbits->srst_shift < 0)
+-		return 0;
+-
+-	syss_offset = ddata->offsets[SYSC_SYSSTATUS];
+-	sysc_mask = BIT(ddata->cap->regbits->srst_shift);
++	int error, retries;
++	u32 syss_done, rstval;
+ 
+ 	if (ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED)
+ 		syss_done = 0;
+ 	else
+ 		syss_done = ddata->cfg.syss_mask;
+ 
+-	if (syss_offset >= 0) {
++	if (likely(!timekeeping_suspended)) {
+ 		error = readx_poll_timeout_atomic(sysc_read_sysstatus, ddata,
+ 				rstval, (rstval & ddata->cfg.syss_mask) ==
+ 				syss_done, 100, MAX_MODULE_SOFTRESET_WAIT);
++	} else {
++		retries = MAX_MODULE_SOFTRESET_WAIT;
++		while (retries--) {
++			rstval = sysc_read_sysstatus(ddata);
++			if ((rstval & ddata->cfg.syss_mask) == syss_done)
++				return 0;
++			udelay(2); /* Account for udelay flakeyness */
++		}
++		error = -ETIMEDOUT;
++	}
+ 
+-	} else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
++	return error;
++}
++
++static int sysc_poll_reset_sysconfig(struct sysc *ddata)
++{
++	int error, retries;
++	u32 sysc_mask, rstval;
++
++	sysc_mask = BIT(ddata->cap->regbits->srst_shift);
++
++	if (likely(!timekeeping_suspended)) {
+ 		error = readx_poll_timeout_atomic(sysc_read_sysconfig, ddata,
+ 				rstval, !(rstval & sysc_mask),
+ 				100, MAX_MODULE_SOFTRESET_WAIT);
++	} else {
++		retries = MAX_MODULE_SOFTRESET_WAIT;
++		while (retries--) {
++			rstval = sysc_read_sysconfig(ddata);
++			if (!(rstval & sysc_mask))
++				return 0;
++			udelay(2); /* Account for udelay flakeyness */
++		}
++		error = -ETIMEDOUT;
+ 	}
+ 
+ 	return error;
+ }
+ 
++/* Poll on reset status */
++static int sysc_wait_softreset(struct sysc *ddata)
++{
++	int syss_offset, error = 0;
++
++	if (ddata->cap->regbits->srst_shift < 0)
++		return 0;
++
++	syss_offset = ddata->offsets[SYSC_SYSSTATUS];
++
++	if (syss_offset >= 0)
++		error = sysc_poll_reset_sysstatus(ddata);
++	else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS)
++		error = sysc_poll_reset_sysconfig(ddata);
++
++	return error;
++}
++
+ static int sysc_add_named_clock_from_child(struct sysc *ddata,
+ 					   const char *name,
+ 					   const char *optfck_name)
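
The ti-sysc change splits the softreset wait into two helpers and, when timekeeping is suspended (so readx_poll_timeout_atomic() cannot be used), falls back to a plain bounded retry loop with a fixed delay. A small standalone C sketch of such a bounded poll; the retry budget, delay, and simulated status register are invented for illustration:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define MAX_WAIT 10	/* retries; stands in for MAX_MODULE_SOFTRESET_WAIT */

static int fake_status;	/* pretend hardware status register */

static bool reset_done(void)
{
	return ++fake_status >= 4;	/* "completes" on the fourth read */
}

static int poll_reset(void)
{
	struct timespec d = { 0, 2000 };	/* ~2 us, mirrors udelay(2) */

	for (int retries = MAX_WAIT; retries-- > 0; ) {
		if (reset_done())
			return 0;
		nanosleep(&d, NULL);	/* no timekeeping dependency */
	}
	return -1;			/* -ETIMEDOUT analogue */
}

int main(void)
{
	printf("%d\n", poll_reset());	/* 0: completed within budget */
	return 0;
}
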
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index 8ad7b515a51b8..6c00ea0085553 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -166,8 +166,13 @@ static int mtk_rng_runtime_resume(struct device *dev)
+ 	return mtk_rng_init(&priv->rng);
+ }
+ 
+-static UNIVERSAL_DEV_PM_OPS(mtk_rng_pm_ops, mtk_rng_runtime_suspend,
+-			    mtk_rng_runtime_resume, NULL);
++static const struct dev_pm_ops mtk_rng_pm_ops = {
++	SET_RUNTIME_PM_OPS(mtk_rng_runtime_suspend,
++			   mtk_rng_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				pm_runtime_force_resume)
++};
++
+ #define MTK_RNG_PM_OPS (&mtk_rng_pm_ops)
+ #else	/* CONFIG_PM */
+ #define MTK_RNG_PM_OPS NULL
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index e96cb5c4f97a3..a08f53f208bfe 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4789,7 +4789,9 @@ static atomic_t recv_msg_inuse_count = ATOMIC_INIT(0);
+ static void free_smi_msg(struct ipmi_smi_msg *msg)
+ {
+ 	atomic_dec(&smi_msg_inuse_count);
+-	kfree(msg);
++	/* Try to keep as much stuff out of the panic path as possible. */
++	if (!oops_in_progress)
++		kfree(msg);
+ }
+ 
+ struct ipmi_smi_msg *ipmi_alloc_smi_msg(void)
+@@ -4808,7 +4810,9 @@ EXPORT_SYMBOL(ipmi_alloc_smi_msg);
+ static void free_recv_msg(struct ipmi_recv_msg *msg)
+ {
+ 	atomic_dec(&recv_msg_inuse_count);
+-	kfree(msg);
++	/* Try to keep as much stuff out of the panic path as possible. */
++	if (!oops_in_progress)
++		kfree(msg);
+ }
+ 
+ static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void)
+@@ -4826,7 +4830,7 @@ static struct ipmi_recv_msg *ipmi_alloc_recv_msg(void)
+ 
+ void ipmi_free_recv_msg(struct ipmi_recv_msg *msg)
+ {
+-	if (msg->user)
++	if (msg->user && !oops_in_progress)
+ 		kref_put(&msg->user->refcount, free_user);
+ 	msg->done(msg);
+ }
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index e4ff3b50de7f3..883b4a3410122 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -342,13 +342,17 @@ static atomic_t msg_tofree = ATOMIC_INIT(0);
+ static DECLARE_COMPLETION(msg_wait);
+ static void msg_free_smi(struct ipmi_smi_msg *msg)
+ {
+-	if (atomic_dec_and_test(&msg_tofree))
+-		complete(&msg_wait);
++	if (atomic_dec_and_test(&msg_tofree)) {
++		if (!oops_in_progress)
++			complete(&msg_wait);
++	}
+ }
+ static void msg_free_recv(struct ipmi_recv_msg *msg)
+ {
+-	if (atomic_dec_and_test(&msg_tofree))
+-		complete(&msg_wait);
++	if (atomic_dec_and_test(&msg_tofree)) {
++		if (!oops_in_progress)
++			complete(&msg_wait);
++	}
+ }
+ static struct ipmi_smi_msg smi_msg = {
+ 	.done = msg_free_smi
+@@ -434,8 +438,10 @@ static int _ipmi_set_timeout(int do_heartbeat)
+ 	rv = __ipmi_set_timeout(&smi_msg,
+ 				&recv_msg,
+ 				&send_heartbeat_now);
+-	if (rv)
++	if (rv) {
++		atomic_set(&msg_tofree, 0);
+ 		return rv;
++	}
+ 
+ 	wait_for_completion(&msg_wait);
+ 
+@@ -497,7 +503,7 @@ static void panic_halt_ipmi_heartbeat(void)
+ 	msg.cmd = IPMI_WDOG_RESET_TIMER;
+ 	msg.data = NULL;
+ 	msg.data_len = 0;
+-	atomic_inc(&panic_done_count);
++	atomic_add(2, &panic_done_count);
+ 	rv = ipmi_request_supply_msgs(watchdog_user,
+ 				      (struct ipmi_addr *) &addr,
+ 				      0,
+@@ -507,7 +513,7 @@ static void panic_halt_ipmi_heartbeat(void)
+ 				      &panic_halt_heartbeat_recv_msg,
+ 				      1);
+ 	if (rv)
+-		atomic_dec(&panic_done_count);
++		atomic_sub(2, &panic_done_count);
+ }
+ 
+ static struct ipmi_smi_msg panic_halt_smi_msg = {
+@@ -531,12 +537,12 @@ static void panic_halt_ipmi_set_timeout(void)
+ 	/* Wait for the messages to be free. */
+ 	while (atomic_read(&panic_done_count) != 0)
+ 		ipmi_poll_interface(watchdog_user);
+-	atomic_inc(&panic_done_count);
++	atomic_add(2, &panic_done_count);
+ 	rv = __ipmi_set_timeout(&panic_halt_smi_msg,
+ 				&panic_halt_recv_msg,
+ 				&send_heartbeat_now);
+ 	if (rv) {
+-		atomic_dec(&panic_done_count);
++		atomic_sub(2, &panic_done_count);
+ 		pr_warn("Unable to extend the watchdog timeout\n");
+ 	} else {
+ 		if (send_heartbeat_now)
+@@ -580,6 +586,7 @@ restart:
+ 				      &recv_msg,
+ 				      1);
+ 	if (rv) {
++		atomic_set(&msg_tofree, 0);
+ 		pr_warn("heartbeat send failure: %d\n", rv);
+ 		return rv;
+ 	}
+diff --git a/drivers/char/ipmi/kcs_bmc_serio.c b/drivers/char/ipmi/kcs_bmc_serio.c
+index 7948cabde50b4..7e2067628a6ce 100644
+--- a/drivers/char/ipmi/kcs_bmc_serio.c
++++ b/drivers/char/ipmi/kcs_bmc_serio.c
+@@ -73,10 +73,12 @@ static int kcs_bmc_serio_add_device(struct kcs_bmc_device *kcs_bmc)
+ 	struct serio *port;
+ 
+ 	priv = devm_kzalloc(kcs_bmc->dev, sizeof(*priv), GFP_KERNEL);
++	if (!priv)
++		return -ENOMEM;
+ 
+ 	/* Use kzalloc() as the allocation is cleaned up with kfree() via serio_unregister_port() */
+ 	port = kzalloc(sizeof(*port), GFP_KERNEL);
+-	if (!(priv && port))
++	if (!port)
+ 		return -ENOMEM;
+ 
+ 	port->id.type = SERIO_8042;
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 784b8b3cb903f..97e916856cf3e 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -455,6 +455,9 @@ static int tpm2_map_response_body(struct tpm_chip *chip, u32 cc, u8 *rsp,
+ 	if (be32_to_cpu(data->capability) != TPM2_CAP_HANDLES)
+ 		return 0;
+ 
++	if (be32_to_cpu(data->count) > (UINT_MAX - TPM_HEADER_SIZE - 9) / 4)
++		return -EFAULT;
++
+ 	if (len != TPM_HEADER_SIZE + 9 + 4 * be32_to_cpu(data->count))
+ 		return -EFAULT;
+ 
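
The tpm2-space.c fix rejects a response count large enough to overflow the later TPM_HEADER_SIZE + 9 + 4 * count computation by bounding count against (UINT_MAX - header - 9) / 4 first. A tiny compilable sketch of the same overflow-safe bound check (the header size here is an assumed stand-in):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define HDR 10	/* stands in for TPM_HEADER_SIZE; value assumed */

/* Reject counts that would make HDR + 9 + 4*count wrap around UINT_MAX. */
static int check_len(uint32_t count, uint32_t len)
{
	if (count > (UINT_MAX - HDR - 9) / 4)
		return -1;		/* would overflow */
	if (len != HDR + 9 + 4 * count)
		return -1;		/* inconsistent length */
	return 0;
}

int main(void)
{
	printf("%d\n", check_len(2, HDR + 9 + 8));	/* 0 */
	printf("%d\n", check_len(UINT32_MAX, 16));	/* -1: overflow caught */
	return 0;
}
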
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 69579efb247b3..b2659a4c40168 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -48,6 +48,7 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
+ 		unsigned long timeout, wait_queue_head_t *queue,
+ 		bool check_cancel)
+ {
++	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned long stop;
+ 	long rc;
+ 	u8 status;
+@@ -80,8 +81,8 @@ again:
+ 		}
+ 	} else {
+ 		do {
+-			usleep_range(TPM_TIMEOUT_USECS_MIN,
+-				     TPM_TIMEOUT_USECS_MAX);
++			usleep_range(priv->timeout_min,
++				     priv->timeout_max);
+ 			status = chip->ops->status(chip);
+ 			if ((status & mask) == mask)
+ 				return 0;
+@@ -945,7 +946,22 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX);
+ 	chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX);
+ 	chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX);
++	priv->timeout_min = TPM_TIMEOUT_USECS_MIN;
++	priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
+ 	priv->phy_ops = phy_ops;
++
++	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
++	if (rc < 0)
++		goto out_err;
++
++	priv->manufacturer_id = vendor;
++
++	if (priv->manufacturer_id == TPM_VID_ATML &&
++		!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
++		priv->timeout_min = TIS_TIMEOUT_MIN_ATML;
++		priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
++	}
++
+ 	dev_set_drvdata(&chip->dev, priv);
+ 
+ 	if (is_bsw()) {
+@@ -988,12 +1004,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	if (rc)
+ 		goto out_err;
+ 
+-	rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
+-	if (rc < 0)
+-		goto out_err;
+-
+-	priv->manufacturer_id = vendor;
+-
+ 	rc = tpm_tis_read8(priv, TPM_RID(0), &rid);
+ 	if (rc < 0)
+ 		goto out_err;
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index b2a3c6c72882d..3be24f221e32a 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -54,6 +54,8 @@ enum tis_defaults {
+ 	TIS_MEM_LEN = 0x5000,
+ 	TIS_SHORT_TIMEOUT = 750,	/* ms */
+ 	TIS_LONG_TIMEOUT = 2000,	/* 2 sec */
++	TIS_TIMEOUT_MIN_ATML = 14700,	/* usecs */
++	TIS_TIMEOUT_MAX_ATML = 15000,	/* usecs */
+ };
+ 
+ /* Some timeout values are needed before it is known whether the chip is
+@@ -98,6 +100,8 @@ struct tpm_tis_data {
+ 	wait_queue_head_t read_queue;
+ 	const struct tpm_tis_phy_ops *phy_ops;
+ 	unsigned short rng_quality;
++	unsigned int timeout_min; /* usecs */
++	unsigned int timeout_max; /* usecs */
+ };
+ 
+ struct tpm_tis_phy_ops {
+diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
+index 54584b4b00d19..aaa59a00eeaef 100644
+--- a/drivers/char/tpm/tpm_tis_spi_main.c
++++ b/drivers/char/tpm/tpm_tis_spi_main.c
+@@ -267,6 +267,7 @@ static const struct spi_device_id tpm_tis_spi_id[] = {
+ 	{ "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "slb9670", (unsigned long)tpm_tis_spi_probe },
+ 	{ "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
++	{ "tpm_tis-spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "cr50", (unsigned long)cr50_spi_probe },
+ 	{}
+ };
+diff --git a/drivers/char/xillybus/xillyusb.c b/drivers/char/xillybus/xillyusb.c
+index e7f88f35c7028..dc3551796e5ed 100644
+--- a/drivers/char/xillybus/xillyusb.c
++++ b/drivers/char/xillybus/xillyusb.c
+@@ -1912,6 +1912,7 @@ static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)
+ 
+ dealloc:
+ 	endpoint_dealloc(xdev->msg_ep); /* Also frees FIFO mem if allocated */
++	xdev->msg_ep = NULL;
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/clk/at91/clk-master.c b/drivers/clk/at91/clk-master.c
+index a80427980bf73..04d0dd8385945 100644
+--- a/drivers/clk/at91/clk-master.c
++++ b/drivers/clk/at91/clk-master.c
+@@ -280,7 +280,7 @@ static int clk_master_pres_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	else if (pres == 3)
+ 		pres = MASTER_PRES_MAX;
+-	else
++	else if (pres)
+ 		pres = ffs(pres) - 1;
+ 
+ 	spin_lock_irqsave(master->lock, flags);
+@@ -309,7 +309,7 @@ static unsigned long clk_master_pres_recalc_rate(struct clk_hw *hw,
+ 	spin_unlock_irqrestore(master->lock, flags);
+ 
+ 	pres = (val >> master->layout->pres_shift) & MASTER_PRES_MASK;
+-	if (pres == 3 && characteristics->have_div3_pres)
++	if (pres == MASTER_PRES_MAX && characteristics->have_div3_pres)
+ 		pres = 3;
+ 	else
+ 		pres = (1 << pres);
+@@ -610,7 +610,7 @@ static int clk_sama7g5_master_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	if (div == 3)
+ 		div = MASTER_PRES_MAX;
+-	else
++	else if (div)
+ 		div = ffs(div) - 1;
+ 
+ 	spin_lock_irqsave(master->lock, flags);
+diff --git a/drivers/clk/at91/clk-sam9x60-pll.c b/drivers/clk/at91/clk-sam9x60-pll.c
+index 34e3ab13741ac..1f52409475e9c 100644
+--- a/drivers/clk/at91/clk-sam9x60-pll.c
++++ b/drivers/clk/at91/clk-sam9x60-pll.c
+@@ -71,8 +71,8 @@ static unsigned long sam9x60_frac_pll_recalc_rate(struct clk_hw *hw,
+ 	struct sam9x60_pll_core *core = to_sam9x60_pll_core(hw);
+ 	struct sam9x60_frac *frac = to_sam9x60_frac(core);
+ 
+-	return (parent_rate * (frac->mul + 1) +
+-		((u64)parent_rate * frac->frac >> 22));
++	return parent_rate * (frac->mul + 1) +
++		DIV_ROUND_CLOSEST_ULL((u64)parent_rate * frac->frac, (1 << 22));
+ }
+ 
+ static int sam9x60_frac_pll_prepare(struct clk_hw *hw)
+diff --git a/drivers/clk/at91/pmc.c b/drivers/clk/at91/pmc.c
+index 20ee9dccee787..b40035b011d0a 100644
+--- a/drivers/clk/at91/pmc.c
++++ b/drivers/clk/at91/pmc.c
+@@ -267,6 +267,11 @@ static int __init pmc_register_ops(void)
+ 	if (!np)
+ 		return -ENODEV;
+ 
++	if (!of_device_is_available(np)) {
++		of_node_put(np);
++		return -ENODEV;
++	}
++
+ 	pmcreg = device_node_to_regmap(np);
+ 	of_node_put(np);
+ 	if (IS_ERR(pmcreg))
+diff --git a/drivers/clk/mvebu/ap-cpu-clk.c b/drivers/clk/mvebu/ap-cpu-clk.c
+index 08ba59ec3fb17..71bdd7c3ff034 100644
+--- a/drivers/clk/mvebu/ap-cpu-clk.c
++++ b/drivers/clk/mvebu/ap-cpu-clk.c
+@@ -256,12 +256,15 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		int cpu, err;
+ 
+ 		err = of_property_read_u32(dn, "reg", &cpu);
+-		if (WARN_ON(err))
++		if (WARN_ON(err)) {
++			of_node_put(dn);
+ 			return err;
++		}
+ 
+ 		/* If cpu2 or cpu3 is enabled */
+ 		if (cpu & APN806_CLUSTER_NUM_MASK) {
+ 			nclusters = 2;
++			of_node_put(dn);
+ 			break;
+ 		}
+ 	}
+@@ -288,8 +291,10 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		int cpu, err;
+ 
+ 		err = of_property_read_u32(dn, "reg", &cpu);
+-		if (WARN_ON(err))
++		if (WARN_ON(err)) {
++			of_node_put(dn);
+ 			return err;
++		}
+ 
+ 		cluster_index = cpu & APN806_CLUSTER_NUM_MASK;
+ 		cluster_index >>= APN806_CLUSTER_NUM_OFFSET;
+@@ -301,6 +306,7 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		parent = of_clk_get(np, cluster_index);
+ 		if (IS_ERR(parent)) {
+ 			dev_err(dev, "Could not get the clock parent\n");
++			of_node_put(dn);
+ 			return -EINVAL;
+ 		}
+ 		parent_name =  __clk_get_name(parent);
+@@ -319,8 +325,10 @@ static int ap_cpu_clock_probe(struct platform_device *pdev)
+ 		init.parent_names = &parent_name;
+ 
+ 		ret = devm_clk_hw_register(dev, &ap_cpu_clk[cluster_index].hw);
+-		if (ret)
++		if (ret) {
++			of_node_put(dn);
+ 			return ret;
++		}
+ 		ap_cpu_data->hws[cluster_index] = &ap_cpu_clk[cluster_index].hw;
+ 	}
+ 
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index eb661b539a3ed..da4b9ecec6448 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -24,6 +24,7 @@ config I8253_LOCK
+ 
+ config OMAP_DM_TIMER
+ 	bool
++	select TIMER_OF
+ 
+ config CLKBLD_I8253
+ 	def_bool y if CLKSRC_I8253 || CLKEVT_I8253 || I8253_LOCK
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 45f3416988f1a..2c003d193c69b 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2514,8 +2514,15 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Resolve policy min/max to available frequencies. It ensures
++	 * no frequency resolution will neither overshoot the requested maximum
++	 * nor undershoot the requested minimum.
++	 */
+ 	policy->min = new_data.min;
+ 	policy->max = new_data.max;
++	policy->min = __resolve_freq(policy, policy->min, CPUFREQ_RELATION_L);
++	policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H);
+ 	trace_cpu_frequency_limits(policy);
+ 
+ 	policy->cached_target_freq = UINT_MAX;
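
The cpufreq.c hunk resolves the policy's min and max to frequencies that actually exist in the driver's table, so later lookups neither overshoot the requested maximum nor undershoot the requested minimum. A userspace sketch of the two resolution directions against a discrete table; the table values are invented, and resolve_low()/resolve_high() mirror the CPUFREQ_RELATION_L/H semantics:

#include <stddef.h>
#include <stdio.h>

static const unsigned int freqs[] = { 800, 1200, 1600, 2000 };	/* example table */
#define NFREQ (sizeof(freqs) / sizeof(freqs[0]))

/* RELATION_L analogue: lowest table frequency at or above the target. */
static unsigned int resolve_low(unsigned int target)
{
	for (size_t i = 0; i < NFREQ; i++)
		if (freqs[i] >= target)
			return freqs[i];
	return freqs[NFREQ - 1];
}

/* RELATION_H analogue: highest table frequency at or below the target. */
static unsigned int resolve_high(unsigned int target)
{
	for (size_t i = NFREQ; i-- > 0; )
		if (freqs[i] <= target)
			return freqs[i];
	return freqs[0];
}

int main(void)
{
	/* A 900..1900 policy resolves to 1200..1600: no over/undershoot. */
	printf("min=%u max=%u\n", resolve_low(900), resolve_high(1900));
	return 0;
}
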
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index e7cd3882bda4d..9c56c209aac18 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -615,9 +615,8 @@ static void intel_pstate_hybrid_hwp_calibrate(struct cpudata *cpu)
+ 	 * the scaling factor is too high, so recompute it so that the HWP_CAP
+ 	 * highest performance corresponds to the maximum turbo frequency.
+ 	 */
+-	if (turbo_freq < cpu->pstate.turbo_pstate * scaling) {
+-		pr_debug("CPU%d: scaling too high (%d)\n", cpu->cpu, scaling);
+-
++	cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling;
++	if (turbo_freq < cpu->pstate.turbo_freq) {
+ 		cpu->pstate.turbo_freq = turbo_freq;
+ 		scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate);
+ 	}
+@@ -1077,9 +1076,16 @@ static void intel_pstate_hwp_offline(struct cpudata *cpu)
+ 		 */
+ 		value &= ~GENMASK_ULL(31, 24);
+ 		value |= HWP_ENERGY_PERF_PREFERENCE(cpu->epp_cached);
+-		WRITE_ONCE(cpu->hwp_req_cached, value);
+ 	}
+ 
++	/*
++	 * Clear the desired perf field in the cached HWP request value to
++	 * prevent nonzero desired values from being leaked into the active
++	 * mode.
++	 */
++	value &= ~HWP_DESIRED_PERF(~0L);
++	WRITE_ONCE(cpu->hwp_req_cached, value);
++
+ 	value &= ~GENMASK_ULL(31, 0);
+ 	min_perf = HWP_LOWEST_PERF(READ_ONCE(cpu->hwp_cap_cached));
+ 
+@@ -2948,6 +2954,27 @@ static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+ 	return intel_pstate_cpu_exit(policy);
+ }
+ 
++static int intel_cpufreq_suspend(struct cpufreq_policy *policy)
++{
++	intel_pstate_suspend(policy);
++
++	if (hwp_active) {
++		struct cpudata *cpu = all_cpu_data[policy->cpu];
++		u64 value = READ_ONCE(cpu->hwp_req_cached);
++
++		/*
++		 * Clear the desired perf field in MSR_HWP_REQUEST in case
++		 * intel_cpufreq_adjust_perf() is in use and the last value
++		 * written by it may not be suitable.
++		 */
++		value &= ~HWP_DESIRED_PERF(~0L);
++		wrmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
++		WRITE_ONCE(cpu->hwp_req_cached, value);
++	}
++
++	return 0;
++}
++
+ static struct cpufreq_driver intel_cpufreq = {
+ 	.flags		= CPUFREQ_CONST_LOOPS,
+ 	.verify		= intel_cpufreq_verify_policy,
+@@ -2957,7 +2984,7 @@ static struct cpufreq_driver intel_cpufreq = {
+ 	.exit		= intel_cpufreq_cpu_exit,
+ 	.offline	= intel_cpufreq_cpu_offline,
+ 	.online		= intel_pstate_cpu_online,
+-	.suspend	= intel_pstate_suspend,
++	.suspend	= intel_cpufreq_suspend,
+ 	.resume		= intel_pstate_resume,
+ 	.update_limits	= intel_pstate_update_limits,
+ 	.name		= "intel_cpufreq",
+diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
+index 53ec9585ccd44..469e18547d06c 100644
+--- a/drivers/cpuidle/sysfs.c
++++ b/drivers/cpuidle/sysfs.c
+@@ -488,6 +488,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
+ 					   &kdev->kobj, "state%d", i);
+ 		if (ret) {
+ 			kobject_put(&kobj->kobj);
++			kfree(kobj);
+ 			goto error_state;
+ 		}
+ 		cpuidle_add_s2idle_attr_group(kobj);
+@@ -619,6 +620,7 @@ static int cpuidle_add_driver_sysfs(struct cpuidle_device *dev)
+ 				   &kdev->kobj, "driver");
+ 	if (ret) {
+ 		kobject_put(&kdrv->kobj);
++		kfree(kdrv);
+ 		return ret;
+ 	}
+ 
+@@ -705,7 +707,6 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ 	if (!kdev)
+ 		return -ENOMEM;
+ 	kdev->dev = dev;
+-	dev->kobj_dev = kdev;
+ 
+ 	init_completion(&kdev->kobj_unregister);
+ 
+@@ -713,9 +714,11 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ 				   "cpuidle");
+ 	if (error) {
+ 		kobject_put(&kdev->kobj);
++		kfree(kdev);
+ 		return error;
+ 	}
+ 
++	dev->kobj_dev = kdev;
+ 	kobject_uevent(&kdev->kobj, KOBJ_ADD);
+ 
+ 	return 0;
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index e313233ec6de7..bf6275ffc4aad 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -1153,16 +1153,27 @@ static struct caam_akcipher_alg caam_rsa = {
+ int caam_pkc_init(struct device *ctrldev)
+ {
+ 	struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
+-	u32 pk_inst;
++	u32 pk_inst, pkha;
+ 	int err;
+ 	init_done = false;
+ 
+ 	/* Determine public key hardware accelerator presence. */
+-	if (priv->era < 10)
++	if (priv->era < 10) {
+ 		pk_inst = (rd_reg32(&priv->ctrl->perfmon.cha_num_ls) &
+ 			   CHA_ID_LS_PK_MASK) >> CHA_ID_LS_PK_SHIFT;
+-	else
+-		pk_inst = rd_reg32(&priv->ctrl->vreg.pkha) & CHA_VER_NUM_MASK;
++	} else {
++		pkha = rd_reg32(&priv->ctrl->vreg.pkha);
++		pk_inst = pkha & CHA_VER_NUM_MASK;
++
++		/*
++		 * Newer CAAMs support partially disabled functionality. If this is the
++		 * case, the number is non-zero, but this bit is set to indicate that
++		 * no encryption or decryption is supported. Only signing and verifying
++		 * is supported.
++		 */
++		if (pkha & CHA_VER_MISC_PKHA_NO_CRYPT)
++			pk_inst = 0;
++	}
+ 
+ 	/* Do not register algorithms if PKHA is not present. */
+ 	if (!pk_inst)
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index af61f3a2c0d46..3738625c02509 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -322,6 +322,9 @@ struct version_regs {
+ /* CHA Miscellaneous Information - AESA_MISC specific */
+ #define CHA_VER_MISC_AES_GCM	BIT(1 + CHA_VER_MISC_SHIFT)
+ 
++/* CHA Miscellaneous Information - PKHA_MISC specific */
++#define CHA_VER_MISC_PKHA_NO_CRYPT	BIT(7 + CHA_VER_MISC_SHIFT)
++
+ /*
+  * caam_perfmon - Performance Monitor/Secure Memory Status/
+  *                CAAM Global Status/Component Version IDs
+diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
+index e599ac6dc162a..790fa9058a36d 100644
+--- a/drivers/crypto/ccree/cc_driver.c
++++ b/drivers/crypto/ccree/cc_driver.c
+@@ -103,7 +103,8 @@ MODULE_DEVICE_TABLE(of, arm_ccree_dev_of_match);
+ static void init_cc_cache_params(struct cc_drvdata *drvdata)
+ {
+ 	struct device *dev = drvdata_to_dev(drvdata);
+-	u32 cache_params, ace_const, val, mask;
++	u32 cache_params, ace_const, val;
++	u64 mask;
+ 
+ 	/* compute CC_AXIM_CACHE_PARAMS */
+ 	cache_params = cc_ioread(drvdata, CC_REG(AXIM_CACHE_PARAMS));
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+index a72723455df72..877a948469bd1 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+@@ -1274,6 +1274,7 @@ static int aead_do_fallback(struct aead_request *req, bool is_enc)
+ 					  req->base.complete, req->base.data);
+ 		aead_request_set_crypt(&rctx->fbk_req, req->src,
+ 				       req->dst, req->cryptlen, req->iv);
++		aead_request_set_ad(&rctx->fbk_req, req->assoclen);
+ 		ret = is_enc ? crypto_aead_encrypt(&rctx->fbk_req) :
+ 			       crypto_aead_decrypt(&rctx->fbk_req);
+ 	} else {
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index efa4bffb4f601..e3da97286980e 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -150,6 +150,13 @@ static int __adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 		val = ADF_CSR_RD(pmisc_bar_addr, pf2vf_offset);
+ 	} while ((val & int_bit) && (count++ < ADF_IOV_MSG_ACK_MAX_RETRY));
+ 
++	if (val != msg) {
++		dev_dbg(&GET_DEV(accel_dev),
++			"Collision - PFVF CSR overwritten by remote function\n");
++		ret = -EIO;
++		goto out;
++	}
++
+ 	if (val & int_bit) {
+ 		dev_dbg(&GET_DEV(accel_dev), "ACK not received from remote\n");
+ 		val &= ~int_bit;
+@@ -198,6 +205,11 @@ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ 
+ 	/* Read message from the VF */
+ 	msg = ADF_CSR_RD(pmisc_addr, hw_data->get_pf2vf_offset(vf_nr));
++	if (!(msg & ADF_VF2PF_INT)) {
++		dev_info(&GET_DEV(accel_dev),
++			 "Spurious VF2PF interrupt, msg %X. Ignored\n", msg);
++		goto out;
++	}
+ 
+ 	/* To ACK, clear the VF2PFINT bit */
+ 	msg &= ~ADF_VF2PF_INT;
+@@ -281,6 +293,7 @@ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ 	if (resp && adf_iov_putmsg(accel_dev, resp, vf_nr))
+ 		dev_err(&GET_DEV(accel_dev), "Failed to send response to VF\n");
+ 
++out:
+ 	/* re-enable interrupt on PF from this VF */
+ 	adf_enable_vf2pf_interrupts(accel_dev, (1 << vf_nr));
+ 	return;
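
The collision check added above exists because PF and VF share a single
mailbox CSR: the sender writes the register, waits for the receiver to clear
the interrupt bit, then re-reads; if the contents no longer match, the remote
end overwrote it mid-handshake. A toy model of that rule (all names are
illustrative, none of this is the real QAT API):

#include <stdio.h>

static unsigned int mailbox;	/* stands in for the shared PF2VF CSR */

static int send_msg(unsigned int msg)
{
	mailbox = msg;			/* CSR write */
	/* ... ACK-polling loop would run here ... */
	if (mailbox != msg)		/* peer clobbered the CSR */
		return -1;
	return 0;
}

int main(void)
{
	printf("%d\n", send_msg(0x10));	/* 0: no collision in this toy run */
	return 0;
}
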
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 3e4f64d248f9b..35e952092b4a8 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -79,6 +79,11 @@ static void adf_pf2vf_bh_handler(void *data)
+ 
+ 	/* Read the message from PF */
+ 	msg = ADF_CSR_RD(pmisc_bar_addr, hw_data->get_pf2vf_offset(0));
++	if (!(msg & ADF_PF2VF_INT)) {
++		dev_info(&GET_DEV(accel_dev),
++			 "Spurious PF2VF interrupt, msg %X. Ignored\n", msg);
++		goto out;
++	}
+ 
+ 	if (!(msg & ADF_PF2VF_MSGORIGIN_SYSTEM))
+ 		/* Ignore legacy non-system (non-kernel) PF2VF messages */
+@@ -127,6 +132,7 @@ static void adf_pf2vf_bh_handler(void *data)
+ 	msg &= ~ADF_PF2VF_INT;
+ 	ADF_CSR_WR(pmisc_bar_addr, hw_data->get_pf2vf_offset(0), msg);
+ 
++out:
+ 	/* Re-enable PF2VF interrupts */
+ 	adf_enable_pf2vf_interrupts(accel_dev);
+ 	return;
+diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
+index 55aa3a71169b0..7717e9e5977bb 100644
+--- a/drivers/crypto/s5p-sss.c
++++ b/drivers/crypto/s5p-sss.c
+@@ -2171,6 +2171,8 @@ static int s5p_aes_probe(struct platform_device *pdev)
+ 
+ 	variant = find_s5p_sss_version(pdev);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Note: HASH and PRNG uses the same registers in secss, avoid
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index e809596049b66..59c2e3c623cb9 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -970,7 +970,7 @@ static void __iomem *cxl_mem_map_regblock(struct cxl_mem *cxlm,
+ 	if (pci_resource_len(pdev, bar) < offset) {
+ 		dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
+ 			&pdev->resource[bar], (unsigned long long)offset);
+-		return IOMEM_ERR_PTR(-ENXIO);
++		return NULL;
+ 	}
+ 
+ 	addr = pci_iomap(pdev, bar, 0);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 511fe0d217a08..733c8b1c8467c 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -79,6 +79,7 @@ static void dma_buf_release(struct dentry *dentry)
+ 	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
+ 		dma_resv_fini(dmabuf->resv);
+ 
++	WARN_ON(!list_empty(&dmabuf->attachments));
+ 	module_put(dmabuf->owner);
+ 	kfree(dmabuf->name);
+ 	kfree(dmabuf);
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 64a52bf4d7377..9089b67b3e468 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -155,7 +155,7 @@
+ #define		AT_XDMAC_CC_WRIP	(0x1 << 23)	/* Write in Progress (read only) */
+ #define			AT_XDMAC_CC_WRIP_DONE		(0x0 << 23)
+ #define			AT_XDMAC_CC_WRIP_IN_PROGRESS	(0x1 << 23)
+-#define		AT_XDMAC_CC_PERID(i)	(0x7f & (i) << 24)	/* Channel Peripheral Identifier */
++#define		AT_XDMAC_CC_PERID(i)	((0x7f & (i)) << 24)	/* Channel Peripheral Identifier */
+ #define AT_XDMAC_CDS_MSP	0x2C	/* Channel Data Stride Memory Set Pattern */
+ #define AT_XDMAC_CSUS		0x30	/* Channel Source Microblock Stride */
+ #define AT_XDMAC_CDUS		0x34	/* Channel Destination Microblock Stride */
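
The PERID hunk above is a pure operator-precedence fix: C's shift operators
bind tighter than bitwise AND, so the old macro evaluated 0x7f & ((i) << 24)
and returned 0 for every peripheral ID. A standalone check (the ID value is
hypothetical):

#include <stdio.h>

#define PERID_BROKEN(i)	(0x7f & (i) << 24)	/* masks after shifting */
#define PERID_FIXED(i)	((0x7f & (i)) << 24)	/* masks, then shifts   */

int main(void)
{
	unsigned int id = 5;	/* hypothetical peripheral ID */

	printf("broken: 0x%08x\n", PERID_BROKEN(id));	/* 0x00000000 */
	printf("fixed:  0x%08x\n", PERID_FIXED(id));	/* 0x05000000 */
	return 0;
}
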
+@@ -1926,6 +1926,30 @@ static void at_xdmac_free_chan_resources(struct dma_chan *chan)
+ 	return;
+ }
+ 
++static void at_xdmac_axi_config(struct platform_device *pdev)
++{
++	struct at_xdmac	*atxdmac = (struct at_xdmac *)platform_get_drvdata(pdev);
++	bool dev_m2m = false;
++	u32 dma_requests;
++
++	if (!atxdmac->layout->axi_config)
++		return; /* Not supported */
++
++	if (!of_property_read_u32(pdev->dev.of_node, "dma-requests",
++				  &dma_requests)) {
++		dev_info(&pdev->dev, "controller in mem2mem mode.\n");
++		dev_m2m = true;
++	}
++
++	if (dev_m2m) {
++		at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_M2M);
++		at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_M2M);
++	} else {
++		at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_P2M);
++		at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_P2M);
++	}
++}
++
+ #ifdef CONFIG_PM
+ static int atmel_xdmac_prepare(struct device *dev)
+ {
+@@ -1975,6 +1999,7 @@ static int atmel_xdmac_resume(struct device *dev)
+ 	struct at_xdmac		*atxdmac = dev_get_drvdata(dev);
+ 	struct at_xdmac_chan	*atchan;
+ 	struct dma_chan		*chan, *_chan;
++	struct platform_device	*pdev = container_of(dev, struct platform_device, dev);
+ 	int			i;
+ 	int ret;
+ 
+@@ -1982,6 +2007,8 @@ static int atmel_xdmac_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	at_xdmac_axi_config(pdev);
++
+ 	/* Clear pending interrupts. */
+ 	for (i = 0; i < atxdmac->dma.chancnt; i++) {
+ 		atchan = &atxdmac->chan[i];
+@@ -2007,30 +2034,6 @@ static int atmel_xdmac_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM_SLEEP */
+ 
+-static void at_xdmac_axi_config(struct platform_device *pdev)
+-{
+-	struct at_xdmac	*atxdmac = (struct at_xdmac *)platform_get_drvdata(pdev);
+-	bool dev_m2m = false;
+-	u32 dma_requests;
+-
+-	if (!atxdmac->layout->axi_config)
+-		return; /* Not supported */
+-
+-	if (!of_property_read_u32(pdev->dev.of_node, "dma-requests",
+-				  &dma_requests)) {
+-		dev_info(&pdev->dev, "controller in mem2mem mode.\n");
+-		dev_m2m = true;
+-	}
+-
+-	if (dev_m2m) {
+-		at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_M2M);
+-		at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_M2M);
+-	} else {
+-		at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_P2M);
+-		at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_P2M);
+-	}
+-}
+-
+ static int at_xdmac_probe(struct platform_device *pdev)
+ {
+ 	struct at_xdmac	*atxdmac;
+diff --git a/drivers/dma/bestcomm/ata.c b/drivers/dma/bestcomm/ata.c
+index 2fd87f83cf90b..e169f18da551f 100644
+--- a/drivers/dma/bestcomm/ata.c
++++ b/drivers/dma/bestcomm/ata.c
+@@ -133,7 +133,7 @@ void bcom_ata_reset_bd(struct bcom_task *tsk)
+ 	struct bcom_ata_var *var;
+ 
+ 	/* Reset all BD */
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+diff --git a/drivers/dma/bestcomm/bestcomm.c b/drivers/dma/bestcomm/bestcomm.c
+index d91cbbe7a48fb..8c42e5ca00a99 100644
+--- a/drivers/dma/bestcomm/bestcomm.c
++++ b/drivers/dma/bestcomm/bestcomm.c
+@@ -95,7 +95,7 @@ bcom_task_alloc(int bd_count, int bd_size, int priv_size)
+ 		tsk->bd = bcom_sram_alloc(bd_count * bd_size, 4, &tsk->bd_pa);
+ 		if (!tsk->bd)
+ 			goto error;
+-		memset(tsk->bd, 0x00, bd_count * bd_size);
++		memset_io(tsk->bd, 0x00, bd_count * bd_size);
+ 
+ 		tsk->num_bd = bd_count;
+ 		tsk->bd_size = bd_size;
+@@ -186,16 +186,16 @@ bcom_load_image(int task, u32 *task_image)
+ 	inc = bcom_task_inc(task);
+ 
+ 	/* Clear & copy */
+-	memset(var, 0x00, BCOM_VAR_SIZE);
+-	memset(inc, 0x00, BCOM_INC_SIZE);
++	memset_io(var, 0x00, BCOM_VAR_SIZE);
++	memset_io(inc, 0x00, BCOM_INC_SIZE);
+ 
+ 	desc_src = (u32 *)(hdr + 1);
+ 	var_src = desc_src + hdr->desc_size;
+ 	inc_src = var_src + hdr->var_size;
+ 
+-	memcpy(desc, desc_src, hdr->desc_size * sizeof(u32));
+-	memcpy(var + hdr->first_var, var_src, hdr->var_size * sizeof(u32));
+-	memcpy(inc, inc_src, hdr->inc_size * sizeof(u32));
++	memcpy_toio(desc, desc_src, hdr->desc_size * sizeof(u32));
++	memcpy_toio(var + hdr->first_var, var_src, hdr->var_size * sizeof(u32));
++	memcpy_toio(inc, inc_src, hdr->inc_size * sizeof(u32));
+ 
+ 	return 0;
+ }
+@@ -302,13 +302,13 @@ static int bcom_engine_init(void)
+ 		return -ENOMEM;
+ 	}
+ 
+-	memset(bcom_eng->tdt, 0x00, tdt_size);
+-	memset(bcom_eng->ctx, 0x00, ctx_size);
+-	memset(bcom_eng->var, 0x00, var_size);
+-	memset(bcom_eng->fdt, 0x00, fdt_size);
++	memset_io(bcom_eng->tdt, 0x00, tdt_size);
++	memset_io(bcom_eng->ctx, 0x00, ctx_size);
++	memset_io(bcom_eng->var, 0x00, var_size);
++	memset_io(bcom_eng->fdt, 0x00, fdt_size);
+ 
+ 	/* Copy the FDT for the EU#3 */
+-	memcpy(&bcom_eng->fdt[48], fdt_ops, sizeof(fdt_ops));
++	memcpy_toio(&bcom_eng->fdt[48], fdt_ops, sizeof(fdt_ops));
+ 
+ 	/* Initialize Task base structure */
+ 	for (task=0; task<BCOM_MAX_TASKS; task++)
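
All the bestcomm conversions above follow one rule: these descriptors live in
on-chip SRAM mapped as __iomem, and ordinary memset()/memcpy() on such memory
is not portable (and trips sparse's address-space checks), so every access
goes through the _io/_toio accessors instead. The idiom, sketched with a
placeholder buffer:

#include <linux/io.h>
#include <linux/types.h>

/* Sketch only: 'buf' is assumed to be an ioremap()ed SRAM region. */
static void clear_dev_buf(void __iomem *buf, size_t size)
{
	/* memset(buf, 0, size) would be wrong for __iomem memory */
	memset_io(buf, 0, size);
}
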
+diff --git a/drivers/dma/bestcomm/fec.c b/drivers/dma/bestcomm/fec.c
+index 7f1fb1c999e43..d203618ac11fe 100644
+--- a/drivers/dma/bestcomm/fec.c
++++ b/drivers/dma/bestcomm/fec.c
+@@ -140,7 +140,7 @@ bcom_fec_rx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_FEC_RX_BD_PRAGMA);
+@@ -241,7 +241,7 @@ bcom_fec_tx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_FEC_TX_BD_PRAGMA);
+diff --git a/drivers/dma/bestcomm/gen_bd.c b/drivers/dma/bestcomm/gen_bd.c
+index 906ddba6a6f5d..8a24a5cbc2633 100644
+--- a/drivers/dma/bestcomm/gen_bd.c
++++ b/drivers/dma/bestcomm/gen_bd.c
+@@ -142,7 +142,7 @@ bcom_gen_bd_rx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_GEN_RX_BD_PRAGMA);
+@@ -226,7 +226,7 @@ bcom_gen_bd_tx_reset(struct bcom_task *tsk)
+ 	tsk->index = 0;
+ 	tsk->outdex = 0;
+ 
+-	memset(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
++	memset_io(tsk->bd, 0x00, tsk->num_bd * tsk->bd_size);
+ 
+ 	/* Configure some stuff */
+ 	bcom_set_task_pragma(tsk->tasknum, BCOM_GEN_TX_BD_PRAGMA);
+diff --git a/drivers/dma/dmaengine.h b/drivers/dma/dmaengine.h
+index 1bfbd64b13717..53f16d3f00294 100644
+--- a/drivers/dma/dmaengine.h
++++ b/drivers/dma/dmaengine.h
+@@ -176,7 +176,7 @@ dmaengine_desc_get_callback_invoke(struct dma_async_tx_descriptor *tx,
+ static inline bool
+ dmaengine_desc_callback_valid(struct dmaengine_desc_callback *cb)
+ {
+-	return (cb->callback) ? true : false;
++	return cb->callback || cb->callback_result;
+ }
+ 
+ struct dma_chan *dma_get_slave_channel(struct dma_chan *chan);
+diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
+index 7dd1d3d0bf063..bf3042b655485 100644
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -268,7 +268,6 @@ static enum dma_slave_buswidth stm32_dma_get_max_width(u32 buf_len,
+ 						       u32 threshold)
+ {
+ 	enum dma_slave_buswidth max_width;
+-	u64 addr = buf_addr;
+ 
+ 	if (threshold == STM32_DMA_FIFO_THRESHOLD_FULL)
+ 		max_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+@@ -279,7 +278,7 @@ static enum dma_slave_buswidth stm32_dma_get_max_width(u32 buf_len,
+ 	       max_width > DMA_SLAVE_BUSWIDTH_1_BYTE)
+ 		max_width = max_width >> 1;
+ 
+-	if (do_div(addr, max_width))
++	if (buf_addr & (max_width - 1))
+ 		max_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 
+ 	return max_width;
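
The do_div() removal above works because max_width is always a power of two
(1, 2 or 4 bytes), so the alignment test reduces to a bit mask and no 64-bit
division helper is needed. Quick check with hypothetical values:

#include <stdio.h>

/* power-of-two alignment test: true iff addr is a multiple of width */
static int is_aligned(unsigned int addr, unsigned int width)
{
	return (addr & (width - 1)) == 0;
}

int main(void)
{
	printf("%d %d\n", is_aligned(0x1004, 4), is_aligned(0x1002, 4));
	/* prints: 1 0 */
	return 0;
}
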
+@@ -751,8 +750,14 @@ static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
+ 		if (src_bus_width < 0)
+ 			return src_bus_width;
+ 
+-		/* Set memory burst size */
+-		src_maxburst = STM32_DMA_MAX_BURST;
++		/*
++		 * Set memory burst size - burst not possible if address is not aligned on
++		 * the address boundary equal to the size of the transfer
++		 */
++		if (buf_addr & (buf_len - 1))
++			src_maxburst = 1;
++		else
++			src_maxburst = STM32_DMA_MAX_BURST;
+ 		src_best_burst = stm32_dma_get_best_burst(buf_len,
+ 							  src_maxburst,
+ 							  fifoth,
+@@ -801,8 +806,14 @@ static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
+ 		if (dst_bus_width < 0)
+ 			return dst_bus_width;
+ 
+-		/* Set memory burst size */
+-		dst_maxburst = STM32_DMA_MAX_BURST;
++		/*
++		 * Set memory burst size - burst not possible if address is not aligned on
++		 * the address boundary equal to the size of the transfer
++		 */
++		if (buf_addr & (buf_len - 1))
++			dst_maxburst = 1;
++		else
++			dst_maxburst = STM32_DMA_MAX_BURST;
+ 		dst_best_burst = stm32_dma_get_best_burst(buf_len,
+ 							  dst_maxburst,
+ 							  fifoth,
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index a35858610780c..041d8e32d6300 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -1348,6 +1348,7 @@ static int bcdma_get_bchan(struct udma_chan *uc)
+ {
+ 	struct udma_dev *ud = uc->ud;
+ 	enum udma_tp_level tpl;
++	int ret;
+ 
+ 	if (uc->bchan) {
+ 		dev_dbg(ud->dev, "chan%d: already have bchan%d allocated\n",
+@@ -1365,8 +1366,11 @@ static int bcdma_get_bchan(struct udma_chan *uc)
+ 		tpl = ud->bchan_tpl.levels - 1;
+ 
+ 	uc->bchan = __udma_reserve_bchan(ud, tpl, -1);
+-	if (IS_ERR(uc->bchan))
+-		return PTR_ERR(uc->bchan);
++	if (IS_ERR(uc->bchan)) {
++		ret = PTR_ERR(uc->bchan);
++		uc->bchan = NULL;
++		return ret;
++	}
+ 
+ 	uc->tchan = uc->bchan;
+ 
+@@ -1376,6 +1380,7 @@ static int bcdma_get_bchan(struct udma_chan *uc)
+ static int udma_get_tchan(struct udma_chan *uc)
+ {
+ 	struct udma_dev *ud = uc->ud;
++	int ret;
+ 
+ 	if (uc->tchan) {
+ 		dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n",
+@@ -1390,8 +1395,11 @@ static int udma_get_tchan(struct udma_chan *uc)
+ 	 */
+ 	uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl,
+ 					 uc->config.mapped_channel_id);
+-	if (IS_ERR(uc->tchan))
+-		return PTR_ERR(uc->tchan);
++	if (IS_ERR(uc->tchan)) {
++		ret = PTR_ERR(uc->tchan);
++		uc->tchan = NULL;
++		return ret;
++	}
+ 
+ 	if (ud->tflow_cnt) {
+ 		int tflow_id;
+@@ -1421,6 +1429,7 @@ static int udma_get_tchan(struct udma_chan *uc)
+ static int udma_get_rchan(struct udma_chan *uc)
+ {
+ 	struct udma_dev *ud = uc->ud;
++	int ret;
+ 
+ 	if (uc->rchan) {
+ 		dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n",
+@@ -1435,8 +1444,13 @@ static int udma_get_rchan(struct udma_chan *uc)
+ 	 */
+ 	uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl,
+ 					 uc->config.mapped_channel_id);
++	if (IS_ERR(uc->rchan)) {
++		ret = PTR_ERR(uc->rchan);
++		uc->rchan = NULL;
++		return ret;
++	}
+ 
+-	return PTR_ERR_OR_ZERO(uc->rchan);
++	return 0;
+ }
+ 
+ static int udma_get_chan_pair(struct udma_chan *uc)
+@@ -1490,6 +1504,7 @@ static int udma_get_chan_pair(struct udma_chan *uc)
+ static int udma_get_rflow(struct udma_chan *uc, int flow_id)
+ {
+ 	struct udma_dev *ud = uc->ud;
++	int ret;
+ 
+ 	if (!uc->rchan) {
+ 		dev_err(ud->dev, "chan%d: does not have rchan??\n", uc->id);
+@@ -1503,8 +1518,13 @@ static int udma_get_rflow(struct udma_chan *uc, int flow_id)
+ 	}
+ 
+ 	uc->rflow = __udma_get_rflow(ud, flow_id);
++	if (IS_ERR(uc->rflow)) {
++		ret = PTR_ERR(uc->rflow);
++		uc->rflow = NULL;
++		return ret;
++	}
+ 
+-	return PTR_ERR_OR_ZERO(uc->rflow);
++	return 0;
+ }
+ 
+ static void bcdma_put_bchan(struct udma_chan *uc)
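
The four hunks above share one fix: when a reserve helper returns an
ERR_PTR(), the cached channel/flow pointer must be reset to NULL before
propagating the error, or a later teardown path would treat the error value
as a live object and dereference it. The idiom, sketched with stand-ins
('struct chan' and reserve_chan() are illustrative, not driver API):

#include <linux/err.h>

struct chan;
struct chan *reserve_chan(void);	/* returns pointer or ERR_PTR() */

static int get_chan(struct chan **slot)
{
	struct chan *c = reserve_chan();

	if (IS_ERR(c)) {
		int ret = PTR_ERR(c);

		*slot = NULL;	/* never leave the error value cached */
		return ret;
	}

	*slot = c;
	return 0;
}
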
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index f0d8f60acee10..b31ee9b0d2c03 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -1070,12 +1070,14 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan)
+ #define CS_ODD_PRIMARY		BIT(1)
+ #define CS_EVEN_SECONDARY	BIT(2)
+ #define CS_ODD_SECONDARY	BIT(3)
++#define CS_3R_INTERLEAVE	BIT(4)
+ 
+ #define CS_EVEN			(CS_EVEN_PRIMARY | CS_EVEN_SECONDARY)
+ #define CS_ODD			(CS_ODD_PRIMARY | CS_ODD_SECONDARY)
+ 
+ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ {
++	u8 base, count = 0;
+ 	int cs_mode = 0;
+ 
+ 	if (csrow_enabled(2 * dimm, ctrl, pvt))
+@@ -1088,6 +1090,20 @@ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
+ 	if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt))
+ 		cs_mode |= CS_ODD_SECONDARY;
+ 
++	/*
++	 * 3 Rank interleaving support.
++	 * There should be only three bases enabled and their two masks should
++	 * be equal.
++	 */
++	for_each_chip_select(base, ctrl, pvt)
++		count += csrow_enabled(base, ctrl, pvt);
++
++	if (count == 3 &&
++	    pvt->csels[ctrl].csmasks[0] == pvt->csels[ctrl].csmasks[1]) {
++		edac_dbg(1, "3R interleaving in use.\n");
++		cs_mode |= CS_3R_INTERLEAVE;
++	}
++
+ 	return cs_mode;
+ }
+ 
+@@ -1896,10 +1912,14 @@ static int f17_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
+ 	 *
+ 	 * The MSB is the number of bits in the full mask because BIT[0] is
+ 	 * always 0.
++	 *
++	 * In the special 3 Rank interleaving case, a single bit is flipped
++	 * without swapping with the most significant bit. This can be handled
++	 * by keeping the MSB where it is and ignoring the single zero bit.
+ 	 */
+ 	msb = fls(addr_mask_orig) - 1;
+ 	weight = hweight_long(addr_mask_orig);
+-	num_zero_bits = msb - weight;
++	num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE);
+ 
+ 	/* Take the number of zero bits off from the top of the mask. */
+ 	addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1);
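
The arithmetic above is easy to verify by hand. With a hypothetical mask
whose bit 5 is the interleave hole (assumes 64-bit long):

#include <stdio.h>

int main(void)
{
	unsigned long mask = 0x7de;	/* bits 10..1 set, bit 5 clear */
	int msb    = 63 - __builtin_clzl(mask);		/* 10 */
	int weight = __builtin_popcountl(mask);		/* 9  */

	/* 2R: the single zero bit is an interleave bit, strip it */
	printf("2R zero bits: %d\n", msb - weight);		/* 1 */
	/* 3R: the flipped bit never swaps into the MSB, ignore it */
	printf("3R zero bits: %d\n", msb - weight - 1);		/* 0 */
	return 0;
}
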
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 4c626fcd4dcbb..1522d4aa2ca62 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -1052,7 +1052,7 @@ static u64 haswell_get_tohm(struct sbridge_pvt *pvt)
+ 	pci_read_config_dword(pvt->info.pci_vtd, HASWELL_TOHM_1, &reg);
+ 	rc = ((reg << 6) | rc) << 26;
+ 
+-	return rc | 0x1ffffff;
++	return rc | 0x3ffffff;
+ }
+ 
+ static u64 knl_get_tolm(struct sbridge_pvt *pvt)
+diff --git a/drivers/firmware/psci/psci_checker.c b/drivers/firmware/psci/psci_checker.c
+index 9a369a2eda71d..116eb465cdb42 100644
+--- a/drivers/firmware/psci/psci_checker.c
++++ b/drivers/firmware/psci/psci_checker.c
+@@ -155,7 +155,7 @@ static int alloc_init_cpu_groups(cpumask_var_t **pcpu_groups)
+ 	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+-	cpu_groups = kcalloc(nb_available_cpus, sizeof(cpu_groups),
++	cpu_groups = kcalloc(nb_available_cpus, sizeof(*cpu_groups),
+ 			     GFP_KERNEL);
+ 	if (!cpu_groups) {
+ 		free_cpumask_var(tmp);
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 47ea2bd42b100..0aa0fe86ca8c7 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -252,7 +252,7 @@ static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+ 		break;
+ 	default:
+ 		pr_err("Unknown SMC convention being used\n");
+-		return -EINVAL;
++		return false;
+ 	}
+ 
+ 	ret = qcom_scm_call(dev, &desc, &res);
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index befa5e1099439..d4b250b470b41 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -268,6 +268,11 @@ mlxbf2_gpio_probe(struct platform_device *pdev)
+ 			NULL,
+ 			0);
+ 
++	if (ret) {
++		dev_err(dev, "bgpio_init failed\n");
++		return ret;
++	}
++
+ 	gc->direction_input = mlxbf2_gpio_direction_input;
+ 	gc->direction_output = mlxbf2_gpio_direction_output;
+ 	gc->ngpio = npins;
+diff --git a/drivers/gpio/gpio-realtek-otto.c b/drivers/gpio/gpio-realtek-otto.c
+index cb64fb5a51aa1..e0cbaa1ea22ec 100644
+--- a/drivers/gpio/gpio-realtek-otto.c
++++ b/drivers/gpio/gpio-realtek-otto.c
+@@ -206,7 +206,7 @@ static void realtek_gpio_irq_handler(struct irq_desc *desc)
+ 		status = realtek_gpio_read_isr(ctrl, lines_done / 8);
+ 		port_pin_count = min(gc->ngpio - lines_done, 8U);
+ 		for_each_set_bit(offset, &status, port_pin_count) {
+-			irq = irq_find_mapping(gc->irq.domain, offset);
++			irq = irq_find_mapping(gc->irq.domain, offset + lines_done);
+ 			generic_handle_irq(irq);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 7ff89690a976a..0bba672176b10 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -97,9 +97,8 @@ config DRM_DEBUG_DP_MST_TOPOLOGY_REFS
+ 
+ config DRM_FBDEV_EMULATION
+ 	bool "Enable legacy fbdev support for your modesetting driver"
+-	depends on DRM
+-	depends on FB
+-	select DRM_KMS_HELPER
++	depends on DRM_KMS_HELPER
++	depends on FB=y || FB=DRM_KMS_HELPER
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index b18c0697356c1..05a3c021738a1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1490,7 +1490,7 @@ allocate_init_user_pages_failed:
+ 	remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info);
+ 	drm_vma_node_revoke(&gobj->vma_node, drm_priv);
+ err_node_allow:
+-	amdgpu_bo_unref(&bo);
++	drm_gem_object_put(gobj);
+ 	/* Don't unreserve system mem limit twice */
+ 	goto err_reserve_limit;
+ err_bo_create:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+index 15c45b2a39835..714178f1b6c6e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+@@ -61,7 +61,7 @@ static void amdgpu_bo_list_free(struct kref *ref)
+ 
+ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
+ 			  struct drm_amdgpu_bo_list_entry *info,
+-			  unsigned num_entries, struct amdgpu_bo_list **result)
++			  size_t num_entries, struct amdgpu_bo_list **result)
+ {
+ 	unsigned last_entry = 0, first_userptr = num_entries;
+ 	struct amdgpu_bo_list_entry *array;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
+index a130e766cbdbe..529d52a204cf4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.h
+@@ -60,7 +60,7 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
+ int amdgpu_bo_list_create(struct amdgpu_device *adev,
+ 				 struct drm_file *filp,
+ 				 struct drm_amdgpu_bo_list_entry *info,
+-				 unsigned num_entries,
++				 size_t num_entries,
+ 				 struct amdgpu_bo_list **list);
+ 
+ static inline struct amdgpu_bo_list_entry *
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 08e53ff747282..f03247e2af78f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2689,6 +2689,11 @@ static int amdgpu_device_ip_fini_early(struct amdgpu_device *adev)
+ 		adev->ip_blocks[i].status.hw = false;
+ 	}
+ 
++	if (amdgpu_sriov_vf(adev)) {
++		if (amdgpu_virt_release_full_gpu(adev, false))
++			DRM_ERROR("failed to release exclusive mode on fini\n");
++	}
++
+ 	return 0;
+ }
+ 
+@@ -2749,10 +2754,6 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev)
+ 
+ 	amdgpu_ras_fini(adev);
+ 
+-	if (amdgpu_sriov_vf(adev))
+-		if (amdgpu_virt_release_full_gpu(adev, false))
+-			DRM_ERROR("failed to release exclusive mode on fini\n");
+-
+ 	return 0;
+ }
+ 
+@@ -3817,8 +3818,8 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+ 
+ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ {
+-	amdgpu_device_ip_fini(adev);
+ 	amdgpu_fence_driver_sw_fini(adev);
++	amdgpu_device_ip_fini(adev);
+ 	release_firmware(adev->firmware.gpu_info_fw);
+ 	adev->firmware.gpu_info_fw = NULL;
+ 	adev->accel_working = false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 9a67746c10edd..b14ff19231b91 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -61,7 +61,7 @@ static vm_fault_t amdgpu_gem_fault(struct vm_fault *vmf)
+ 		}
+ 
+ 		 ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+-						TTM_BO_VM_NUM_PREFAULT, 1);
++						TTM_BO_VM_NUM_PREFAULT);
+ 
+ 		 drm_dev_exit(idx);
+ 	} else {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+index 0e81e03e9b498..0fe714f54cca9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+@@ -841,12 +841,12 @@ static int gmc_v6_0_sw_init(void *handle)
+ 
+ 	adev->gmc.mc_mask = 0xffffffffffULL;
+ 
+-	r = dma_set_mask_and_coherent(adev->dev, DMA_BIT_MASK(44));
++	r = dma_set_mask_and_coherent(adev->dev, DMA_BIT_MASK(40));
+ 	if (r) {
+ 		dev_warn(adev->dev, "No suitable DMA available.\n");
+ 		return r;
+ 	}
+-	adev->need_swiotlb = drm_need_swiotlb(44);
++	adev->need_swiotlb = drm_need_swiotlb(40);
+ 
+ 	r = gmc_v6_0_init_microcode(adev);
+ 	if (r) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index f4686e918e0d1..c405075a572c1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -22,6 +22,7 @@
+  */
+ 
+ #include <linux/firmware.h>
++#include <drm/drm_drv.h>
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_vcn.h"
+@@ -192,11 +193,14 @@ static int vcn_v2_0_sw_init(void *handle)
+  */
+ static int vcn_v2_0_sw_fini(void *handle)
+ {
+-	int r;
++	int r, idx;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	volatile struct amdgpu_fw_shared *fw_shared = adev->vcn.inst->fw_shared_cpu_addr;
+ 
+-	fw_shared->present_flag_0 = 0;
++	if (drm_dev_enter(&adev->ddev, &idx)) {
++		fw_shared->present_flag_0 = 0;
++		drm_dev_exit(idx);
++	}
+ 
+ 	amdgpu_virt_free_mm_table(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index e0c0c3734432e..a0956d8623770 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -22,6 +22,7 @@
+  */
+ 
+ #include <linux/firmware.h>
++#include <drm/drm_drv.h>
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_vcn.h"
+@@ -233,17 +234,20 @@ static int vcn_v2_5_sw_init(void *handle)
+  */
+ static int vcn_v2_5_sw_fini(void *handle)
+ {
+-	int i, r;
++	int i, r, idx;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	volatile struct amdgpu_fw_shared *fw_shared;
+ 
+-	for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+-		if (adev->vcn.harvest_config & (1 << i))
+-			continue;
+-		fw_shared = adev->vcn.inst[i].fw_shared_cpu_addr;
+-		fw_shared->present_flag_0 = 0;
++	if (drm_dev_enter(&adev->ddev, &idx)) {
++		for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
++			if (adev->vcn.harvest_config & (1 << i))
++				continue;
++			fw_shared = adev->vcn.inst[i].fw_shared_cpu_addr;
++			fw_shared->present_flag_0 = 0;
++		}
++		drm_dev_exit(idx);
+ 	}
+ 
+ 	if (amdgpu_sriov_vf(adev))
+ 		amdgpu_virt_free_mm_table(adev);
+ 
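
Both VCN hunks above (and the v2_0 one earlier) wrap the firmware-shared-
memory write in drm_dev_enter()/drm_dev_exit() so it is skipped once the
device has been unplugged. The guard pattern, sketched ('flag' stands in for
the BO-backed fw_shared field):

#include <drm/drm_drv.h>
#include <linux/types.h>

static void clear_present_flag(struct drm_device *ddev, volatile u32 *flag)
{
	int idx;

	if (!drm_dev_enter(ddev, &idx))
		return;		/* device gone: don't touch its memory */

	*flag = 0;
	drm_dev_exit(idx);
}
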
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index ef64fb8f1bbf5..900ea693c71c6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -867,6 +867,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 	kfd_double_confirm_iommu_support(kfd);
+ 
+ 	if (kfd_iommu_device_init(kfd)) {
++		kfd->use_iommu_v2 = false;
+ 		dev_err(kfd_device, "Error initializing iommuv2\n");
+ 		goto device_iommu_error;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index e85035fd1ccb4..b6a19ac2bc607 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1303,7 +1303,7 @@ struct svm_validate_context {
+ 	struct svm_range *prange;
+ 	bool intr;
+ 	unsigned long bitmap[MAX_GPU_INSTANCE];
+-	struct ttm_validate_buffer tv[MAX_GPU_INSTANCE+1];
++	struct ttm_validate_buffer tv[MAX_GPU_INSTANCE];
+ 	struct list_head validate_list;
+ 	struct ww_acquire_ctx ticket;
+ };
+@@ -1330,11 +1330,6 @@ static int svm_range_reserve_bos(struct svm_validate_context *ctx)
+ 		ctx->tv[gpuidx].num_shared = 4;
+ 		list_add(&ctx->tv[gpuidx].head, &ctx->validate_list);
+ 	}
+-	if (ctx->prange->svm_bo && ctx->prange->ttm_res) {
+-		ctx->tv[MAX_GPU_INSTANCE].bo = &ctx->prange->svm_bo->bo->tbo;
+-		ctx->tv[MAX_GPU_INSTANCE].num_shared = 1;
+-		list_add(&ctx->tv[MAX_GPU_INSTANCE].head, &ctx->validate_list);
+-	}
+ 
+ 	r = ttm_eu_reserve_buffers(&ctx->ticket, &ctx->validate_list,
+ 				   ctx->intr, NULL);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index a03d7682cd8f2..d02630d72d5eb 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1143,8 +1143,15 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	case CHIP_RAVEN:
+ 	case CHIP_RENOIR:
+ 		init_data.flags.gpu_vm_support = true;
+-		if (ASICREV_IS_GREEN_SARDINE(adev->external_rev_id))
++		switch (adev->dm.dmcub_fw_version) {
++		case 0: /* development */
++		case 0x1: /* linux-firmware.git hash 6d9f399 */
++		case 0x01000000: /* linux-firmware.git hash 9a0b0f4 */
++			init_data.flags.disable_dmcu = false;
++			break;
++		default:
+ 			init_data.flags.disable_dmcu = true;
++		}
+ 		break;
+ 	case CHIP_VANGOGH:
+ 	case CHIP_YELLOW_CARP:
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 3c8da3665a274..7b418f3f9291c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4681,7 +4681,7 @@ enum dc_status dp_set_fec_ready(struct dc_link *link, bool ready)
+ 				link_enc->funcs->fec_set_ready(link_enc, true);
+ 				link->fec_state = dc_link_fec_ready;
+ 			} else {
+-				link_enc->funcs->fec_set_ready(link->link_enc, false);
++				link_enc->funcs->fec_set_ready(link_enc, false);
+ 				link->fec_state = dc_link_fec_not_ready;
+ 				dm_error("dpcd write failed to set fec_ready");
+ 			}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 75fa4adcf5f40..da7c906ba5eb5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1522,7 +1522,7 @@ void dcn10_power_down_on_boot(struct dc *dc)
+ 		for (i = 0; i < dc->link_count; i++) {
+ 			struct dc_link *link = dc->links[i];
+ 
+-			if (link->link_enc->funcs->is_dig_enabled &&
++			if (link->link_enc && link->link_enc->funcs->is_dig_enabled &&
+ 					link->link_enc->funcs->is_dig_enabled(link->link_enc) &&
+ 					dc->hwss.power_down) {
+ 				dc->hwss.power_down(dc);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index c78933a9d31c1..2c1d3d9b7cc14 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3701,16 +3701,22 @@ static bool init_soc_bounding_box(struct dc *dc,
+ 			clock_limits_available = (status == PP_SMU_RESULT_OK);
+ 		}
+ 
+-		if (clock_limits_available && uclk_states_available && num_states)
++		if (clock_limits_available && uclk_states_available && num_states) {
++			DC_FP_START();
+ 			dcn20_update_bounding_box(dc, loaded_bb, &max_clocks, uclk_states, num_states);
+-		else if (clock_limits_available)
++			DC_FP_END();
++		} else if (clock_limits_available) {
++			DC_FP_START();
+ 			dcn20_cap_soc_clocks(loaded_bb, max_clocks);
++			DC_FP_END();
++		}
+ 	}
+ 
+ 	loaded_ip->max_num_otg = pool->base.res_cap->num_timing_generator;
+ 	loaded_ip->max_num_dpp = pool->base.pipe_count;
++	DC_FP_START();
+ 	dcn20_patch_bounding_box(dc, loaded_bb);
+-
++	DC_FP_END();
+ 	return true;
+ }
+ 
+@@ -3730,8 +3736,6 @@ static bool dcn20_resource_construct(
+ 	enum dml_project dml_project_version =
+ 			get_dml_project_version(ctx->asic_id.hw_internal_rev);
+ 
+-	DC_FP_START();
+-
+ 	ctx->dc_bios->regs = &bios_regs;
+ 	pool->base.funcs = &dcn20_res_pool_funcs;
+ 
+@@ -4080,12 +4084,10 @@ static bool dcn20_resource_construct(
+ 		pool->base.oem_device = NULL;
+ 	}
+ 
+-	DC_FP_END();
+ 	return true;
+ 
+ create_fail:
+ 
+-	DC_FP_END();
+ 	dcn20_resource_destruct(pool);
+ 
+ 	return false;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index fafed1e4a998d..0950784bafa49 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -1002,7 +1002,8 @@ void dcn30_set_disp_pattern_generator(const struct dc *dc,
+ 		/* turning off DPG */
+ 		pipe_ctx->plane_res.hubp->funcs->set_blank(pipe_ctx->plane_res.hubp, false);
+ 		for (mpcc_pipe = pipe_ctx->bottom_pipe; mpcc_pipe; mpcc_pipe = mpcc_pipe->bottom_pipe)
+-			mpcc_pipe->plane_res.hubp->funcs->set_blank(mpcc_pipe->plane_res.hubp, false);
++			if (mpcc_pipe->plane_res.hubp)
++				mpcc_pipe->plane_res.hubp->funcs->set_blank(mpcc_pipe->plane_res.hubp, false);
+ 
+ 		stream_res->opp->funcs->opp_set_disp_pattern_generator(stream_res->opp, test_pattern, color_space,
+ 				color_depth, solid_color, width, height, offset);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index bcaaa086fc2fb..69fc009570a0b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -1382,52 +1382,38 @@ static int vangogh_set_performance_level(struct smu_context *smu,
+ 	uint32_t soc_mask, mclk_mask, fclk_mask;
+ 	uint32_t vclk_mask = 0, dclk_mask = 0;
+ 
++	smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
++	smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
++
+ 	switch (level) {
+ 	case AMD_DPM_FORCED_LEVEL_HIGH:
+-		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
++		smu->gfx_actual_hard_min_freq = smu->gfx_default_soft_max_freq;
+ 		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+ 
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+ 
+ 		ret = vangogh_force_dpm_limit_value(smu, true);
++		if (ret)
++			return ret;
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_LOW:
+ 		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+-		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+-
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
++		smu->gfx_actual_soft_max_freq = smu->gfx_default_hard_min_freq;
+ 
+ 		ret = vangogh_force_dpm_limit_value(smu, false);
++		if (ret)
++			return ret;
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_AUTO:
+ 		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+ 		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+ 
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+-
+ 		ret = vangogh_unforce_dpm_levels(smu);
+-		break;
+-	case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+-		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+-		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+-
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+-
+-		ret = smu_cmn_send_smc_msg_with_param(smu,
+-					SMU_MSG_SetHardMinGfxClk,
+-					VANGOGH_UMD_PSTATE_STANDARD_GFXCLK, NULL);
+-		if (ret)
+-			return ret;
+-
+-		ret = smu_cmn_send_smc_msg_with_param(smu,
+-					SMU_MSG_SetSoftMaxGfxClk,
+-					VANGOGH_UMD_PSTATE_STANDARD_GFXCLK, NULL);
+ 		if (ret)
+ 			return ret;
++		break;
++	case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
++		smu->gfx_actual_hard_min_freq = VANGOGH_UMD_PSTATE_STANDARD_GFXCLK;
++		smu->gfx_actual_soft_max_freq = VANGOGH_UMD_PSTATE_STANDARD_GFXCLK;
+ 
+ 		ret = vangogh_get_profiling_clk_mask(smu, level,
+ 							&vclk_mask,
+@@ -1442,32 +1428,15 @@ static int vangogh_set_performance_level(struct smu_context *smu,
+ 		vangogh_force_clk_levels(smu, SMU_SOCCLK, 1 << soc_mask);
+ 		vangogh_force_clk_levels(smu, SMU_VCLK, 1 << vclk_mask);
+ 		vangogh_force_clk_levels(smu, SMU_DCLK, 1 << dclk_mask);
+-
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
+ 		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+-		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+-
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+-
+-		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinVcn,
+-								VANGOGH_UMD_PSTATE_PEAK_DCLK, NULL);
+-		if (ret)
+-			return ret;
+-
+-		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxVcn,
+-								VANGOGH_UMD_PSTATE_PEAK_DCLK, NULL);
+-		if (ret)
+-			return ret;
++		smu->gfx_actual_soft_max_freq = smu->gfx_default_hard_min_freq;
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
+ 		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+ 		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+ 
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+-
+ 		ret = vangogh_get_profiling_clk_mask(smu, level,
+ 							NULL,
+ 							NULL,
+@@ -1480,29 +1449,29 @@ static int vangogh_set_performance_level(struct smu_context *smu,
+ 		vangogh_force_clk_levels(smu, SMU_FCLK, 1 << fclk_mask);
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+-		smu->gfx_actual_hard_min_freq = smu->gfx_default_hard_min_freq;
+-		smu->gfx_actual_soft_max_freq = smu->gfx_default_soft_max_freq;
+-
+-		smu->cpu_actual_soft_min_freq = smu->cpu_default_soft_min_freq;
+-		smu->cpu_actual_soft_max_freq = smu->cpu_default_soft_max_freq;
+-
+-		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinGfxClk,
+-				VANGOGH_UMD_PSTATE_PEAK_GFXCLK, NULL);
+-		if (ret)
+-			return ret;
++		smu->gfx_actual_hard_min_freq = VANGOGH_UMD_PSTATE_PEAK_GFXCLK;
++		smu->gfx_actual_soft_max_freq = VANGOGH_UMD_PSTATE_PEAK_GFXCLK;
+ 
+-		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxGfxClk,
+-				VANGOGH_UMD_PSTATE_PEAK_GFXCLK, NULL);
++		ret = vangogh_set_peak_clock_by_device(smu);
+ 		if (ret)
+ 			return ret;
+-
+-		ret = vangogh_set_peak_clock_by_device(smu);
+ 		break;
+ 	case AMD_DPM_FORCED_LEVEL_MANUAL:
+ 	case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
+ 	default:
+-		break;
++		return 0;
+ 	}
++
++	ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinGfxClk,
++					      smu->gfx_actual_hard_min_freq, NULL);
++	if (ret)
++		return ret;
++
++	ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxGfxClk,
++					      smu->gfx_actual_soft_max_freq, NULL);
++	if (ret)
++		return ret;
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 7519b7a0f29dd..439c7bed33ff2 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -702,7 +702,7 @@ static int edid_read(struct anx7625_data *ctx,
+ 		ret = sp_tx_aux_rd(ctx, 0xf1);
+ 
+ 		if (ret) {
+-			sp_tx_rst_aux(ctx);
++			ret = sp_tx_rst_aux(ctx);
+ 			DRM_DEV_DEBUG_DRIVER(dev, "edid read fail, reset!\n");
+ 		} else {
+ 			ret = anx7625_reg_block_read(ctx, ctx->i2c.rx_p0_client,
+@@ -717,7 +717,7 @@ static int edid_read(struct anx7625_data *ctx,
+ 	if (cnt > EDID_TRY_CNT)
+ 		return -EIO;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int segments_edid_read(struct anx7625_data *ctx,
+@@ -767,7 +767,7 @@ static int segments_edid_read(struct anx7625_data *ctx,
+ 	if (cnt > EDID_TRY_CNT)
+ 		return -EIO;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int sp_tx_edid_read(struct anx7625_data *ctx,
+@@ -869,7 +869,11 @@ static int sp_tx_edid_read(struct anx7625_data *ctx,
+ 	}
+ 
+ 	/* Reset aux channel */
+-	sp_tx_rst_aux(ctx);
++	ret = sp_tx_rst_aux(ctx);
++	if (ret < 0) {
++		DRM_DEV_ERROR(dev, "Failed to reset aux channel!\n");
++		return ret;
++	}
+ 
+ 	return (blocks_num + 1);
+ }
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 2f2a09adb4bc8..06b59b422c696 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -889,7 +889,7 @@ unlock:
+ static int it66121_probe(struct i2c_client *client,
+ 			 const struct i2c_device_id *id)
+ {
+-	u32 vendor_ids[2], device_ids[2], revision_id;
++	u32 revision_id, vendor_ids[2] = { 0 }, device_ids[2] = { 0 };
+ 	struct device_node *ep;
+ 	int ret;
+ 	struct it66121_ctx *ctx;
+@@ -918,11 +918,23 @@ static int it66121_probe(struct i2c_client *client,
+ 		return -EINVAL;
+ 
+ 	ep = of_graph_get_remote_node(dev->of_node, 1, -1);
+-	if (!ep)
+-		return -EPROBE_DEFER;
++	if (!ep) {
++		dev_err(ctx->dev, "The endpoint is unconnected\n");
++		return -EINVAL;
++	}
++
++	if (!of_device_is_available(ep)) {
++		of_node_put(ep);
++		dev_err(ctx->dev, "The remote device is disabled\n");
++		return -ENODEV;
++	}
+ 
+ 	ctx->next_bridge = of_drm_find_bridge(ep);
+ 	of_node_put(ep);
++	if (!ctx->next_bridge) {
++		dev_dbg(ctx->dev, "Next bridge not found, deferring probe\n");
++		return -EPROBE_DEFER;
++	}
+ 
+ 	i2c_set_clientdata(client, ctx);
+ 	mutex_init(&ctx->lock);
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+index 3cac16db970f0..010657ea7af78 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+@@ -167,9 +167,10 @@ static void lt9611uxc_hpd_work(struct work_struct *work)
+ 	struct lt9611uxc *lt9611uxc = container_of(work, struct lt9611uxc, work);
+ 	bool connected;
+ 
+-	if (lt9611uxc->connector.dev)
+-		drm_kms_helper_hotplug_event(lt9611uxc->connector.dev);
+-	else {
++	if (lt9611uxc->connector.dev) {
++		if (lt9611uxc->connector.dev->mode_config.funcs)
++			drm_kms_helper_hotplug_event(lt9611uxc->connector.dev);
++	} else {
+ 
+ 		mutex_lock(&lt9611uxc->ocm_lock);
+ 		connected = lt9611uxc->hdmi_connected;
+@@ -339,6 +340,8 @@ static int lt9611uxc_connector_init(struct drm_bridge *bridge, struct lt9611uxc
+ 		return -ENODEV;
+ 	}
+ 
++	lt9611uxc->connector.polled = DRM_CONNECTOR_POLL_HPD;
++
+ 	drm_connector_helper_add(&lt9611uxc->connector,
+ 				 &lt9611uxc_bridge_connector_helper_funcs);
+ 	ret = drm_connector_init(bridge->dev, &lt9611uxc->connector,
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index f6bdec7fa9253..a950d5db211c5 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -109,6 +109,12 @@ static const struct drm_dmi_panel_orientation_data lcd1200x1920_rightside_up = {
+ 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+ 
++static const struct drm_dmi_panel_orientation_data lcd1280x1920_rightside_up = {
++	.width = 1280,
++	.height = 1920,
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct dmi_system_id orientation_data[] = {
+ 	{	/* Acer One 10 (S1003) */
+ 		.matches = {
+@@ -134,6 +140,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* AYA NEO 2021 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYADEVICE"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* GPD MicroPC (generic strings, also match on bios date) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+@@ -185,6 +197,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+ 		},
+ 		.driver_data = (void *)&gpd_win2,
++	}, {	/* GPD Win 3 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1618-03")
++		},
++		.driver_data = (void *)&lcd720x1280_rightside_up,
+ 	}, {	/* I.T.Works TW891 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
+@@ -193,6 +211,13 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "TW891"),
+ 		},
+ 		.driver_data = (void *)&itworks_tw891,
++	}, {	/* KD Kurio Smart C15200 2-in-1 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "KD Interactive"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Kurio Smart"),
++		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "KDM960BCP"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/*
+ 		 * Lenovo Ideapad Miix 310 laptop, only some production batches
+ 		 * have a portrait screen, the resolution checks makes the quirk
+@@ -211,10 +236,15 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"),
+ 		},
+ 		.driver_data = (void *)&lcd800x1280_rightside_up,
+-	}, {	/* Lenovo Ideapad D330 */
++	}, {	/* Lenovo Ideapad D330-10IGM (HD) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
++	}, {	/* Lenovo Ideapad D330-10IGM (FHD) */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81H3"),
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
+ 		},
+ 		.driver_data = (void *)&lcd1200x1920_rightside_up,
+@@ -225,6 +255,19 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Default string"),
+ 		},
+ 		.driver_data = (void *)&onegx1_pro,
++	}, {	/* Samsung GalaxyBook 10.6 */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Galaxy Book 10.6"),
++		},
++		.driver_data = (void *)&lcd1280x1920_rightside_up,
++	}, {	/* Valve Steam Deck */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Valve"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Jupiter"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "1"),
++		},
++		.driver_data = (void *)&lcd800x1280_rightside_up,
+ 	}, {	/* VIOS LTH17 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VIOS"),
+diff --git a/drivers/gpu/drm/drm_plane_helper.c b/drivers/gpu/drm/drm_plane_helper.c
+index 3aae7ea522f23..c3f2292dc93d5 100644
+--- a/drivers/gpu/drm/drm_plane_helper.c
++++ b/drivers/gpu/drm/drm_plane_helper.c
+@@ -123,7 +123,6 @@ static int drm_plane_helper_check_update(struct drm_plane *plane,
+ 		.crtc_w = drm_rect_width(dst),
+ 		.crtc_h = drm_rect_height(dst),
+ 		.rotation = rotation,
+-		.visible = *visible,
+ 	};
+ 	struct drm_crtc_state crtc_state = {
+ 		.crtc = crtc,
+diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c
+index c60a81a81c09c..c6413c5409420 100644
+--- a/drivers/gpu/drm/i915/display/intel_fb.c
++++ b/drivers/gpu/drm/i915/display/intel_fb.c
+@@ -172,8 +172,9 @@ static void intel_fb_plane_dims(const struct intel_framebuffer *fb, int color_pl
+ 
+ 	intel_fb_plane_get_subsampling(&main_hsub, &main_vsub, &fb->base, main_plane);
+ 	intel_fb_plane_get_subsampling(&hsub, &vsub, &fb->base, color_plane);
+-	*w = fb->base.width / main_hsub / hsub;
+-	*h = fb->base.height / main_vsub / vsub;
++
++	*w = DIV_ROUND_UP(fb->base.width, main_hsub * hsub);
++	*h = DIV_ROUND_UP(fb->base.height, main_vsub * vsub);
+ }
+ 
+ static u32 intel_adjust_tile_offset(int *x, int *y,
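
The DIV_ROUND_UP change above matters for odd plane sizes: with 2x chroma
subsampling, flooring the width drops a pixel column that the plane still has
to cover. Quick check with a hypothetical 1919-pixel width:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	printf("floor: %d\n", 1919 / 2);		/* 959 */
	printf("ceil:  %d\n", DIV_ROUND_UP(1919, 2));	/* 960 */
	return 0;
}
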
+diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
+index 76819a8ac37fe..3ccadfa14d6a8 100644
+--- a/drivers/gpu/drm/imx/imx-drm-core.c
++++ b/drivers/gpu/drm/imx/imx-drm-core.c
+@@ -81,7 +81,6 @@ static void imx_drm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	struct drm_plane_state *old_plane_state, *new_plane_state;
+ 	bool plane_disabling = false;
+ 	int i;
+-	bool fence_cookie = dma_fence_begin_signalling();
+ 
+ 	drm_atomic_helper_commit_modeset_disables(dev, state);
+ 
+@@ -112,7 +111,6 @@ static void imx_drm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	}
+ 
+ 	drm_atomic_helper_commit_hw_done(state);
+-	dma_fence_end_signalling(fence_cookie);
+ }
+ 
+ static const struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index c95985792076f..f51d392d05d42 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -516,11 +516,11 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
+ 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ 	struct platform_device *pdev = to_platform_device(gmu->dev);
+ 	void __iomem *pdcptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc");
+-	void __iomem *seqptr;
++	void __iomem *seqptr = NULL;
+ 	uint32_t pdc_address_offset;
+ 	bool pdc_in_aop = false;
+ 
+-	if (!pdcptr)
++	if (IS_ERR(pdcptr))
+ 		goto err;
+ 
+ 	if (adreno_is_a650(adreno_gpu) || adreno_is_a660(adreno_gpu))
+@@ -532,7 +532,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
+ 
+ 	if (!pdc_in_aop) {
+ 		seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
+-		if (!seqptr)
++		if (IS_ERR(seqptr))
+ 			goto err;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+index 69eed79324865..f9460672176aa 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+@@ -138,11 +138,13 @@ static int _sspp_subblk_offset(struct dpu_hw_pipe *ctx,
+ 		u32 *idx)
+ {
+ 	int rc = 0;
+-	const struct dpu_sspp_sub_blks *sblk = ctx->cap->sblk;
++	const struct dpu_sspp_sub_blks *sblk;
+ 
+-	if (!ctx)
++	if (!ctx || !ctx->cap || !ctx->cap->sblk)
+ 		return -EINVAL;
+ 
++	sblk = ctx->cap->sblk;
++
+ 	switch (s_id) {
+ 	case DPU_SSPP_SRC:
+ 		*idx = sblk->src_blk.base;
+@@ -419,7 +421,7 @@ static void _dpu_hw_sspp_setup_scaler3(struct dpu_hw_pipe *ctx,
+ 
+ 	(void)pe;
+ 	if (_sspp_subblk_offset(ctx, DPU_SSPP_SCALER_QSEED3, &idx) || !sspp
+-		|| !scaler3_cfg || !ctx || !ctx->cap || !ctx->cap->sblk)
++		|| !scaler3_cfg)
+ 		return;
+ 
+ 	dpu_hw_setup_scaler3(&ctx->hw, scaler3_cfg, idx,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 4fd913522931b..5489a3ae2bedb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -896,6 +896,10 @@ static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
+ 		return 0;
+ 
+ 	mmu = msm_iommu_new(dpu_kms->dev->dev, domain);
++	if (IS_ERR(mmu)) {
++		iommu_domain_free(domain);
++		return PTR_ERR(mmu);
++	}
+ 	aspace = msm_gem_address_space_create(mmu, "dpu1",
+ 		0x1000, 0x100000000 - 0x1000);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 1e8a971a86f29..9533225b5c88c 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -1184,6 +1184,7 @@ static int msm_gem_new_impl(struct drm_device *dev,
+ 	msm_obj->madv = MSM_MADV_WILLNEED;
+ 
+ 	INIT_LIST_HEAD(&msm_obj->submit_entry);
++	INIT_LIST_HEAD(&msm_obj->node);
+ 	INIT_LIST_HEAD(&msm_obj->vmas);
+ 
+ 	*obj = &msm_obj->base;
+@@ -1219,7 +1220,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ 
+ 	ret = msm_gem_new_impl(dev, size, flags, &obj);
+ 	if (ret)
+-		goto fail;
++		return ERR_PTR(ret);
+ 
+ 	msm_obj = to_msm_bo(obj);
+ 
+@@ -1319,7 +1320,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ 
+ 	ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
+ 	if (ret)
+-		goto fail;
++		return ERR_PTR(ret);
+ 
+ 	drm_gem_private_object_init(dev, obj, size);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index 0ebf7bc6ad097..8236989828ba3 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -404,7 +404,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
+ 		state->bos = kcalloc(nr,
+ 			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+ 
+-		for (i = 0; i < submit->nr_bos; i++) {
++		for (i = 0; state->bos && i < submit->nr_bos; i++) {
+ 			if (should_dump(submit, i)) {
+ 				msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
+ 					submit->bos[i].iova, submit->bos[i].flags);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 8c2ecc2827232..c89d5964148fd 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -56,7 +56,7 @@ static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf)
+ 
+ 	nouveau_bo_del_io_reserve_lru(bo);
+ 	prot = vm_get_page_prot(vma->vm_flags);
+-	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1);
++	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
+ 	nouveau_bo_add_io_reserve_lru(bo);
+ 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+ 		return ret;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index b0c3422cb01fa..9985bfde015a6 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -162,10 +162,14 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
+ 	 */
+ 
+ 	mm = get_task_mm(current);
++	if (!mm) {
++		return -EINVAL;
++	}
+ 	mmap_read_lock(mm);
+ 
+ 	if (!cli->svm.svmm) {
+ 		mmap_read_unlock(mm);
++		mmput(mm);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.c b/drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.c
+index 704df0f2d1f16..09a112af2f893 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.c
+@@ -78,6 +78,6 @@ int
+ gt215_ce_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst,
+ 	     struct nvkm_engine **pengine)
+ {
+-	return nvkm_falcon_new_(&gt215_ce, device, type, inst,
++	return nvkm_falcon_new_(&gt215_ce, device, type, -1,
+ 				(device->chipset != 0xaf), 0x104000, pengine);
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index ca75c5f6ecaf8..b51d690f375ff 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -3147,8 +3147,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ 	WARN_ON(device->chip->ptr.inst & ~((1 << ARRAY_SIZE(device->ptr)) - 1));             \
+ 	for (j = 0; device->chip->ptr.inst && j < ARRAY_SIZE(device->ptr); j++) {            \
+ 		if ((device->chip->ptr.inst & BIT(j)) && (subdev_mask & BIT_ULL(type))) {    \
+-			int inst = (device->chip->ptr.inst == 1) ? -1 : (j);                 \
+-			ret = device->chip->ptr.ctor(device, (type), inst, &device->ptr[j]); \
++			ret = device->chip->ptr.ctor(device, (type), (j), &device->ptr[j]);  \
+ 			subdev = nvkm_device_subdev(device, (type), (j));                    \
+ 			if (ret) {                                                           \
+ 				nvkm_subdev_del(&subdev);                                    \
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index 458f92a708879..a36a4f2c76b09 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -61,7 +61,7 @@ static vm_fault_t radeon_gem_fault(struct vm_fault *vmf)
+ 		goto unlock_resv;
+ 
+ 	ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+-				       TTM_BO_VM_NUM_PREFAULT, 1);
++				       TTM_BO_VM_NUM_PREFAULT);
+ 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+ 		goto unlock_mclk;
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun8i_csc.h b/drivers/gpu/drm/sun4i/sun8i_csc.h
+index a55a38ad849c1..022cafa6c06cb 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_csc.h
++++ b/drivers/gpu/drm/sun4i/sun8i_csc.h
+@@ -16,8 +16,8 @@ struct sun8i_mixer;
+ #define CCSC10_OFFSET 0xA0000
+ #define CCSC11_OFFSET 0xF0000
+ 
+-#define SUN8I_CSC_CTRL(base)		(base + 0x0)
+-#define SUN8I_CSC_COEFF(base, i)	(base + 0x10 + 4 * i)
++#define SUN8I_CSC_CTRL(base)		((base) + 0x0)
++#define SUN8I_CSC_COEFF(base, i)	((base) + 0x10 + 4 * (i))
+ 
+ #define SUN8I_CSC_CTRL_EN		BIT(0)
+ 
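The sun8i_csc.h hunk is the classic macro-hygiene fix: without parentheses around each parameter, operator precedence silently changes the result whenever the argument is an expression. A small standalone userspace demonstration (not driver code):

	#include <stdio.h>

	#define COEFF_BAD(base, i)	(base + 0x10 + 4 * i)
	#define COEFF_GOOD(base, i)	((base) + 0x10 + 4 * (i))

	int main(void)
	{
		int j = 2;

		/* 4 * j + 1 == 9, but 4 * (j + 1) == 12 */
		printf("bad:  %d\n", COEFF_BAD(0, j + 1));	/* prints 25 */
		printf("good: %d\n", COEFF_GOOD(0, j + 1));	/* prints 28 */
		return 0;
	}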
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index f56be5bc0861e..4a655ab23c89d 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -171,89 +171,6 @@ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
+ }
+ EXPORT_SYMBOL(ttm_bo_vm_reserve);
+ 
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-/**
+- * ttm_bo_vm_insert_huge - Insert a pfn for PUD or PMD faults
+- * @vmf: Fault data
+- * @bo: The buffer object
+- * @page_offset: Page offset from bo start
+- * @fault_page_size: The size of the fault in pages.
+- * @pgprot: The page protections.
+- * Does additional checking whether it's possible to insert a PUD or PMD
+- * pfn and performs the insertion.
+- *
+- * Return: VM_FAULT_NOPAGE on successful insertion, VM_FAULT_FALLBACK if
+- * a huge fault was not possible, or on insertion error.
+- */
+-static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
+-					struct ttm_buffer_object *bo,
+-					pgoff_t page_offset,
+-					pgoff_t fault_page_size,
+-					pgprot_t pgprot)
+-{
+-	pgoff_t i;
+-	vm_fault_t ret;
+-	unsigned long pfn;
+-	pfn_t pfnt;
+-	struct ttm_tt *ttm = bo->ttm;
+-	bool write = vmf->flags & FAULT_FLAG_WRITE;
+-
+-	/* Fault should not cross bo boundary. */
+-	page_offset &= ~(fault_page_size - 1);
+-	if (page_offset + fault_page_size > bo->resource->num_pages)
+-		goto out_fallback;
+-
+-	if (bo->resource->bus.is_iomem)
+-		pfn = ttm_bo_io_mem_pfn(bo, page_offset);
+-	else
+-		pfn = page_to_pfn(ttm->pages[page_offset]);
+-
+-	/* pfn must be fault_page_size aligned. */
+-	if ((pfn & (fault_page_size - 1)) != 0)
+-		goto out_fallback;
+-
+-	/* Check that memory is contiguous. */
+-	if (!bo->resource->bus.is_iomem) {
+-		for (i = 1; i < fault_page_size; ++i) {
+-			if (page_to_pfn(ttm->pages[page_offset + i]) != pfn + i)
+-				goto out_fallback;
+-		}
+-	} else if (bo->bdev->funcs->io_mem_pfn) {
+-		for (i = 1; i < fault_page_size; ++i) {
+-			if (ttm_bo_io_mem_pfn(bo, page_offset + i) != pfn + i)
+-				goto out_fallback;
+-		}
+-	}
+-
+-	pfnt = __pfn_to_pfn_t(pfn, PFN_DEV);
+-	if (fault_page_size == (HPAGE_PMD_SIZE >> PAGE_SHIFT))
+-		ret = vmf_insert_pfn_pmd_prot(vmf, pfnt, pgprot, write);
+-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+-	else if (fault_page_size == (HPAGE_PUD_SIZE >> PAGE_SHIFT))
+-		ret = vmf_insert_pfn_pud_prot(vmf, pfnt, pgprot, write);
+-#endif
+-	else
+-		WARN_ON_ONCE(ret = VM_FAULT_FALLBACK);
+-
+-	if (ret != VM_FAULT_NOPAGE)
+-		goto out_fallback;
+-
+-	return VM_FAULT_NOPAGE;
+-out_fallback:
+-	count_vm_event(THP_FAULT_FALLBACK);
+-	return VM_FAULT_FALLBACK;
+-}
+-#else
+-static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
+-					struct ttm_buffer_object *bo,
+-					pgoff_t page_offset,
+-					pgoff_t fault_page_size,
+-					pgprot_t pgprot)
+-{
+-	return VM_FAULT_FALLBACK;
+-}
+-#endif
+-
+ /**
+  * ttm_bo_vm_fault_reserved - TTM fault helper
+  * @vmf: The struct vm_fault given as argument to the fault callback
+@@ -261,7 +178,6 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
+  * @num_prefault: Maximum number of prefault pages. The caller may want to
+  * specify this based on madvice settings and the size of the GPU object
+  * backed by the memory.
+- * @fault_page_size: The size of the fault in pages.
+  *
+  * This function inserts one or more page table entries pointing to the
+  * memory backing the buffer object, and then returns a return code
+@@ -275,8 +191,7 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
+  */
+ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+ 				    pgprot_t prot,
+-				    pgoff_t num_prefault,
+-				    pgoff_t fault_page_size)
++				    pgoff_t num_prefault)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct ttm_buffer_object *bo = vma->vm_private_data;
+@@ -327,11 +242,6 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+ 		prot = pgprot_decrypted(prot);
+ 	}
+ 
+-	/* We don't prefault on huge faults. Yet. */
+-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && fault_page_size != 1)
+-		return ttm_bo_vm_insert_huge(vmf, bo, page_offset,
+-					     fault_page_size, prot);
+-
+ 	/*
+ 	 * Speculatively prefault a number of pages. Only error on
+ 	 * first page.
+@@ -429,7 +339,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
+ 
+ 	prot = vma->vm_page_prot;
+ 	if (drm_dev_enter(ddev, &idx)) {
+-		ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1);
++		ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
+ 		drm_dev_exit(idx);
+ 	} else {
+ 		ret = ttm_bo_vm_dummy_page(vmf, prot);
+@@ -519,11 +429,6 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	switch (bo->resource->mem_type) {
+ 	case TTM_PL_SYSTEM:
+-		if (unlikely(bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) {
+-			ret = ttm_tt_swapin(bo->ttm);
+-			if (unlikely(ret != 0))
+-				return ret;
+-		}
+ 		fallthrough;
+ 	case TTM_PL_TT:
+ 		ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 4eb3542269725..f30d93e0b9820 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -195,8 +195,8 @@ v3d_clean_caches(struct v3d_dev *v3d)
+ 
+ 	V3D_CORE_WRITE(core, V3D_CTL_L2TCACTL, V3D_L2TCACTL_TMUWCF);
+ 	if (wait_for(!(V3D_CORE_READ(core, V3D_CTL_L2TCACTL) &
+-		       V3D_L2TCACTL_L2TFLS), 100)) {
+-		DRM_ERROR("Timeout waiting for L1T write combiner flush\n");
++		       V3D_L2TCACTL_TMUWCF), 100)) {
++		DRM_ERROR("Timeout waiting for TMU write combiner flush\n");
+ 	}
+ 
+ 	mutex_lock(&v3d->cache_clean_lock);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index cf84d382dd41d..5286cf1102088 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -91,9 +91,7 @@ virtio_gpu_get_vbuf(struct virtio_gpu_device *vgdev,
+ {
+ 	struct virtio_gpu_vbuffer *vbuf;
+ 
+-	vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_KERNEL);
+-	if (!vbuf)
+-		return ERR_PTR(-ENOMEM);
++	vbuf = kmem_cache_zalloc(vgdev->vbufs, GFP_KERNEL | __GFP_NOFAIL);
+ 
+ 	BUG_ON(size > MAX_INLINE_CMD_SIZE ||
+ 	       size < sizeof(struct virtio_gpu_ctrl_hdr));
+@@ -147,10 +145,6 @@ static void *virtio_gpu_alloc_cmd_resp(struct virtio_gpu_device *vgdev,
+ 
+ 	vbuf = virtio_gpu_get_vbuf(vgdev, cmd_size,
+ 				   resp_size, resp_buf, cb);
+-	if (IS_ERR(vbuf)) {
+-		*vbuffer_p = NULL;
+-		return ERR_CAST(vbuf);
+-	}
+ 	*vbuffer_p = vbuf;
+ 	return (struct virtio_gpu_command *)vbuf->buf;
+ }
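The virtio-gpu change trades error handling for __GFP_NOFAIL: with that flag the slab allocator retries until the allocation succeeds rather than returning NULL, so the ERR_PTR plumbing in virtio_gpu_get_vbuf() and the IS_ERR() check in its caller become dead code and are removed.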
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 5652d982b1ce6..c87d74f0afc52 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1526,10 +1526,6 @@ void vmw_bo_dirty_unmap(struct vmw_buffer_object *vbo,
+ 			pgoff_t start, pgoff_t end);
+ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf);
+ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf);
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf,
+-				enum page_entry_size pe_size);
+-#endif
+ 
+ /* Transparent hugepage support - vmwgfx_thp.c */
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
+index e5a9a5cbd01a7..922317d1acc8a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
+@@ -477,7 +477,7 @@ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf)
+ 	else
+ 		prot = vm_get_page_prot(vma->vm_flags);
+ 
+-	ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault, 1);
++	ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault);
+ 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+ 		return ret;
+ 
+@@ -486,73 +486,3 @@ out_unlock:
+ 
+ 	return ret;
+ }
+-
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf,
+-				enum page_entry_size pe_size)
+-{
+-	struct vm_area_struct *vma = vmf->vma;
+-	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
+-	    vma->vm_private_data;
+-	struct vmw_buffer_object *vbo =
+-		container_of(bo, struct vmw_buffer_object, base);
+-	pgprot_t prot;
+-	vm_fault_t ret;
+-	pgoff_t fault_page_size;
+-	bool write = vmf->flags & FAULT_FLAG_WRITE;
+-
+-	switch (pe_size) {
+-	case PE_SIZE_PMD:
+-		fault_page_size = HPAGE_PMD_SIZE >> PAGE_SHIFT;
+-		break;
+-#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+-	case PE_SIZE_PUD:
+-		fault_page_size = HPAGE_PUD_SIZE >> PAGE_SHIFT;
+-		break;
+-#endif
+-	default:
+-		WARN_ON_ONCE(1);
+-		return VM_FAULT_FALLBACK;
+-	}
+-
+-	/* Always do write dirty-tracking and COW on PTE level. */
+-	if (write && (READ_ONCE(vbo->dirty) || is_cow_mapping(vma->vm_flags)))
+-		return VM_FAULT_FALLBACK;
+-
+-	ret = ttm_bo_vm_reserve(bo, vmf);
+-	if (ret)
+-		return ret;
+-
+-	if (vbo->dirty) {
+-		pgoff_t allowed_prefault;
+-		unsigned long page_offset;
+-
+-		page_offset = vmf->pgoff -
+-			drm_vma_node_start(&bo->base.vma_node);
+-		if (page_offset >= bo->resource->num_pages ||
+-		    vmw_resources_clean(vbo, page_offset,
+-					page_offset + PAGE_SIZE,
+-					&allowed_prefault)) {
+-			ret = VM_FAULT_SIGBUS;
+-			goto out_unlock;
+-		}
+-
+-		/*
+-		 * Write protect, so we get a new fault on write, and can
+-		 * split.
+-		 */
+-		prot = vm_get_page_prot(vma->vm_flags & ~VM_SHARED);
+-	} else {
+-		prot = vm_get_page_prot(vma->vm_flags);
+-	}
+-
+-	ret = ttm_bo_vm_fault_reserved(vmf, prot, 1, fault_page_size);
+-	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+-		return ret;
+-
+-out_unlock:
+-	dma_resv_unlock(bo->base.resv);
+-
+-	return ret;
+-}
+-#endif
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
+index e6b1f98ec99f0..0a4c340252ec4 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
+@@ -61,9 +61,6 @@ int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
+ 		.fault = vmw_bo_vm_fault,
+ 		.open = ttm_bo_vm_open,
+ 		.close = ttm_bo_vm_close,
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-		.huge_fault = vmw_bo_vm_huge_fault,
+-#endif
+ 	};
+ 	struct drm_file *file_priv = filp->private_data;
+ 	struct vmw_private *dev_priv = vmw_priv(file_priv->minor->dev);
+diff --git a/drivers/hid/hid-u2fzero.c b/drivers/hid/hid-u2fzero.c
+index d70cd3d7f583b..67ae2b18e33ac 100644
+--- a/drivers/hid/hid-u2fzero.c
++++ b/drivers/hid/hid-u2fzero.c
+@@ -132,7 +132,7 @@ static int u2fzero_recv(struct u2fzero_device *dev,
+ 
+ 	ret = (wait_for_completion_timeout(
+ 		&ctx.done, msecs_to_jiffies(USB_CTRL_SET_TIMEOUT)));
+-	if (ret < 0) {
++	if (ret == 0) {
+ 		usb_kill_urb(dev->urb);
+ 		hid_err(hdev, "urb submission timed out");
+ 	} else {
+@@ -191,6 +191,8 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	struct u2f_hid_msg resp;
+ 	int ret;
+ 	size_t actual_length;
++	/* valid packets must have a correct header */
++	int min_length = offsetof(struct u2f_hid_msg, init.data);
+ 
+ 	if (!dev->present) {
+ 		hid_dbg(dev->hdev, "device not present");
+@@ -200,12 +202,12 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
+ 	ret = u2fzero_recv(dev, &req, &resp);
+ 
+ 	/* ignore errors or packets without data */
+-	if (ret < offsetof(struct u2f_hid_msg, init.data))
++	if (ret < min_length)
+ 		return 0;
+ 
+ 	/* only take the minimum amount of data it is safe to take */
+-	actual_length = min3((size_t)ret - offsetof(struct u2f_hid_msg,
+-		init.data), U2F_HID_MSG_LEN(resp), max);
++	actual_length = min3((size_t)ret - min_length,
++		U2F_HID_MSG_LEN(resp), max);
+ 
+ 	memcpy(data, resp.init.data, actual_length);
+ 
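The first u2fzero hunk hinges on the return convention of wait_for_completion_timeout(): it returns an unsigned long that is 0 on timeout and the number of jiffies remaining otherwise, so it can never be negative and the old ret < 0 test could never fire. Sketch of the intended usage:

	unsigned long left;

	left = wait_for_completion_timeout(&ctx.done,
					   msecs_to_jiffies(5000));
	if (!left) {
		/* timed out: kill the in-flight URB */
	} else {
		/* completed, with 'left' jiffies to spare */
	}

The second hunk just hoists the header-size offsetof() into min_length so the two length checks read consistently.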
+diff --git a/drivers/hid/surface-hid/surface_hid.c b/drivers/hid/surface-hid/surface_hid.c
+index a3a70e4f3f6c9..d4aa8c81903ae 100644
+--- a/drivers/hid/surface-hid/surface_hid.c
++++ b/drivers/hid/surface-hid/surface_hid.c
+@@ -209,7 +209,7 @@ static int surface_hid_probe(struct ssam_device *sdev)
+ 
+ 	shid->notif.base.priority = 1;
+ 	shid->notif.base.fn = ssam_hid_event_fn;
+-	shid->notif.event.reg = SSAM_EVENT_REGISTRY_REG;
++	shid->notif.event.reg = SSAM_EVENT_REGISTRY_REG(sdev->uid.target);
+ 	shid->notif.event.id.target_category = sdev->uid.category;
+ 	shid->notif.event.id.instance = sdev->uid.instance;
+ 	shid->notif.event.mask = SSAM_EVENT_MASK_STRICT;
+@@ -230,7 +230,7 @@ static void surface_hid_remove(struct ssam_device *sdev)
+ }
+ 
+ static const struct ssam_device_id surface_hid_match[] = {
+-	{ SSAM_SDEV(HID, 0x02, SSAM_ANY_IID, 0x00) },
++	{ SSAM_SDEV(HID, SSAM_ANY_TID, SSAM_ANY_IID, 0x00) },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(ssam, surface_hid_match);
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 42f3d9d123a12..d030577ad6a2c 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -13,6 +13,7 @@
+ #define _HYPERV_VMBUS_H
+ 
+ #include <linux/list.h>
++#include <linux/bitops.h>
+ #include <asm/sync_bitops.h>
+ #include <asm/hyperv-tlfs.h>
+ #include <linux/atomic.h>
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index 8d3b1dae31df1..3501a3ead4ba6 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -796,8 +796,10 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 	dev_set_drvdata(hdev, drvdata);
+ 	dev_set_name(hdev, HWMON_ID_FORMAT, id);
+ 	err = device_register(hdev);
+-	if (err)
+-		goto free_hwmon;
++	if (err) {
++		put_device(hdev);
++		goto ida_remove;
++	}
+ 
+ 	INIT_LIST_HEAD(&hwdev->tzdata);
+ 
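Per the driver-core rules, once device_register() has been called the embedded kobject holds a reference even on failure, so the device must be released with put_device() (which runs the release callback) rather than freed directly; the new ida_remove label presumably only drops the ID allocation. The general pattern:

	err = device_register(hdev);
	if (err) {
		put_device(hdev);	/* release() frees the device */
		goto ida_remove;	/* undo only what register didn't own */
	}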
+diff --git a/drivers/hwmon/pmbus/lm25066.c b/drivers/hwmon/pmbus/lm25066.c
+index d209e0afc2caa..66d3e88b54172 100644
+--- a/drivers/hwmon/pmbus/lm25066.c
++++ b/drivers/hwmon/pmbus/lm25066.c
+@@ -51,26 +51,31 @@ struct __coeff {
+ #define PSC_CURRENT_IN_L	(PSC_NUM_CLASSES)
+ #define PSC_POWER_L		(PSC_NUM_CLASSES + 1)
+ 
+-static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
++static struct __coeff lm25066_coeff[][PSC_NUM_CLASSES + 2] = {
+ 	[lm25056] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 16296,
++			.b = 1343,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 13797,
++			.b = -1833,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 6726,
++			.b = -537,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 5501,
++			.b = -2908,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 26882,
++			.b = -5646,
+ 			.R = -4,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -82,26 +87,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm25066] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 22070,
++			.b = -1800,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 22070,
++			.b = -1800,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 13661,
++			.b = -5200,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 6852,
++			.b = -3100,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 736,
++			.b = -3300,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 369,
++			.b = -1900,
+ 			.R = -2,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -111,26 +122,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm5064] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 4611,
++			.b = -642,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 4621,
++			.b = 423,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 10742,
++			.b = 1552,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 5456,
++			.b = 2118,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 1204,
++			.b = 8524,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 612,
++			.b = 11202,
+ 			.R = -3,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+@@ -140,26 +157,32 @@ static struct __coeff lm25066_coeff[6][PSC_NUM_CLASSES + 2] = {
+ 	[lm5066] = {
+ 		[PSC_VOLTAGE_IN] = {
+ 			.m = 4587,
++			.b = -1200,
+ 			.R = -2,
+ 		},
+ 		[PSC_VOLTAGE_OUT] = {
+ 			.m = 4587,
++			.b = -2400,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN] = {
+ 			.m = 10753,
++			.b = -1200,
+ 			.R = -2,
+ 		},
+ 		[PSC_CURRENT_IN_L] = {
+ 			.m = 5405,
++			.b = -600,
+ 			.R = -2,
+ 		},
+ 		[PSC_POWER] = {
+ 			.m = 1204,
++			.b = -6000,
+ 			.R = -3,
+ 		},
+ 		[PSC_POWER_L] = {
+ 			.m = 605,
++			.b = -8000,
+ 			.R = -3,
+ 		},
+ 		[PSC_TEMPERATURE] = {
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index e2a3620cbf489..8988b2ed2ea6f 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -175,7 +175,7 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ 	coresight_disclaim_device_unlocked(csdev);
+ 	CS_LOCK(drvdata->base);
+ 	spin_unlock(&drvdata->spinlock);
+-	pm_runtime_put(dev);
++	pm_runtime_put(dev->parent);
+ 	return 0;
+ 
+ 	/* not disabled this call */
+diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
+index 1768684968797..7dddb85b90591 100644
+--- a/drivers/hwtracing/coresight/coresight-trbe.c
++++ b/drivers/hwtracing/coresight/coresight-trbe.c
+@@ -366,7 +366,7 @@ static unsigned long __trbe_normal_offset(struct perf_output_handle *handle)
+ 
+ static unsigned long trbe_normal_offset(struct perf_output_handle *handle)
+ {
+-	struct trbe_buf *buf = perf_get_aux(handle);
++	struct trbe_buf *buf = etm_perf_sink_config(handle);
+ 	u64 limit = __trbe_normal_offset(handle);
+ 	u64 head = PERF_IDX2OFF(handle->head, buf);
+ 
+@@ -869,6 +869,10 @@ static void arm_trbe_register_coresight_cpu(struct trbe_drvdata *drvdata, int cp
+ 	if (WARN_ON(trbe_csdev))
+ 		return;
+ 
++	/* If the TRBE was not probed on the CPU, we shouldn't be here */
++	if (WARN_ON(!cpudata->drvdata))
++		return;
++
+ 	dev = &cpudata->drvdata->pdev->dev;
+ 	desc.name = devm_kasprintf(dev, GFP_KERNEL, "trbe%d", cpu);
+ 	if (!desc.name)
+@@ -950,7 +954,9 @@ static int arm_trbe_probe_coresight(struct trbe_drvdata *drvdata)
+ 		return -ENOMEM;
+ 
+ 	for_each_cpu(cpu, &drvdata->supported_cpus) {
+-		smp_call_function_single(cpu, arm_trbe_probe_cpu, drvdata, 1);
++		/* If we fail to probe the CPU, let us defer it to hotplug callbacks */
++		if (smp_call_function_single(cpu, arm_trbe_probe_cpu, drvdata, 1))
++			continue;
+ 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
+ 			arm_trbe_register_coresight_cpu(drvdata, cpu);
+ 		if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 7d4b3eb7077ad..72acda59eb399 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -195,7 +195,7 @@ static const u16 mt_i2c_regs_v2[] = {
+ 	[OFFSET_CLOCK_DIV] = 0x48,
+ 	[OFFSET_SOFTRESET] = 0x50,
+ 	[OFFSET_SCL_MIS_COMP_POINT] = 0x90,
+-	[OFFSET_DEBUGSTAT] = 0xe0,
++	[OFFSET_DEBUGSTAT] = 0xe4,
+ 	[OFFSET_DEBUGCTRL] = 0xe8,
+ 	[OFFSET_FIFO_STAT] = 0xf4,
+ 	[OFFSET_FIFO_THRESH] = 0xf8,
+diff --git a/drivers/i2c/busses/i2c-xlr.c b/drivers/i2c/busses/i2c-xlr.c
+index 126d1393e548b..9ce20652d4942 100644
+--- a/drivers/i2c/busses/i2c-xlr.c
++++ b/drivers/i2c/busses/i2c-xlr.c
+@@ -431,11 +431,15 @@ static int xlr_i2c_probe(struct platform_device *pdev)
+ 	i2c_set_adapdata(&priv->adap, priv);
+ 	ret = i2c_add_numbered_adapter(&priv->adap);
+ 	if (ret < 0)
+-		return ret;
++		goto err_unprepare_clk;
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 	dev_info(&priv->adap.dev, "Added I2C Bus.\n");
+ 	return 0;
++
++err_unprepare_clk:
++	clk_unprepare(clk);
++	return ret;
+ }
+ 
+ static int xlr_i2c_remove(struct platform_device *pdev)
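xlr_i2c_probe() prepares a clock earlier in the function (not shown here), so a failure of i2c_add_numbered_adapter() must unwind it; the fix converts the early return into the usual goto ladder. Sketch of the shape, assuming the earlier clk_prepare() call:

	ret = clk_prepare(clk);
	if (ret)
		return ret;

	ret = i2c_add_numbered_adapter(&priv->adap);
	if (ret < 0)
		goto err_unprepare_clk;

	return 0;

err_unprepare_clk:
	clk_unprepare(clk);
	return ret;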
+diff --git a/drivers/iio/accel/st_accel_i2c.c b/drivers/iio/accel/st_accel_i2c.c
+index 95e305b88d5ed..02c823b93ecd4 100644
+--- a/drivers/iio/accel/st_accel_i2c.c
++++ b/drivers/iio/accel/st_accel_i2c.c
+@@ -194,10 +194,10 @@ static int st_accel_i2c_remove(struct i2c_client *client)
+ {
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_accel_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
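This and the matching st_*_i2c/st_*_spi hunks below all apply the same reordering: the regulators must stay powered until the common remove routine has finished unregistering the device, since it may still touch the hardware. Teardown mirrors probe in reverse, roughly (foo_* is illustrative):

	static int foo_i2c_remove(struct i2c_client *client)
	{
		struct iio_dev *indio_dev = i2c_get_clientdata(client);

		foo_common_remove(indio_dev);		/* stop all users of the hw */
		st_sensors_power_disable(indio_dev);	/* now safe to cut power */
		return 0;
	}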
+diff --git a/drivers/iio/accel/st_accel_spi.c b/drivers/iio/accel/st_accel_spi.c
+index 83d3308ce5ccc..386ae18d5f269 100644
+--- a/drivers/iio/accel/st_accel_spi.c
++++ b/drivers/iio/accel/st_accel_spi.c
+@@ -143,10 +143,10 @@ static int st_accel_spi_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_accel_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/adc/ti-tsc2046.c b/drivers/iio/adc/ti-tsc2046.c
+index 170950d5dd499..d84ae6b008c1b 100644
+--- a/drivers/iio/adc/ti-tsc2046.c
++++ b/drivers/iio/adc/ti-tsc2046.c
+@@ -398,7 +398,7 @@ static int tsc2046_adc_update_scan_mode(struct iio_dev *indio_dev,
+ 	priv->xfer.len = size;
+ 	priv->time_per_scan_us = size * 8 * priv->time_per_bit_ns / NSEC_PER_USEC;
+ 
+-	if (priv->scan_interval_us > priv->time_per_scan_us)
++	if (priv->scan_interval_us < priv->time_per_scan_us)
+ 		dev_warn(&priv->spi->dev, "The scan interval (%d) is less than the calculated scan time (%d)\n",
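(i2c_master_send() returns the number of bytes written or a negative errno, while callers of ad5622_write() expect 0 on success; the added checks translate a short write into -EIO and pass bus errors through unchanged.)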
+ 			 priv->scan_interval_us, priv->time_per_scan_us);
+ 
+diff --git a/drivers/iio/dac/ad5446.c b/drivers/iio/dac/ad5446.c
+index 488ec69967d67..e50718422411d 100644
+--- a/drivers/iio/dac/ad5446.c
++++ b/drivers/iio/dac/ad5446.c
+@@ -531,8 +531,15 @@ static int ad5622_write(struct ad5446_state *st, unsigned val)
+ {
+ 	struct i2c_client *client = to_i2c_client(st->dev);
+ 	__be16 data = cpu_to_be16(val);
++	int ret;
++
++	ret = i2c_master_send(client, (char *)&data, sizeof(data));
++	if (ret < 0)
++		return ret;
++	if (ret != sizeof(data))
++		return -EIO;
+ 
+-	return i2c_master_send(client, (char *)&data, sizeof(data));
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/iio/dac/ad5766.c b/drivers/iio/dac/ad5766.c
+index 3104ec32dfaca..dafda84fdea35 100644
+--- a/drivers/iio/dac/ad5766.c
++++ b/drivers/iio/dac/ad5766.c
+@@ -503,13 +503,13 @@ static int ad5766_get_output_range(struct ad5766_state *st)
+ 	int i, ret, min, max, tmp[2];
+ 
+ 	ret = device_property_read_u32_array(&st->spi->dev,
+-					     "output-range-voltage",
++					     "output-range-microvolts",
+ 					     tmp, 2);
+ 	if (ret)
+ 		return ret;
+ 
+-	min = tmp[0] / 1000;
+-	max = tmp[1] / 1000;
++	min = tmp[0] / 1000000;
++	max = tmp[1] / 1000000;
+ 	for (i = 0; i < ARRAY_SIZE(ad5766_span_tbl); i++) {
+ 		if (ad5766_span_tbl[i].min != min ||
+ 		    ad5766_span_tbl[i].max != max)
+diff --git a/drivers/iio/dac/ad5770r.c b/drivers/iio/dac/ad5770r.c
+index 8107f7bbbe3c5..7e2fd32e993a6 100644
+--- a/drivers/iio/dac/ad5770r.c
++++ b/drivers/iio/dac/ad5770r.c
+@@ -522,7 +522,7 @@ static int ad5770r_channel_config(struct ad5770r_state *st)
+ 		return -EINVAL;
+ 
+ 	device_for_each_child_node(&st->spi->dev, child) {
+-		ret = fwnode_property_read_u32(child, "num", &num);
++		ret = fwnode_property_read_u32(child, "reg", &num);
+ 		if (ret)
+ 			goto err_child_out;
+ 		if (num >= AD5770R_MAX_CHANNELS) {
+diff --git a/drivers/iio/gyro/st_gyro_i2c.c b/drivers/iio/gyro/st_gyro_i2c.c
+index a25cc0379e163..3ed5779779465 100644
+--- a/drivers/iio/gyro/st_gyro_i2c.c
++++ b/drivers/iio/gyro/st_gyro_i2c.c
+@@ -106,10 +106,10 @@ static int st_gyro_i2c_remove(struct i2c_client *client)
+ {
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_gyro_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/gyro/st_gyro_spi.c b/drivers/iio/gyro/st_gyro_spi.c
+index 18d6a2aeda45a..c04bcf2518c11 100644
+--- a/drivers/iio/gyro/st_gyro_spi.c
++++ b/drivers/iio/gyro/st_gyro_spi.c
+@@ -110,10 +110,10 @@ static int st_gyro_spi_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_gyro_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
+index b9a06ca29beec..d4e692b187cda 100644
+--- a/drivers/iio/imu/adis.c
++++ b/drivers/iio/imu/adis.c
+@@ -430,6 +430,8 @@ int __adis_initial_startup(struct adis *adis)
+ 	if (ret)
+ 		return ret;
+ 
++	adis_enable_irq(adis, false);
++
+ 	if (!adis->data->prod_id_reg)
+ 		return 0;
+ 
+@@ -526,7 +528,7 @@ int adis_init(struct adis *adis, struct iio_dev *indio_dev,
+ 		adis->current_page = 0;
+ 	}
+ 
+-	return adis_enable_irq(adis, false);
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(adis_init);
+ 
+diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
+index fdd623407b969..1dfd10831f379 100644
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -1311,6 +1311,11 @@ static struct attribute *iio_buffer_wrap_attr(struct iio_buffer *buffer,
+ 	iio_attr->buffer = buffer;
+ 	memcpy(&iio_attr->dev_attr, dattr, sizeof(iio_attr->dev_attr));
+ 	iio_attr->dev_attr.attr.name = kstrdup_const(attr->name, GFP_KERNEL);
++	if (!iio_attr->dev_attr.attr.name) {
++		kfree(iio_attr);
++		return NULL;
++	}
++
+ 	sysfs_attr_init(&iio_attr->dev_attr.attr);
+ 
+ 	list_add(&iio_attr->l, &buffer->buffer_attr_list);
+@@ -1361,10 +1366,10 @@ static int iio_buffer_register_legacy_sysfs_groups(struct iio_dev *indio_dev,
+ 
+ 	return 0;
+ 
+-error_free_buffer_attrs:
+-	kfree(iio_dev_opaque->legacy_buffer_group.attrs);
+ error_free_scan_el_attrs:
+ 	kfree(iio_dev_opaque->legacy_scan_el_group.attrs);
++error_free_buffer_attrs:
++	kfree(iio_dev_opaque->legacy_buffer_group.attrs);
+ 
+ 	return ret;
+ }
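Swapping the two error labels matters because cleanup labels fall through from top to bottom, so they must appear in the reverse order of the allocations they undo. The standard ladder, as a sketch with hypothetical names:

	a = alloc_a();
	if (!a)
		return -ENOMEM;

	b = alloc_b();
	if (!b) {
		ret = -ENOMEM;
		goto free_a;
	}

	c = alloc_c();
	if (!c) {
		ret = -ENOMEM;
		goto free_b;
	}

	return 0;

free_b:			/* labels run in reverse order of allocation */
	free_b_obj(b);
free_a:
	free_a_obj(a);
	return ret;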
+@@ -1530,6 +1535,7 @@ static int __iio_buffer_alloc_sysfs_and_mask(struct iio_buffer *buffer,
+ 		       sizeof(struct attribute *) * buffer_attrcount);
+ 
+ 	buffer_attrcount += ARRAY_SIZE(iio_buffer_attrs);
++	buffer->buffer_group.attrs = attr;
+ 
+ 	for (i = 0; i < buffer_attrcount; i++) {
+ 		struct attribute *wrapped;
+@@ -1537,7 +1543,7 @@ static int __iio_buffer_alloc_sysfs_and_mask(struct iio_buffer *buffer,
+ 		wrapped = iio_buffer_wrap_attr(buffer, attr[i]);
+ 		if (!wrapped) {
+ 			ret = -ENOMEM;
+-			goto error_free_scan_mask;
++			goto error_free_buffer_attrs;
+ 		}
+ 		attr[i] = wrapped;
+ 	}
+@@ -1552,8 +1558,6 @@ static int __iio_buffer_alloc_sysfs_and_mask(struct iio_buffer *buffer,
+ 		goto error_free_buffer_attrs;
+ 	}
+ 
+-	buffer->buffer_group.attrs = attr;
+-
+ 	ret = iio_device_register_sysfs_group(indio_dev, &buffer->buffer_group);
+ 	if (ret)
+ 		goto error_free_buffer_attr_group_name;
+@@ -1582,8 +1586,12 @@ error_cleanup_dynamic:
+ 	return ret;
+ }
+ 
+-static void __iio_buffer_free_sysfs_and_mask(struct iio_buffer *buffer)
++static void __iio_buffer_free_sysfs_and_mask(struct iio_buffer *buffer,
++					     struct iio_dev *indio_dev,
++					     int index)
+ {
++	if (index == 0)
++		iio_buffer_unregister_legacy_sysfs_groups(indio_dev);
+ 	bitmap_free(buffer->scan_mask);
+ 	kfree(buffer->buffer_group.name);
+ 	kfree(buffer->buffer_group.attrs);
+@@ -1615,7 +1623,7 @@ int iio_buffers_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
+ 		buffer = iio_dev_opaque->attached_buffers[i];
+ 		ret = __iio_buffer_alloc_sysfs_and_mask(buffer, indio_dev, i);
+ 		if (ret) {
+-			unwind_idx = i;
++			unwind_idx = i - 1;
+ 			goto error_unwind_sysfs_and_mask;
+ 		}
+ 	}
+@@ -1637,7 +1645,7 @@ int iio_buffers_alloc_sysfs_and_mask(struct iio_dev *indio_dev)
+ error_unwind_sysfs_and_mask:
+ 	for (; unwind_idx >= 0; unwind_idx--) {
+ 		buffer = iio_dev_opaque->attached_buffers[unwind_idx];
+-		__iio_buffer_free_sysfs_and_mask(buffer);
++		__iio_buffer_free_sysfs_and_mask(buffer, indio_dev, unwind_idx);
+ 	}
+ 	return ret;
+ }
+@@ -1654,11 +1662,9 @@ void iio_buffers_free_sysfs_and_mask(struct iio_dev *indio_dev)
+ 	iio_device_ioctl_handler_unregister(iio_dev_opaque->buffer_ioctl_handler);
+ 	kfree(iio_dev_opaque->buffer_ioctl_handler);
+ 
+-	iio_buffer_unregister_legacy_sysfs_groups(indio_dev);
+-
+ 	for (i = iio_dev_opaque->attached_buffers_cnt - 1; i >= 0; i--) {
+ 		buffer = iio_dev_opaque->attached_buffers[i];
+-		__iio_buffer_free_sysfs_and_mask(buffer);
++		__iio_buffer_free_sysfs_and_mask(buffer, indio_dev, i);
+ 	}
+ }
+ 
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index 6d2175eb7af25..cc3236bfff7fe 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1597,6 +1597,7 @@ static void iio_device_unregister_sysfs(struct iio_dev *indio_dev)
+ 	kfree(iio_dev_opaque->chan_attr_group.attrs);
+ 	iio_dev_opaque->chan_attr_group.attrs = NULL;
+ 	kfree(iio_dev_opaque->groups);
++	iio_dev_opaque->groups = NULL;
+ }
+ 
+ static void iio_dev_release(struct device *device)
+@@ -1661,7 +1662,13 @@ struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv)
+ 		kfree(iio_dev_opaque);
+ 		return NULL;
+ 	}
+-	dev_set_name(&indio_dev->dev, "iio:device%d", iio_dev_opaque->id);
++
++	if (dev_set_name(&indio_dev->dev, "iio:device%d", iio_dev_opaque->id)) {
++		ida_simple_remove(&iio_ida, iio_dev_opaque->id);
++		kfree(iio_dev_opaque);
++		return NULL;
++	}
++
+ 	INIT_LIST_HEAD(&iio_dev_opaque->buffer_list);
+ 	INIT_LIST_HEAD(&iio_dev_opaque->ioctl_handlers);
+ 
+diff --git a/drivers/iio/magnetometer/st_magn_i2c.c b/drivers/iio/magnetometer/st_magn_i2c.c
+index 3e23c117de8e1..6c93df6926a30 100644
+--- a/drivers/iio/magnetometer/st_magn_i2c.c
++++ b/drivers/iio/magnetometer/st_magn_i2c.c
+@@ -102,10 +102,10 @@ static int st_magn_i2c_remove(struct i2c_client *client)
+ {
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_magn_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/magnetometer/st_magn_spi.c b/drivers/iio/magnetometer/st_magn_spi.c
+index 03c0a737aba6e..4f52c3a21e3c2 100644
+--- a/drivers/iio/magnetometer/st_magn_spi.c
++++ b/drivers/iio/magnetometer/st_magn_spi.c
+@@ -96,10 +96,10 @@ static int st_magn_spi_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_magn_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/pressure/st_pressure_i2c.c b/drivers/iio/pressure/st_pressure_i2c.c
+index f0a5af314ceb8..8c26ff61e56ad 100644
+--- a/drivers/iio/pressure/st_pressure_i2c.c
++++ b/drivers/iio/pressure/st_pressure_i2c.c
+@@ -118,10 +118,10 @@ static int st_press_i2c_remove(struct i2c_client *client)
+ {
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_press_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iio/pressure/st_pressure_spi.c b/drivers/iio/pressure/st_pressure_spi.c
+index b48cf7d01cd74..51b3467bd724c 100644
+--- a/drivers/iio/pressure/st_pressure_spi.c
++++ b/drivers/iio/pressure/st_pressure_spi.c
+@@ -102,10 +102,10 @@ static int st_press_spi_remove(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
+ 
+-	st_sensors_power_disable(indio_dev);
+-
+ 	st_press_common_remove(indio_dev);
+ 
++	st_sensors_power_disable(indio_dev);
++
+ 	return 0;
+ }
+ 
+@@ -117,6 +117,10 @@ static const struct spi_device_id st_press_id_table[] = {
+ 	{ LPS33HW_PRESS_DEV_NAME },
+ 	{ LPS35HW_PRESS_DEV_NAME },
+ 	{ LPS22HH_PRESS_DEV_NAME },
++	{ "lps001wp-press" },
++	{ "lps25h-press", },
++	{ "lps331ap-press" },
++	{ "lps22hb-press" },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(spi, st_press_id_table);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 8c8ca7bce3caf..24dd550ceb119 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -837,11 +837,8 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ 		new_mr->device = new_pd->device;
+ 		new_mr->pd = new_pd;
+ 		new_mr->type = IB_MR_TYPE_USER;
+-		new_mr->dm = NULL;
+-		new_mr->sig_attrs = NULL;
+ 		new_mr->uobject = uobj;
+ 		atomic_inc(&new_pd->usecnt);
+-		new_mr->iova = cmd.hca_va;
+ 		new_uobj->object = new_mr;
+ 
+ 		rdma_restrack_new(&new_mr->res, RDMA_RESTRACK_MR);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index d4d4959c2434c..bd153aa7e9ab3 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -707,12 +707,13 @@ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
+ 	int rc = 0;
+ 
+ 	RCFW_CMD_PREP(req, QUERY_SRQ, cmd_flags);
+-	req.srq_cid = cpu_to_le32(srq->id);
+ 
+ 	/* Configure the request */
+ 	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
+ 	if (!sbuf)
+ 		return -ENOMEM;
++	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
++	req.srq_cid = cpu_to_le32(srq->id);
+ 	sb = sbuf->sb;
+ 	rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void *)&resp,
+ 					  (void *)sbuf, 0);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 0ccb0c453f6a2..e2547e8b4d21c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -3335,7 +3335,7 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
+ 	memset(cq_context, 0, sizeof(*cq_context));
+ 
+ 	hr_reg_write(cq_context, CQC_CQ_ST, V2_CQ_STATE_VALID);
+-	hr_reg_write(cq_context, CQC_ARM_ST, REG_NXT_CEQE);
++	hr_reg_write(cq_context, CQC_ARM_ST, NO_ARMED);
+ 	hr_reg_write(cq_context, CQC_SHIFT, ilog2(hr_cq->cq_depth));
+ 	hr_reg_write(cq_context, CQC_CEQN, hr_cq->vector);
+ 	hr_reg_write(cq_context, CQC_CQN, hr_cq->cqn);
+@@ -4413,8 +4413,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	mtu = ib_mtu_enum_to_int(ib_mtu);
+ 	if (WARN_ON(mtu <= 0))
+ 		return -EINVAL;
+-#define MAX_LP_MSG_LEN 65536
+-	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
++#define MAX_LP_MSG_LEN 16384
++	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 16KB */
+ 	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
+ 	if (WARN_ON(lp_pktn_ini >= 0xF))
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 4a2ef7daadeda..a8b22c08af5d3 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -1099,8 +1099,10 @@ static int create_qp_common(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
+ 			if (dev->steering_support ==
+ 			    MLX4_STEERING_MODE_DEVICE_MANAGED)
+ 				qp->flags |= MLX4_IB_QP_NETIF;
+-			else
++			else {
++				err = -EINVAL;
+ 				goto err;
++			}
+ 		}
+ 
+ 		err = set_kernel_sq_size(dev, &init_attr->cap, qp_type, qp);
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index fdc47ef7d861f..5a02f0fbc27ec 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -2758,15 +2758,18 @@ int qedr_query_qp(struct ib_qp *ibqp,
+ 	int rc = 0;
+ 
+ 	memset(&params, 0, sizeof(params));
+-
+-	rc = dev->ops->rdma_query_qp(dev->rdma_ctx, qp->qed_qp, &params);
+-	if (rc)
+-		goto err;
+-
+ 	memset(qp_attr, 0, sizeof(*qp_attr));
+ 	memset(qp_init_attr, 0, sizeof(*qp_init_attr));
+ 
+-	qp_attr->qp_state = qedr_get_ibqp_state(params.state);
++	if (qp->qp_type != IB_QPT_GSI) {
++		rc = dev->ops->rdma_query_qp(dev->rdma_ctx, qp->qed_qp, &params);
++		if (rc)
++			goto err;
++		qp_attr->qp_state = qedr_get_ibqp_state(params.state);
++	} else {
++		qp_attr->qp_state = qedr_get_ibqp_state(QED_ROCE_QP_STATE_RTS);
++	}
++
+ 	qp_attr->cur_qp_state = qedr_get_ibqp_state(params.state);
+ 	qp_attr->path_mtu = ib_mtu_int_to_enum(params.mtu);
+ 	qp_attr->path_mig_state = IB_MIG_MIGRATED;
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index 742e6ec93686c..b5a70cbe94aac 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -113,7 +113,7 @@ enum rxe_device_param {
+ /* default/initial rxe port parameters */
+ enum rxe_port_param {
+ 	RXE_PORT_GID_TBL_LEN		= 1024,
+-	RXE_PORT_PORT_CAP_FLAGS		= RDMA_CORE_CAP_PROT_ROCE_UDP_ENCAP,
++	RXE_PORT_PORT_CAP_FLAGS		= IB_PORT_CM_SUP,
+ 	RXE_PORT_MAX_MSG_SZ		= 0x800000,
+ 	RXE_PORT_BAD_PKEY_CNTR		= 0,
+ 	RXE_PORT_QKEY_VIOL_CNTR		= 0,
+diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
+index 6c554c11a7ac3..ea58805c480fa 100644
+--- a/drivers/input/joystick/iforce/iforce-usb.c
++++ b/drivers/input/joystick/iforce/iforce-usb.c
+@@ -92,7 +92,7 @@ static int iforce_usb_get_id(struct iforce *iforce, u8 id,
+ 				 id,
+ 				 USB_TYPE_VENDOR | USB_DIR_IN |
+ 					USB_RECIP_INTERFACE,
+-				 0, 0, buf, IFORCE_MAX_LENGTH, HZ);
++				 0, 0, buf, IFORCE_MAX_LENGTH, 1000);
+ 	if (status < 0) {
+ 		dev_err(&iforce_usb->intf->dev,
+ 			"usb_submit_urb failed: %d\n", status);
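The iforce change corrects a units mix-up: usb_control_msg() takes its timeout in milliseconds, whereas HZ is the kernel tick rate (commonly 100, 250, or 1000 depending on configuration), so passing HZ gave a configuration-dependent timeout; 1000 makes it a fixed one second.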
+diff --git a/drivers/input/misc/ariel-pwrbutton.c b/drivers/input/misc/ariel-pwrbutton.c
+index 17bbaac8b80c8..cdc80715b5fd6 100644
+--- a/drivers/input/misc/ariel-pwrbutton.c
++++ b/drivers/input/misc/ariel-pwrbutton.c
+@@ -149,12 +149,19 @@ static const struct of_device_id ariel_pwrbutton_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ariel_pwrbutton_of_match);
+ 
++static const struct spi_device_id ariel_pwrbutton_spi_ids[] = {
++	{ .name = "wyse-ariel-ec-input" },
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, ariel_pwrbutton_spi_ids);
++
+ static struct spi_driver ariel_pwrbutton_driver = {
+ 	.driver = {
+ 		.name = "dell-wyse-ariel-ec-input",
+ 		.of_match_table = ariel_pwrbutton_of_match,
+ 	},
+ 	.probe = ariel_pwrbutton_probe,
++	.id_table = ariel_pwrbutton_spi_ids,
+ };
+ module_spi_driver(ariel_pwrbutton_driver);
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index 2d0bc029619ff..956d9cd347964 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -517,6 +517,19 @@ static void elantech_report_trackpoint(struct psmouse *psmouse,
+ 	case 0x16008020U:
+ 	case 0x26800010U:
+ 	case 0x36808000U:
++
++		/*
++		 * This firmware occasionally misreports trackpoint
++		 * coordinates. Discard packets outside of [-127, 127] range
++		 * to prevent cursor jumps.
++		 */
++		if (packet[4] == 0x80 || packet[5] == 0x80 ||
++		    packet[1] >> 7 == packet[4] >> 7 ||
++		    packet[2] >> 7 == packet[5] >> 7) {
++			elantech_debug("discarding packet [%6ph]\n", packet);
++			break;
++		}
+ 		x = packet[4] - (int)((packet[1]^0x80) << 1);
+ 		y = (int)((packet[2]^0x80) << 1) - packet[5];
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index a5a0035536462..aedd055410443 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -272,6 +272,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"),
+ 		},
+ 	},
++	{
++		/* Fujitsu Lifebook T725 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++		},
++	},
+ 	{
+ 		/* Fujitsu Lifebook U745 */
+ 		.matches = {
+@@ -840,6 +847,13 @@ static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"),
+ 		},
+ 	},
++	{
++		/* Fujitsu Lifebook T725 laptop */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK T725"),
++		},
++	},
+ 	{
+ 		/* Fujitsu U574 laptop */
+ 		/* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */
+diff --git a/drivers/input/touchscreen/st1232.c b/drivers/input/touchscreen/st1232.c
+index 6abae665ca71d..9d1dea6996a22 100644
+--- a/drivers/input/touchscreen/st1232.c
++++ b/drivers/input/touchscreen/st1232.c
+@@ -92,7 +92,7 @@ static int st1232_ts_wait_ready(struct st1232_ts_data *ts)
+ 	unsigned int retries;
+ 	int error;
+ 
+-	for (retries = 10; retries; retries--) {
++	for (retries = 100; retries; retries--) {
+ 		error = st1232_ts_read_data(ts, REG_STATUS, 1);
+ 		if (!error) {
+ 			switch (ts->read_buf[0]) {
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 6f0df629353fd..f27f8d20fe68b 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -576,6 +576,9 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
+ 		memset(padding_start, 0, padding_size);
+ 	}
+ 
++	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
++		arch_sync_dma_for_device(phys, org_size, dir);
++
+ 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+ 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+ 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
+@@ -850,14 +853,9 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
+ {
+ 	phys_addr_t phys = page_to_phys(page) + offset;
+ 	bool coherent = dev_is_dma_coherent(dev);
+-	dma_addr_t dma_handle;
+ 
+-	dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
++	return __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
+ 			coherent, dir, attrs);
+-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+-	    dma_handle != DMA_MAPPING_ERROR)
+-		arch_sync_dma_for_device(phys, size, dir);
+-	return dma_handle;
+ }
+ 
+ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+@@ -1000,12 +998,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 	    iommu_deferred_attach(dev, domain))
+ 		return 0;
+ 
+-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+-		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
+-
+ 	if (dev_is_untrusted(dev))
+ 		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
+ 
++	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
++		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
++
+ 	/*
+ 	 * Work out how much IOVA space we need, and align the segments to
+ 	 * IOVA granules for the IOMMU driver to handle. With some clever
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 6f7c69688ce2a..366a9c05642fa 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -561,7 +561,9 @@ static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
+ 	phys_addr_t pa;
+ 
+ 	pa = dom->iop->iova_to_phys(dom->iop, iova);
+-	if (dom->data->enable_4GB && pa >= MTK_IOMMU_4GB_MODE_REMAP_BASE)
++	if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT) &&
++	    dom->data->enable_4GB &&
++	    pa >= MTK_IOMMU_4GB_MODE_REMAP_BASE)
+ 		pa &= ~BIT_ULL(32);
+ 
+ 	return pa;
+diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c
+index e3483789f4df3..1bd0621c4ce2a 100644
+--- a/drivers/irqchip/irq-bcm6345-l1.c
++++ b/drivers/irqchip/irq-bcm6345-l1.c
+@@ -140,7 +140,7 @@ static void bcm6345_l1_irq_handle(struct irq_desc *desc)
+ 		for_each_set_bit(hwirq, &pending, IRQS_PER_WORD) {
+ 			irq = irq_linear_revmap(intc->domain, base + hwirq);
+ 			if (irq)
+-				do_IRQ(irq);
++				generic_handle_irq(irq);
+ 			else
+ 				spurious_interrupt();
+ 		}
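do_IRQ() is meant for the architecture's top-level interrupt entry and performs its own irq_enter()/irq_exit(); inside a chained handler like this one the kernel is already in interrupt context, so generic_handle_irq() is the appropriate way to dispatch the cascaded interrupt.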
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index cf74cfa820453..259065d271ef0 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -163,7 +163,13 @@ static void plic_irq_eoi(struct irq_data *d)
+ {
+ 	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
+ 
+-	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	if (irqd_irq_masked(d)) {
++		plic_irq_unmask(d);
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++		plic_irq_mask(d);
++	} else {
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	}
+ }
+ 
+ static struct irq_chip plic_chip = {
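On the PLIC, an interrupt is completed by writing its ID back to the claim/complete register, but that write is silently ignored when the source is not currently enabled for the hart. Briefly unmasking around the claim write therefore lets the EOI take effect even for an interrupt that was masked while in flight.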
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index e501cb03f211d..bd087cca1c1d2 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -1994,14 +1994,14 @@ setup_hw(struct hfc_pci *hc)
+ 	pci_set_master(hc->pdev);
+ 	if (!hc->irq) {
+ 		printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
+-		return 1;
++		return -EINVAL;
+ 	}
+ 	hc->hw.pci_io =
+ 		(char __iomem *)(unsigned long)hc->pdev->resource[1].start;
+ 
+ 	if (!hc->hw.pci_io) {
+ 		printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 	/* Allocate memory for FIFOS */
+ 	/* the memory needs to be on a 32k boundary within the first 4G */
+@@ -2012,7 +2012,7 @@ setup_hw(struct hfc_pci *hc)
+ 	if (!buffer) {
+ 		printk(KERN_WARNING
+ 		       "HFC-PCI: Error allocating memory for FIFO!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 	hc->hw.fifos = buffer;
+ 	pci_write_config_dword(hc->pdev, 0x80, hc->hw.dmahandle);
+@@ -2022,7 +2022,7 @@ setup_hw(struct hfc_pci *hc)
+ 		       "HFC-PCI: Error in ioremap for PCI!\n");
+ 		dma_free_coherent(&hc->pdev->dev, 0x8000, hc->hw.fifos,
+ 				  hc->hw.dmahandle);
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	printk(KERN_INFO
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 4f907e8f3894b..5e2796db026d2 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -186,7 +186,6 @@ static void cmdq_task_exec_done(struct cmdq_task *task, int sta)
+ 	struct cmdq_task_cb *cb = &task->pkt->async_cb;
+ 	struct cmdq_cb_data data;
+ 
+-	WARN_ON(cb->cb == (cmdq_async_flush_cb)NULL);
+ 	data.sta = sta;
+ 	data.data = cb->data;
+ 	data.pkt = task->pkt;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 6c0c3d0d905aa..e89eb467f1429 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2976,7 +2976,11 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 	 *  -write_error - clears WriteErrorSeen
+ 	 *  {,-}failfast - set/clear FailFast
+ 	 */
++
++	struct mddev *mddev = rdev->mddev;
+ 	int err = -EINVAL;
++	bool need_update_sb = false;
++
+ 	if (cmd_match(buf, "faulty") && rdev->mddev->pers) {
+ 		md_error(rdev->mddev, rdev);
+ 		if (test_bit(Faulty, &rdev->flags))
+@@ -2991,7 +2995,6 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		if (rdev->raid_disk >= 0)
+ 			err = -EBUSY;
+ 		else {
+-			struct mddev *mddev = rdev->mddev;
+ 			err = 0;
+ 			if (mddev_is_clustered(mddev))
+ 				err = md_cluster_ops->remove_disk(mddev, rdev);
+@@ -3008,10 +3011,12 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 	} else if (cmd_match(buf, "writemostly")) {
+ 		set_bit(WriteMostly, &rdev->flags);
+ 		mddev_create_serial_pool(rdev->mddev, rdev, false);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-writemostly")) {
+ 		mddev_destroy_serial_pool(rdev->mddev, rdev, false);
+ 		clear_bit(WriteMostly, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "blocked")) {
+ 		set_bit(Blocked, &rdev->flags);
+@@ -3037,9 +3042,11 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		err = 0;
+ 	} else if (cmd_match(buf, "failfast")) {
+ 		set_bit(FailFast, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-failfast")) {
+ 		clear_bit(FailFast, &rdev->flags);
++		need_update_sb = true;
+ 		err = 0;
+ 	} else if (cmd_match(buf, "-insync") && rdev->raid_disk >= 0 &&
+ 		   !test_bit(Journal, &rdev->flags)) {
+@@ -3118,6 +3125,8 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 		clear_bit(ExternalBbl, &rdev->flags);
+ 		err = 0;
+ 	}
++	if (need_update_sb)
++		md_update_sb(mddev, 1);
+ 	if (!err)
+ 		sysfs_notify_dirent_safe(rdev->sysfs_state);
+ 	return err ? err : len;
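Flags such as WriteMostly and FailFast are recorded in the on-disk superblock, so toggling them through sysfs must be followed by a superblock write or the change silently disappears on the next array assembly; the need_update_sb bookkeeping above arranges exactly that via md_update_sb().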
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 19598bd38939d..6ba12f0f0f036 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1496,7 +1496,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 		if (!r1_bio->bios[i])
+ 			continue;
+ 
+-		if (first_clone) {
++		if (first_clone && test_bit(WriteMostly, &rdev->flags)) {
+ 			/* do behind I/O ?
+ 			 * Not if there are too many, or cannot
+ 			 * allocate memory, or a reader on WriteMostly
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 508ac295eb06e..033b0c83272fe 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -68,13 +68,13 @@ module_param(debug, int, 0644);
+ 	err;								\
+ })
+ 
+-#define call_ptr_memop(vb, op, args...)					\
++#define call_ptr_memop(op, vb, args...)					\
+ ({									\
+ 	struct vb2_queue *_q = (vb)->vb2_queue;				\
+ 	void *ptr;							\
+ 									\
+ 	log_memop(vb, op);						\
+-	ptr = _q->mem_ops->op ? _q->mem_ops->op(args) : NULL;		\
++	ptr = _q->mem_ops->op ? _q->mem_ops->op(vb, args) : NULL;	\
+ 	if (!IS_ERR_OR_NULL(ptr))					\
+ 		(vb)->cnt_mem_ ## op++;					\
+ 	ptr;								\
+@@ -144,9 +144,9 @@ module_param(debug, int, 0644);
+ 	((vb)->vb2_queue->mem_ops->op ?					\
+ 		(vb)->vb2_queue->mem_ops->op(args) : 0)
+ 
+-#define call_ptr_memop(vb, op, args...)					\
++#define call_ptr_memop(op, vb, args...)					\
+ 	((vb)->vb2_queue->mem_ops->op ?					\
+-		(vb)->vb2_queue->mem_ops->op(args) : NULL)
++		(vb)->vb2_queue->mem_ops->op(vb, args) : NULL)
+ 
+ #define call_void_memop(vb, op, args...)				\
+ 	do {								\
+@@ -230,9 +230,10 @@ static int __vb2_buf_mem_alloc(struct vb2_buffer *vb)
+ 		if (size < vb->planes[plane].length)
+ 			goto free;
+ 
+-		mem_priv = call_ptr_memop(vb, alloc,
+-				q->alloc_devs[plane] ? : q->dev,
+-				q->dma_attrs, size, q->dma_dir, q->gfp_flags);
++		mem_priv = call_ptr_memop(alloc,
++					  vb,
++					  q->alloc_devs[plane] ? : q->dev,
++					  size);
+ 		if (IS_ERR_OR_NULL(mem_priv)) {
+ 			if (mem_priv)
+ 				ret = PTR_ERR(mem_priv);
+@@ -975,7 +976,7 @@ void *vb2_plane_vaddr(struct vb2_buffer *vb, unsigned int plane_no)
+ 	if (plane_no >= vb->num_planes || !vb->planes[plane_no].mem_priv)
+ 		return NULL;
+ 
+-	return call_ptr_memop(vb, vaddr, vb->planes[plane_no].mem_priv);
++	return call_ptr_memop(vaddr, vb, vb->planes[plane_no].mem_priv);
+ 
+ }
+ EXPORT_SYMBOL_GPL(vb2_plane_vaddr);
+@@ -985,7 +986,7 @@ void *vb2_plane_cookie(struct vb2_buffer *vb, unsigned int plane_no)
+ 	if (plane_no >= vb->num_planes || !vb->planes[plane_no].mem_priv)
+ 		return NULL;
+ 
+-	return call_ptr_memop(vb, cookie, vb->planes[plane_no].mem_priv);
++	return call_ptr_memop(cookie, vb, vb->planes[plane_no].mem_priv);
+ }
+ EXPORT_SYMBOL_GPL(vb2_plane_cookie);
+ 
+@@ -1125,10 +1126,11 @@ static int __prepare_userptr(struct vb2_buffer *vb)
+ 		vb->planes[plane].data_offset = 0;
+ 
+ 		/* Acquire each plane's memory */
+-		mem_priv = call_ptr_memop(vb, get_userptr,
+-				q->alloc_devs[plane] ? : q->dev,
+-				planes[plane].m.userptr,
+-				planes[plane].length, q->dma_dir);
++		mem_priv = call_ptr_memop(get_userptr,
++					  vb,
++					  q->alloc_devs[plane] ? : q->dev,
++					  planes[plane].m.userptr,
++					  planes[plane].length);
+ 		if (IS_ERR(mem_priv)) {
+ 			dprintk(q, 1, "failed acquiring userspace memory for plane %d\n",
+ 				plane);
+@@ -1249,9 +1251,11 @@ static int __prepare_dmabuf(struct vb2_buffer *vb)
+ 		vb->planes[plane].data_offset = 0;
+ 
+ 		/* Acquire each plane's memory */
+-		mem_priv = call_ptr_memop(vb, attach_dmabuf,
+-				q->alloc_devs[plane] ? : q->dev,
+-				dbuf, planes[plane].length, q->dma_dir);
++		mem_priv = call_ptr_memop(attach_dmabuf,
++					  vb,
++					  q->alloc_devs[plane] ? : q->dev,
++					  dbuf,
++					  planes[plane].length);
+ 		if (IS_ERR(mem_priv)) {
+ 			dprintk(q, 1, "failed to attach dmabuf\n");
+ 			ret = PTR_ERR(mem_priv);
+@@ -2187,8 +2191,10 @@ int vb2_core_expbuf(struct vb2_queue *q, int *fd, unsigned int type,
+ 
+ 	vb_plane = &vb->planes[plane];
+ 
+-	dbuf = call_ptr_memop(vb, get_dmabuf, vb_plane->mem_priv,
+-				flags & O_ACCMODE);
++	dbuf = call_ptr_memop(get_dmabuf,
++			      vb,
++			      vb_plane->mem_priv,
++			      flags & O_ACCMODE);
+ 	if (IS_ERR_OR_NULL(dbuf)) {
+ 		dprintk(q, 1, "failed to export buffer %d, plane %d\n",
+ 			index, plane);
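The videobuf2 refactor threads the vb2_buffer into every mem_ops callback: call_ptr_memop() now passes vb as the first argument, and, as the dma-contig and dma-sg hunks below show, the allocators read dma_attrs, dma_dir and gfp_flags from vb->vb2_queue themselves instead of receiving them as separate parameters.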
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+index a7f61ba854405..be376f3011b68 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+@@ -40,6 +40,8 @@ struct vb2_dc_buf {
+ 
+ 	/* DMABUF related */
+ 	struct dma_buf_attachment	*db_attach;
++
++	struct vb2_buffer		*vb;
+ };
+ 
+ /*********************************************/
+@@ -66,14 +68,14 @@ static unsigned long vb2_dc_get_contiguous_size(struct sg_table *sgt)
+ /*         callbacks for all buffers         */
+ /*********************************************/
+ 
+-static void *vb2_dc_cookie(void *buf_priv)
++static void *vb2_dc_cookie(struct vb2_buffer *vb, void *buf_priv)
+ {
+ 	struct vb2_dc_buf *buf = buf_priv;
+ 
+ 	return &buf->dma_addr;
+ }
+ 
+-static void *vb2_dc_vaddr(void *buf_priv)
++static void *vb2_dc_vaddr(struct vb2_buffer *vb, void *buf_priv)
+ {
+ 	struct vb2_dc_buf *buf = buf_priv;
+ 	struct dma_buf_map map;
+@@ -137,9 +139,9 @@ static void vb2_dc_put(void *buf_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_dc_alloc(struct device *dev, unsigned long attrs,
+-			  unsigned long size, enum dma_data_direction dma_dir,
+-			  gfp_t gfp_flags)
++static void *vb2_dc_alloc(struct vb2_buffer *vb,
++			  struct device *dev,
++			  unsigned long size)
+ {
+ 	struct vb2_dc_buf *buf;
+ 
+@@ -150,9 +152,10 @@ static void *vb2_dc_alloc(struct device *dev, unsigned long attrs,
+ 	if (!buf)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	buf->attrs = attrs;
++	buf->attrs = vb->vb2_queue->dma_attrs;
+ 	buf->cookie = dma_alloc_attrs(dev, size, &buf->dma_addr,
+-					GFP_KERNEL | gfp_flags, buf->attrs);
++				      GFP_KERNEL | vb->vb2_queue->gfp_flags,
++				      buf->attrs);
+ 	if (!buf->cookie) {
+ 		dev_err(dev, "dma_alloc_coherent of size %ld failed\n", size);
+ 		kfree(buf);
+@@ -165,11 +168,12 @@ static void *vb2_dc_alloc(struct device *dev, unsigned long attrs,
+ 	/* Prevent the device from being released while the buffer is used */
+ 	buf->dev = get_device(dev);
+ 	buf->size = size;
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 
+ 	buf->handler.refcount = &buf->refcount;
+ 	buf->handler.put = vb2_dc_put;
+ 	buf->handler.arg = buf;
++	buf->vb = vb;
+ 
+ 	refcount_set(&buf->refcount, 1);
+ 
+@@ -397,7 +401,9 @@ static struct sg_table *vb2_dc_get_base_sgt(struct vb2_dc_buf *buf)
+ 	return sgt;
+ }
+ 
+-static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv, unsigned long flags)
++static struct dma_buf *vb2_dc_get_dmabuf(struct vb2_buffer *vb,
++					 void *buf_priv,
++					 unsigned long flags)
+ {
+ 	struct vb2_dc_buf *buf = buf_priv;
+ 	struct dma_buf *dbuf;
+@@ -459,8 +465,8 @@ static void vb2_dc_put_userptr(void *buf_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
+-	unsigned long size, enum dma_data_direction dma_dir)
++static void *vb2_dc_get_userptr(struct vb2_buffer *vb, struct device *dev,
++				unsigned long vaddr, unsigned long size)
+ {
+ 	struct vb2_dc_buf *buf;
+ 	struct frame_vector *vec;
+@@ -490,7 +496,8 @@ static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	buf->dev = dev;
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
++	buf->vb = vb;
+ 
+ 	offset = lower_32_bits(offset_in_page(vaddr));
+ 	vec = vb2_create_framevec(vaddr, size);
+@@ -660,8 +667,8 @@ static void vb2_dc_detach_dmabuf(void *mem_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_dc_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+-	unsigned long size, enum dma_data_direction dma_dir)
++static void *vb2_dc_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
++				  struct dma_buf *dbuf, unsigned long size)
+ {
+ 	struct vb2_dc_buf *buf;
+ 	struct dma_buf_attachment *dba;
+@@ -677,6 +684,8 @@ static void *vb2_dc_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	buf->dev = dev;
++	buf->vb = vb;
++
+ 	/* create attachment for the dmabuf with the user device */
+ 	dba = dma_buf_attach(dbuf, buf->dev);
+ 	if (IS_ERR(dba)) {
+@@ -685,7 +694,7 @@ static void *vb2_dc_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+ 		return dba;
+ 	}
+ 
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->size = size;
+ 	buf->db_attach = dba;
+ 
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+index c5b06a5095661..0d6389dd9b0c6 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+@@ -51,6 +51,8 @@ struct vb2_dma_sg_buf {
+ 	struct vb2_vmarea_handler	handler;
+ 
+ 	struct dma_buf_attachment	*db_attach;
++
++	struct vb2_buffer		*vb;
+ };
+ 
+ static void vb2_dma_sg_put(void *buf_priv);
+@@ -96,9 +98,8 @@ static int vb2_dma_sg_alloc_compacted(struct vb2_dma_sg_buf *buf,
+ 	return 0;
+ }
+ 
+-static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,
+-			      unsigned long size, enum dma_data_direction dma_dir,
+-			      gfp_t gfp_flags)
++static void *vb2_dma_sg_alloc(struct vb2_buffer *vb, struct device *dev,
++			      unsigned long size)
+ {
+ 	struct vb2_dma_sg_buf *buf;
+ 	struct sg_table *sgt;
+@@ -113,7 +114,7 @@ static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	buf->vaddr = NULL;
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->offset = 0;
+ 	buf->size = size;
+ 	/* size is already page aligned */
+@@ -130,7 +131,7 @@ static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,
+ 	if (!buf->pages)
+ 		goto fail_pages_array_alloc;
+ 
+-	ret = vb2_dma_sg_alloc_compacted(buf, gfp_flags);
++	ret = vb2_dma_sg_alloc_compacted(buf, vb->vb2_queue->gfp_flags);
+ 	if (ret)
+ 		goto fail_pages_alloc;
+ 
+@@ -154,6 +155,7 @@ static void *vb2_dma_sg_alloc(struct device *dev, unsigned long dma_attrs,
+ 	buf->handler.refcount = &buf->refcount;
+ 	buf->handler.put = vb2_dma_sg_put;
+ 	buf->handler.arg = buf;
++	buf->vb = vb;
+ 
+ 	refcount_set(&buf->refcount, 1);
+ 
+@@ -213,9 +215,8 @@ static void vb2_dma_sg_finish(void *buf_priv)
+ 	dma_sync_sgtable_for_cpu(buf->dev, sgt, buf->dma_dir);
+ }
+ 
+-static void *vb2_dma_sg_get_userptr(struct device *dev, unsigned long vaddr,
+-				    unsigned long size,
+-				    enum dma_data_direction dma_dir)
++static void *vb2_dma_sg_get_userptr(struct vb2_buffer *vb, struct device *dev,
++				    unsigned long vaddr, unsigned long size)
+ {
+ 	struct vb2_dma_sg_buf *buf;
+ 	struct sg_table *sgt;
+@@ -230,10 +231,11 @@ static void *vb2_dma_sg_get_userptr(struct device *dev, unsigned long vaddr,
+ 
+ 	buf->vaddr = NULL;
+ 	buf->dev = dev;
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->offset = vaddr & ~PAGE_MASK;
+ 	buf->size = size;
+ 	buf->dma_sgt = &buf->sg_table;
++	buf->vb = vb;
+ 	vec = vb2_create_framevec(vaddr, size);
+ 	if (IS_ERR(vec))
+ 		goto userptr_fail_pfnvec;
+@@ -292,7 +294,7 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_dma_sg_vaddr(void *buf_priv)
++static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv)
+ {
+ 	struct vb2_dma_sg_buf *buf = buf_priv;
+ 	struct dma_buf_map map;
+@@ -511,7 +513,9 @@ static const struct dma_buf_ops vb2_dma_sg_dmabuf_ops = {
+ 	.release = vb2_dma_sg_dmabuf_ops_release,
+ };
+ 
+-static struct dma_buf *vb2_dma_sg_get_dmabuf(void *buf_priv, unsigned long flags)
++static struct dma_buf *vb2_dma_sg_get_dmabuf(struct vb2_buffer *vb,
++					     void *buf_priv,
++					     unsigned long flags)
+ {
+ 	struct vb2_dma_sg_buf *buf = buf_priv;
+ 	struct dma_buf *dbuf;
+@@ -605,8 +609,8 @@ static void vb2_dma_sg_detach_dmabuf(void *mem_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_dma_sg_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+-	unsigned long size, enum dma_data_direction dma_dir)
++static void *vb2_dma_sg_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
++				      struct dma_buf *dbuf, unsigned long size)
+ {
+ 	struct vb2_dma_sg_buf *buf;
+ 	struct dma_buf_attachment *dba;
+@@ -630,14 +634,15 @@ static void *vb2_dma_sg_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+ 		return dba;
+ 	}
+ 
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->size = size;
+ 	buf->db_attach = dba;
++	buf->vb = vb;
+ 
+ 	return buf;
+ }
+ 
+-static void *vb2_dma_sg_cookie(void *buf_priv)
++static void *vb2_dma_sg_cookie(struct vb2_buffer *vb, void *buf_priv)
+ {
+ 	struct vb2_dma_sg_buf *buf = buf_priv;
+ 
+diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+index 83f95258ec8c6..ef36abd912dcc 100644
+--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
++++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+@@ -34,13 +34,12 @@ struct vb2_vmalloc_buf {
+ 
+ static void vb2_vmalloc_put(void *buf_priv);
+ 
+-static void *vb2_vmalloc_alloc(struct device *dev, unsigned long attrs,
+-			       unsigned long size, enum dma_data_direction dma_dir,
+-			       gfp_t gfp_flags)
++static void *vb2_vmalloc_alloc(struct vb2_buffer *vb, struct device *dev,
++			       unsigned long size)
+ {
+ 	struct vb2_vmalloc_buf *buf;
+ 
+-	buf = kzalloc(sizeof(*buf), GFP_KERNEL | gfp_flags);
++	buf = kzalloc(sizeof(*buf), GFP_KERNEL | vb->vb2_queue->gfp_flags);
+ 	if (!buf)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -52,7 +51,7 @@ static void *vb2_vmalloc_alloc(struct device *dev, unsigned long attrs,
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->handler.refcount = &buf->refcount;
+ 	buf->handler.put = vb2_vmalloc_put;
+ 	buf->handler.arg = buf;
+@@ -71,9 +70,8 @@ static void vb2_vmalloc_put(void *buf_priv)
+ 	}
+ }
+ 
+-static void *vb2_vmalloc_get_userptr(struct device *dev, unsigned long vaddr,
+-				     unsigned long size,
+-				     enum dma_data_direction dma_dir)
++static void *vb2_vmalloc_get_userptr(struct vb2_buffer *vb, struct device *dev,
++				     unsigned long vaddr, unsigned long size)
+ {
+ 	struct vb2_vmalloc_buf *buf;
+ 	struct frame_vector *vec;
+@@ -84,7 +82,7 @@ static void *vb2_vmalloc_get_userptr(struct device *dev, unsigned long vaddr,
+ 	if (!buf)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	offset = vaddr & ~PAGE_MASK;
+ 	buf->size = size;
+ 	vec = vb2_create_framevec(vaddr, size);
+@@ -147,7 +145,7 @@ static void vb2_vmalloc_put_userptr(void *buf_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_vmalloc_vaddr(void *buf_priv)
++static void *vb2_vmalloc_vaddr(struct vb2_buffer *vb, void *buf_priv)
+ {
+ 	struct vb2_vmalloc_buf *buf = buf_priv;
+ 
+@@ -339,7 +337,9 @@ static const struct dma_buf_ops vb2_vmalloc_dmabuf_ops = {
+ 	.release = vb2_vmalloc_dmabuf_ops_release,
+ };
+ 
+-static struct dma_buf *vb2_vmalloc_get_dmabuf(void *buf_priv, unsigned long flags)
++static struct dma_buf *vb2_vmalloc_get_dmabuf(struct vb2_buffer *vb,
++					      void *buf_priv,
++					      unsigned long flags)
+ {
+ 	struct vb2_vmalloc_buf *buf = buf_priv;
+ 	struct dma_buf *dbuf;
+@@ -403,8 +403,10 @@ static void vb2_vmalloc_detach_dmabuf(void *mem_priv)
+ 	kfree(buf);
+ }
+ 
+-static void *vb2_vmalloc_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+-	unsigned long size, enum dma_data_direction dma_dir)
++static void *vb2_vmalloc_attach_dmabuf(struct vb2_buffer *vb,
++				       struct device *dev,
++				       struct dma_buf *dbuf,
++				       unsigned long size)
+ {
+ 	struct vb2_vmalloc_buf *buf;
+ 
+@@ -416,7 +418,7 @@ static void *vb2_vmalloc_attach_dmabuf(struct device *dev, struct dma_buf *dbuf,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	buf->dbuf = dbuf;
+-	buf->dma_dir = dma_dir;
++	buf->dma_dir = vb->vb2_queue->dma_dir;
+ 	buf->size = size;
+ 
+ 	return buf;
+diff --git a/drivers/media/dvb-frontends/mn88443x.c b/drivers/media/dvb-frontends/mn88443x.c
+index e4528784f8477..fff212c0bf3b5 100644
+--- a/drivers/media/dvb-frontends/mn88443x.c
++++ b/drivers/media/dvb-frontends/mn88443x.c
+@@ -204,11 +204,18 @@ struct mn88443x_priv {
+ 	struct regmap *regmap_t;
+ };
+ 
+-static void mn88443x_cmn_power_on(struct mn88443x_priv *chip)
++static int mn88443x_cmn_power_on(struct mn88443x_priv *chip)
+ {
++	struct device *dev = &chip->client_s->dev;
+ 	struct regmap *r_t = chip->regmap_t;
++	int ret;
+ 
+-	clk_prepare_enable(chip->mclk);
++	ret = clk_prepare_enable(chip->mclk);
++	if (ret) {
++		dev_err(dev, "Failed to prepare and enable mclk: %d\n",
++			ret);
++		return ret;
++	}
+ 
+ 	gpiod_set_value_cansleep(chip->reset_gpio, 1);
+ 	usleep_range(100, 1000);
+@@ -222,6 +229,8 @@ static void mn88443x_cmn_power_on(struct mn88443x_priv *chip)
+ 	} else {
+ 		regmap_write(r_t, HIZSET3, 0x8f);
+ 	}
++
++	return 0;
+ }
+ 
+ static void mn88443x_cmn_power_off(struct mn88443x_priv *chip)
+@@ -738,7 +747,10 @@ static int mn88443x_probe(struct i2c_client *client,
+ 	chip->fe.demodulator_priv = chip;
+ 	i2c_set_clientdata(client, chip);
+ 
+-	mn88443x_cmn_power_on(chip);
++	ret = mn88443x_cmn_power_on(chip);
++	if (ret)
++		goto err_i2c_t;
++
+ 	mn88443x_s_sleep(chip);
+ 	mn88443x_t_sleep(chip);
+ 
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 588f8eb959844..bde7fb021564a 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -597,6 +597,7 @@ config VIDEO_AK881X
+ config VIDEO_THS8200
+ 	tristate "Texas Instruments THS8200 video encoder"
+ 	depends on VIDEO_V4L2 && I2C
++	select V4L2_ASYNC
+ 	help
+ 	  Support for the Texas Instruments THS8200 video encoder.
+ 
+diff --git a/drivers/media/i2c/imx258.c b/drivers/media/i2c/imx258.c
+index 81cdf37216ca7..c249507aa2dbc 100644
+--- a/drivers/media/i2c/imx258.c
++++ b/drivers/media/i2c/imx258.c
+@@ -1260,18 +1260,18 @@ static int imx258_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	imx258->clk = devm_clk_get_optional(&client->dev, NULL);
++	if (IS_ERR(imx258->clk))
++		return dev_err_probe(&client->dev, PTR_ERR(imx258->clk),
++				     "error getting clock\n");
+ 	if (!imx258->clk) {
+ 		dev_dbg(&client->dev,
+ 			"no clock provided, using clock-frequency property\n");
+ 
+ 		device_property_read_u32(&client->dev, "clock-frequency", &val);
+-		if (val != IMX258_INPUT_CLOCK_FREQ)
+-			return -EINVAL;
+-	} else if (IS_ERR(imx258->clk)) {
+-		return dev_err_probe(&client->dev, PTR_ERR(imx258->clk),
+-				     "error getting clock\n");
++	} else {
++		val = clk_get_rate(imx258->clk);
+ 	}
+-	if (clk_get_rate(imx258->clk) != IMX258_INPUT_CLOCK_FREQ) {
++	if (val != IMX258_INPUT_CLOCK_FREQ) {
+ 		dev_err(&client->dev, "input clock frequency not supported\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
+index 92376592455ee..56674173524fd 100644
+--- a/drivers/media/i2c/ir-kbd-i2c.c
++++ b/drivers/media/i2c/ir-kbd-i2c.c
+@@ -791,6 +791,7 @@ static int ir_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 		rc_proto    = RC_PROTO_BIT_RC5 | RC_PROTO_BIT_RC6_MCE |
+ 							RC_PROTO_BIT_RC6_6A_32;
+ 		ir_codes    = RC_MAP_HAUPPAUGE;
++		ir->polling_interval = 125;
+ 		probe_tx = true;
+ 		break;
+ 	}
+diff --git a/drivers/media/i2c/mt9p031.c b/drivers/media/i2c/mt9p031.c
+index 6eb88ef997836..3ae1b28c8351b 100644
+--- a/drivers/media/i2c/mt9p031.c
++++ b/drivers/media/i2c/mt9p031.c
+@@ -78,7 +78,9 @@
+ #define		MT9P031_PIXEL_CLOCK_INVERT		(1 << 15)
+ #define		MT9P031_PIXEL_CLOCK_SHIFT(n)		((n) << 8)
+ #define		MT9P031_PIXEL_CLOCK_DIVIDE(n)		((n) << 0)
+-#define MT9P031_FRAME_RESTART				0x0b
++#define MT9P031_RESTART					0x0b
++#define		MT9P031_FRAME_PAUSE_RESTART		(1 << 1)
++#define		MT9P031_FRAME_RESTART			(1 << 0)
+ #define MT9P031_SHUTTER_DELAY				0x0c
+ #define MT9P031_RST					0x0d
+ #define		MT9P031_RST_ENABLE			1
+@@ -444,9 +446,23 @@ static int mt9p031_set_params(struct mt9p031 *mt9p031)
+ static int mt9p031_s_stream(struct v4l2_subdev *subdev, int enable)
+ {
+ 	struct mt9p031 *mt9p031 = to_mt9p031(subdev);
++	struct i2c_client *client = v4l2_get_subdevdata(subdev);
++	int val;
+ 	int ret;
+ 
+ 	if (!enable) {
++		/* enable pause restart */
++		val = MT9P031_FRAME_PAUSE_RESTART;
++		ret = mt9p031_write(client, MT9P031_RESTART, val);
++		if (ret < 0)
++			return ret;
++
++		/* enable restart + keep pause restart set */
++		val |= MT9P031_FRAME_RESTART;
++		ret = mt9p031_write(client, MT9P031_RESTART, val);
++		if (ret < 0)
++			return ret;
++
+ 		/* Stop sensor readout */
+ 		ret = mt9p031_set_output_control(mt9p031,
+ 						 MT9P031_OUTPUT_CONTROL_CEN, 0);
+@@ -466,6 +482,16 @@ static int mt9p031_s_stream(struct v4l2_subdev *subdev, int enable)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/*
++	 * - clear pause restart
++	 * - don't clear restart as clearing restart manually can cause
++	 *   undefined behavior
++	 */
++	val = MT9P031_FRAME_RESTART;
++	ret = mt9p031_write(client, MT9P031_RESTART, val);
++	if (ret < 0)
++		return ret;
++
+ 	return mt9p031_pll_enable(mt9p031);
+ }
+ 
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index ef726faee2a4c..c62554fc35e72 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -1247,13 +1247,13 @@ tda1997x_parse_infoframe(struct tda1997x_state *state, u16 addr)
+ {
+ 	struct v4l2_subdev *sd = &state->sd;
+ 	union hdmi_infoframe frame;
+-	u8 buffer[40];
++	u8 buffer[40] = { 0 };
+ 	u8 reg;
+ 	int len, err;
+ 
+ 	/* read data */
+ 	len = io_readn(sd, addr, sizeof(buffer), buffer);
+-	err = hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer));
++	err = hdmi_infoframe_unpack(&frame, buffer, len);
+ 	if (err) {
+ 		v4l_err(state->client,
+ 			"failed parsing %d byte infoframe: 0x%04x/0x%02x\n",
+@@ -1927,13 +1927,13 @@ static int tda1997x_log_infoframe(struct v4l2_subdev *sd, int addr)
+ {
+ 	struct tda1997x_state *state = to_state(sd);
+ 	union hdmi_infoframe frame;
+-	u8 buffer[40];
++	u8 buffer[40] = { 0 };
+ 	int len, err;
+ 
+ 	/* read data */
+ 	len = io_readn(sd, addr, sizeof(buffer), buffer);
+ 	v4l2_dbg(1, debug, sd, "infoframe: addr=%d len=%d\n", addr, len);
+-	err = hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer));
++	err = hdmi_infoframe_unpack(&frame, buffer, len);
+ 	if (err) {
+ 		v4l_err(state->client,
+ 			"failed parsing %d byte infoframe: 0x%04x/0x%02x\n",
+diff --git a/drivers/media/pci/cx23885/cx23885-alsa.c b/drivers/media/pci/cx23885/cx23885-alsa.c
+index ab14d35214aa8..25dc8d4dc5b73 100644
+--- a/drivers/media/pci/cx23885/cx23885-alsa.c
++++ b/drivers/media/pci/cx23885/cx23885-alsa.c
+@@ -550,7 +550,7 @@ struct cx23885_audio_dev *cx23885_audio_register(struct cx23885_dev *dev)
+ 			   SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1,
+ 			THIS_MODULE, sizeof(struct cx23885_audio_dev), &card);
+ 	if (err < 0)
+-		goto error;
++		goto error_msg;
+ 
+ 	chip = (struct cx23885_audio_dev *) card->private_data;
+ 	chip->dev = dev;
+@@ -576,6 +576,7 @@ struct cx23885_audio_dev *cx23885_audio_register(struct cx23885_dev *dev)
+ 
+ error:
+ 	snd_card_free(card);
++error_msg:
+ 	pr_err("%s(): Failed to register analog audio adapter\n",
+ 	       __func__);
+ 
+diff --git a/drivers/media/pci/ivtv/ivtvfb.c b/drivers/media/pci/ivtv/ivtvfb.c
+index e2d56dca5be40..5ad03b2a50bdb 100644
+--- a/drivers/media/pci/ivtv/ivtvfb.c
++++ b/drivers/media/pci/ivtv/ivtvfb.c
+@@ -36,7 +36,7 @@
+ #include <linux/fb.h>
+ #include <linux/ivtvfb.h>
+ 
+-#ifdef CONFIG_X86_64
++#if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ #include <asm/memtype.h>
+ #endif
+ 
+@@ -1157,7 +1157,7 @@ static int ivtvfb_init_card(struct ivtv *itv)
+ {
+ 	int rc;
+ 
+-#ifdef CONFIG_X86_64
++#if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ 	if (pat_enabled()) {
+ 		if (ivtvfb_force_pat) {
+ 			pr_info("PAT is enabled. Write-combined framebuffer caching will be disabled.\n");
+diff --git a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+index 6f3125c2d0976..77bae14685513 100644
+--- a/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
++++ b/drivers/media/pci/netup_unidvb/netup_unidvb_core.c
+@@ -258,19 +258,24 @@ static irqreturn_t netup_unidvb_isr(int irq, void *dev_id)
+ 	if ((reg40 & AVL_IRQ_ASSERTED) != 0) {
+ 		/* IRQ is being signaled */
+ 		reg_isr = readw(ndev->bmmio0 + REG_ISR);
+-		if (reg_isr & NETUP_UNIDVB_IRQ_I2C0) {
+-			iret = netup_i2c_interrupt(&ndev->i2c[0]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_I2C1) {
+-			iret = netup_i2c_interrupt(&ndev->i2c[1]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_SPI) {
++		if (reg_isr & NETUP_UNIDVB_IRQ_SPI)
+ 			iret = netup_spi_interrupt(ndev->spi);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA1) {
+-			iret = netup_dma_interrupt(&ndev->dma[0]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA2) {
+-			iret = netup_dma_interrupt(&ndev->dma[1]);
+-		} else if (reg_isr & NETUP_UNIDVB_IRQ_CI) {
+-			iret = netup_ci_interrupt(ndev);
++		else if (!ndev->old_fw) {
++			if (reg_isr & NETUP_UNIDVB_IRQ_I2C0) {
++				iret = netup_i2c_interrupt(&ndev->i2c[0]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_I2C1) {
++				iret = netup_i2c_interrupt(&ndev->i2c[1]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA1) {
++				iret = netup_dma_interrupt(&ndev->dma[0]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_DMA2) {
++				iret = netup_dma_interrupt(&ndev->dma[1]);
++			} else if (reg_isr & NETUP_UNIDVB_IRQ_CI) {
++				iret = netup_ci_interrupt(ndev);
++			} else {
++				goto err;
++			}
+ 		} else {
++err:
+ 			dev_err(&pci_dev->dev,
+ 				"%s(): unknown interrupt 0x%x\n",
+ 				__func__, reg_isr);
+diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
+index 887b492e4ad1c..14a119b43bca0 100644
+--- a/drivers/media/platform/allegro-dvt/allegro-core.c
++++ b/drivers/media/platform/allegro-dvt/allegro-core.c
+@@ -2185,6 +2185,15 @@ static irqreturn_t allegro_irq_thread(int irq, void *data)
+ {
+ 	struct allegro_dev *dev = data;
+ 
++	/*
++	 * The firmware is initialized after the mailbox is setup. We further
++	 * check the AL5_ITC_CPU_IRQ_STA register, if the firmware actually
++	 * triggered the interrupt. Although this should not happen, make sure
++	 * that we ignore interrupts, if the mailbox is not initialized.
++	 */
++	if (!dev->mbox_status)
++		return IRQ_NONE;
++
+ 	allegro_mbox_notify(dev->mbox_status);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/media/platform/atmel/atmel-isc-base.c b/drivers/media/platform/atmel/atmel-isc-base.c
+index 136ab7cf36edc..ebf264b980f91 100644
+--- a/drivers/media/platform/atmel/atmel-isc-base.c
++++ b/drivers/media/platform/atmel/atmel-isc-base.c
+@@ -123,11 +123,9 @@ static int isc_clk_prepare(struct clk_hw *hw)
+ 	struct isc_clk *isc_clk = to_isc_clk(hw);
+ 	int ret;
+ 
+-	if (isc_clk->id == ISC_ISPCK) {
+-		ret = pm_runtime_resume_and_get(isc_clk->dev);
+-		if (ret < 0)
+-			return ret;
+-	}
++	ret = pm_runtime_resume_and_get(isc_clk->dev);
++	if (ret < 0)
++		return ret;
+ 
+ 	return isc_wait_clk_stable(hw);
+ }
+@@ -138,8 +136,7 @@ static void isc_clk_unprepare(struct clk_hw *hw)
+ 
+ 	isc_wait_clk_stable(hw);
+ 
+-	if (isc_clk->id == ISC_ISPCK)
+-		pm_runtime_put_sync(isc_clk->dev);
++	pm_runtime_put_sync(isc_clk->dev);
+ }
+ 
+ static int isc_clk_enable(struct clk_hw *hw)
+@@ -186,16 +183,13 @@ static int isc_clk_is_enabled(struct clk_hw *hw)
+ 	u32 status;
+ 	int ret;
+ 
+-	if (isc_clk->id == ISC_ISPCK) {
+-		ret = pm_runtime_resume_and_get(isc_clk->dev);
+-		if (ret < 0)
+-			return 0;
+-	}
++	ret = pm_runtime_resume_and_get(isc_clk->dev);
++	if (ret < 0)
++		return 0;
+ 
+ 	regmap_read(isc_clk->regmap, ISC_CLKSR, &status);
+ 
+-	if (isc_clk->id == ISC_ISPCK)
+-		pm_runtime_put_sync(isc_clk->dev);
++	pm_runtime_put_sync(isc_clk->dev);
+ 
+ 	return status & ISC_CLK(isc_clk->id) ? 1 : 0;
+ }
+@@ -325,6 +319,9 @@ static int isc_clk_register(struct isc_device *isc, unsigned int id)
+ 	const char *parent_names[3];
+ 	int num_parents;
+ 
++	if (id == ISC_ISPCK && !isc->ispck_required)
++		return 0;
++
+ 	num_parents = of_clk_get_parent_count(np);
+ 	if (num_parents < 1 || num_parents > 3)
+ 		return -EINVAL;
+diff --git a/drivers/media/platform/atmel/atmel-isc.h b/drivers/media/platform/atmel/atmel-isc.h
+index 19cc60dfcbe0f..2bfcb135ef13b 100644
+--- a/drivers/media/platform/atmel/atmel-isc.h
++++ b/drivers/media/platform/atmel/atmel-isc.h
+@@ -178,6 +178,7 @@ struct isc_reg_offsets {
+  * @hclock:		Hclock clock input (refer datasheet)
+  * @ispck:		iscpck clock (refer datasheet)
+  * @isc_clks:		ISC clocks
++ * @ispck_required:	ISC requires ISP Clock initialization
+  * @dcfg:		DMA master configuration, architecture dependent
+  *
+  * @dev:		Registered device driver
+@@ -252,6 +253,7 @@ struct isc_device {
+ 	struct clk		*hclock;
+ 	struct clk		*ispck;
+ 	struct isc_clk		isc_clks[2];
++	bool			ispck_required;
+ 	u32			dcfg;
+ 
+ 	struct device		*dev;
+diff --git a/drivers/media/platform/atmel/atmel-sama5d2-isc.c b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+index b66f1d174e9d7..e29a9193bac81 100644
+--- a/drivers/media/platform/atmel/atmel-sama5d2-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+@@ -454,6 +454,9 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ 	/* sama5d2-isc - 8 bits per beat */
+ 	isc->dcfg = ISC_DCFG_YMBSIZE_BEATS8 | ISC_DCFG_CMBSIZE_BEATS8;
+ 
++	/* sama5d2-isc : ISPCK is required and mandatory */
++	isc->ispck_required = true;
++
+ 	ret = isc_pipeline_init(isc);
+ 	if (ret)
+ 		return ret;
+@@ -476,22 +479,6 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ 		dev_err(dev, "failed to init isc clock: %d\n", ret);
+ 		goto unprepare_hclk;
+ 	}
+-
+-	isc->ispck = isc->isc_clks[ISC_ISPCK].clk;
+-
+-	ret = clk_prepare_enable(isc->ispck);
+-	if (ret) {
+-		dev_err(dev, "failed to enable ispck: %d\n", ret);
+-		goto unprepare_hclk;
+-	}
+-
+-	/* ispck should be greater or equal to hclock */
+-	ret = clk_set_rate(isc->ispck, clk_get_rate(isc->hclock));
+-	if (ret) {
+-		dev_err(dev, "failed to set ispck rate: %d\n", ret);
+-		goto unprepare_clk;
+-	}
+-
+ 	ret = v4l2_device_register(dev, &isc->v4l2_dev);
+ 	if (ret) {
+ 		dev_err(dev, "unable to register v4l2 device.\n");
+@@ -545,19 +532,35 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_request_idle(dev);
+ 
++	isc->ispck = isc->isc_clks[ISC_ISPCK].clk;
++
++	ret = clk_prepare_enable(isc->ispck);
++	if (ret) {
++		dev_err(dev, "failed to enable ispck: %d\n", ret);
++		goto cleanup_subdev;
++	}
++
++	/* ispck should be greater or equal to hclock */
++	ret = clk_set_rate(isc->ispck, clk_get_rate(isc->hclock));
++	if (ret) {
++		dev_err(dev, "failed to set ispck rate: %d\n", ret);
++		goto unprepare_clk;
++	}
++
+ 	regmap_read(isc->regmap, ISC_VERSION + isc->offsets.version, &ver);
+ 	dev_info(dev, "Microchip ISC version %x\n", ver);
+ 
+ 	return 0;
+ 
++unprepare_clk:
++	clk_disable_unprepare(isc->ispck);
++
+ cleanup_subdev:
+ 	isc_subdev_cleanup(isc);
+ 
+ unregister_v4l2_device:
+ 	v4l2_device_unregister(&isc->v4l2_dev);
+ 
+-unprepare_clk:
+-	clk_disable_unprepare(isc->ispck);
+ unprepare_hclk:
+ 	clk_disable_unprepare(isc->hclock);
+ 
+diff --git a/drivers/media/platform/atmel/atmel-sama7g5-isc.c b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+index f2785131ff569..9c05acafd0724 100644
+--- a/drivers/media/platform/atmel/atmel-sama7g5-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+@@ -447,6 +447,9 @@ static int microchip_xisc_probe(struct platform_device *pdev)
+ 	/* sama7g5-isc RAM access port is full AXI4 - 32 bits per beat */
+ 	isc->dcfg = ISC_DCFG_YMBSIZE_BEATS32 | ISC_DCFG_CMBSIZE_BEATS32;
+ 
++	/* sama7g5-isc : ISPCK does not exist, ISC is clocked by MCK */
++	isc->ispck_required = false;
++
+ 	ret = isc_pipeline_init(isc);
+ 	if (ret)
+ 		return ret;
+@@ -470,25 +473,10 @@ static int microchip_xisc_probe(struct platform_device *pdev)
+ 		goto unprepare_hclk;
+ 	}
+ 
+-	isc->ispck = isc->isc_clks[ISC_ISPCK].clk;
+-
+-	ret = clk_prepare_enable(isc->ispck);
+-	if (ret) {
+-		dev_err(dev, "failed to enable ispck: %d\n", ret);
+-		goto unprepare_hclk;
+-	}
+-
+-	/* ispck should be greater or equal to hclock */
+-	ret = clk_set_rate(isc->ispck, clk_get_rate(isc->hclock));
+-	if (ret) {
+-		dev_err(dev, "failed to set ispck rate: %d\n", ret);
+-		goto unprepare_clk;
+-	}
+-
+ 	ret = v4l2_device_register(dev, &isc->v4l2_dev);
+ 	if (ret) {
+ 		dev_err(dev, "unable to register v4l2 device.\n");
+-		goto unprepare_clk;
++		goto unprepare_hclk;
+ 	}
+ 
+ 	ret = xisc_parse_dt(dev, isc);
+@@ -549,8 +537,6 @@ cleanup_subdev:
+ unregister_v4l2_device:
+ 	v4l2_device_unregister(&isc->v4l2_dev);
+ 
+-unprepare_clk:
+-	clk_disable_unprepare(isc->ispck);
+ unprepare_hclk:
+ 	clk_disable_unprepare(isc->hclock);
+ 
+diff --git a/drivers/media/platform/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+index 755138063ee61..fc905ea78b175 100644
+--- a/drivers/media/platform/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+@@ -575,6 +575,10 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ 
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	if (!dst_buf || !src_buf) {
++		dev_err(dev, "No source or destination buffer.\n");
++		goto job_unlock;
++	}
+ 	jpeg_src_buf = vb2_to_mxc_buf(&src_buf->vb2_buf);
+ 
+ 	if (dec_ret & SLOT_STATUS_ENC_CONFIG_ERR) {
+@@ -2088,6 +2092,8 @@ err_m2m:
+ 	v4l2_device_unregister(&jpeg->v4l2_dev);
+ 
+ err_register:
++	mxc_jpeg_detach_pm_domains(jpeg);
++
+ err_irq:
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/meson/ge2d/ge2d.c b/drivers/media/platform/meson/ge2d/ge2d.c
+index a1393fefa8aea..9b1e973e78da3 100644
+--- a/drivers/media/platform/meson/ge2d/ge2d.c
++++ b/drivers/media/platform/meson/ge2d/ge2d.c
+@@ -779,11 +779,7 @@ static int ge2d_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		 * If the rotation parameter changes the OUTPUT frames
+ 		 * parameters, take them in account
+ 		 */
+-		if (fmt.width != ctx->out.pix_fmt.width ||
+-		    fmt.height != ctx->out.pix_fmt.width ||
+-		    fmt.bytesperline > ctx->out.pix_fmt.bytesperline ||
+-		    fmt.sizeimage > ctx->out.pix_fmt.sizeimage)
+-			ctx->out.pix_fmt = fmt;
++		ctx->out.pix_fmt = fmt;
+ 
+ 		break;
+ 	}
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+index 416f356af363d..d97a6765693f1 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+@@ -793,7 +793,7 @@ static int vb2ops_venc_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct mtk_vcodec_ctx *ctx = vb2_get_drv_priv(q);
+ 	struct venc_enc_param param;
+-	int ret;
++	int ret, pm_ret;
+ 	int i;
+ 
+ 	/* Once state turn into MTK_STATE_ABORT, we need stop_streaming
+@@ -845,9 +845,9 @@ static int vb2ops_venc_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	return 0;
+ 
+ err_set_param:
+-	ret = pm_runtime_put(&ctx->dev->plat_dev->dev);
+-	if (ret < 0)
+-		mtk_v4l2_err("pm_runtime_put fail %d", ret);
++	pm_ret = pm_runtime_put(&ctx->dev->plat_dev->dev);
++	if (pm_ret < 0)
++		mtk_v4l2_err("pm_runtime_put fail %d", pm_ret);
+ 
+ err_start_stream:
+ 	for (i = 0; i < q->num_buffers; ++i) {
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index ec290dde59cfd..7f1647da0ade0 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -848,7 +848,8 @@ static int mtk_vpu_probe(struct platform_device *pdev)
+ 	vpu->wdt.wq = create_singlethread_workqueue("vpu_wdt");
+ 	if (!vpu->wdt.wq) {
+ 		dev_err(dev, "initialize wdt workqueue failed\n");
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto clk_unprepare;
+ 	}
+ 	INIT_WORK(&vpu->wdt.ws, vpu_wdt_reset_func);
+ 	mutex_init(&vpu->vpu_mutex);
+@@ -942,6 +943,8 @@ disable_vpu_clk:
+ 	vpu_clock_disable(vpu);
+ workqueue_destroy:
+ 	destroy_workqueue(vpu->wdt.wq);
++clk_unprepare:
++	clk_unprepare(vpu->clk);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index 3e2345eb47f7c..e031fd17f4e75 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -1085,12 +1085,16 @@ static unsigned long calculate_inst_freq(struct venus_inst *inst,
+ 	if (inst->state != INST_START)
+ 		return 0;
+ 
+-	if (inst->session_type == VIDC_SESSION_TYPE_ENC)
++	if (inst->session_type == VIDC_SESSION_TYPE_ENC) {
+ 		vpp_freq_per_mb = inst->flags & VENUS_LOW_POWER ?
+ 			inst->clk_data.low_power_freq :
+ 			inst->clk_data.vpp_freq;
+ 
+-	vpp_freq = mbs_per_sec * vpp_freq_per_mb;
++		vpp_freq = mbs_per_sec * vpp_freq_per_mb;
++	} else {
++		vpp_freq = mbs_per_sec * inst->clk_data.vpp_freq;
++	}
++
+ 	/* 21 / 20 is overhead factor */
+ 	vpp_freq += vpp_freq / 20;
+ 	vsp_freq = mbs_per_sec * inst->clk_data.vsp_freq;
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index e28eff0396888..ba4a380016cc4 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -553,6 +553,8 @@ static int rcsi2_start_receiver(struct rcar_csi2 *priv)
+ 
+ 	/* Code is validated in set_fmt. */
+ 	format = rcsi2_code_to_fmt(priv->mf.code);
++	if (!format)
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Enable all supported CSI-2 channels with virtual channel and
+diff --git a/drivers/media/platform/rcar-vin/rcar-dma.c b/drivers/media/platform/rcar-vin/rcar-dma.c
+index f5f722ab1d4e8..520d044bfb8d5 100644
+--- a/drivers/media/platform/rcar-vin/rcar-dma.c
++++ b/drivers/media/platform/rcar-vin/rcar-dma.c
+@@ -904,7 +904,8 @@ static void rvin_fill_hw_slot(struct rvin_dev *vin, int slot)
+ 				vin->format.sizeimage / 2;
+ 			break;
+ 		}
+-	} else if (vin->state != RUNNING || list_empty(&vin->buf_list)) {
++	} else if ((vin->state != STOPPED && vin->state != RUNNING) ||
++		   list_empty(&vin->buf_list)) {
+ 		vin->buf_hw[slot].buffer = NULL;
+ 		vin->buf_hw[slot].type = FULL;
+ 		phys_addr = vin->scratch_phys;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index eba2b9f040df0..f336a95432732 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1283,11 +1283,15 @@ static int s5p_mfc_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dev->condlock);
+ 	dev->plat_dev = pdev;
+ 	if (!dev->plat_dev) {
+-		dev_err(&pdev->dev, "No platform data specified\n");
++		mfc_err("No platform data specified\n");
+ 		return -ENODEV;
+ 	}
+ 
+ 	dev->variant = of_device_get_match_data(&pdev->dev);
++	if (!dev->variant) {
++		dev_err(&pdev->dev, "Failed to get device MFC hardware variant information\n");
++		return -ENOENT;
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dev->regs_base = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index d914ccef98317..6110718645a4f 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -128,6 +128,7 @@ struct stm32_dcmi {
+ 	int				sequence;
+ 	struct list_head		buffers;
+ 	struct dcmi_buf			*active;
++	int			irq;
+ 
+ 	struct v4l2_device		v4l2_dev;
+ 	struct video_device		*vdev;
+@@ -1759,6 +1760,14 @@ static int dcmi_graph_notify_complete(struct v4l2_async_notifier *notifier)
+ 		return ret;
+ 	}
+ 
++	ret = devm_request_threaded_irq(dcmi->dev, dcmi->irq, dcmi_irq_callback,
++					dcmi_irq_thread, IRQF_ONESHOT,
++					dev_name(dcmi->dev), dcmi);
++	if (ret) {
++		dev_err(dcmi->dev, "Unable to request irq %d\n", dcmi->irq);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1914,6 +1923,8 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	if (irq <= 0)
+ 		return irq ? irq : -ENXIO;
+ 
++	dcmi->irq = irq;
++
+ 	dcmi->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!dcmi->res) {
+ 		dev_err(&pdev->dev, "Could not get resource\n");
+@@ -1926,14 +1937,6 @@ static int dcmi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(dcmi->regs);
+ 	}
+ 
+-	ret = devm_request_threaded_irq(&pdev->dev, irq, dcmi_irq_callback,
+-					dcmi_irq_thread, IRQF_ONESHOT,
+-					dev_name(&pdev->dev), dcmi);
+-	if (ret) {
+-		dev_err(&pdev->dev, "Unable to request irq %d\n", irq);
+-		return ret;
+-	}
+-
+ 	mclk = devm_clk_get(&pdev->dev, "mclk");
+ 	if (IS_ERR(mclk)) {
+ 		if (PTR_ERR(mclk) != -EPROBE_DEFER)
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+index 07b2161392d21..5ba3e29f794fd 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+@@ -467,7 +467,7 @@ static const struct v4l2_ioctl_ops sun6i_video_ioctl_ops = {
+ static int sun6i_video_open(struct file *file)
+ {
+ 	struct sun6i_video *video = video_drvdata(file);
+-	int ret;
++	int ret = 0;
+ 
+ 	if (mutex_lock_interruptible(&video->lock))
+ 		return -ERESTARTSYS;
+@@ -481,10 +481,8 @@ static int sun6i_video_open(struct file *file)
+ 		goto fh_release;
+ 
+ 	/* check if already powered */
+-	if (!v4l2_fh_is_singular_file(file)) {
+-		ret = -EBUSY;
++	if (!v4l2_fh_is_singular_file(file))
+ 		goto unlock;
+-	}
+ 
+ 	ret = sun6i_csi_set_power(video->csi, true);
+ 	if (ret < 0)
+diff --git a/drivers/media/radio/radio-wl1273.c b/drivers/media/radio/radio-wl1273.c
+index 1123768731676..484046471c03f 100644
+--- a/drivers/media/radio/radio-wl1273.c
++++ b/drivers/media/radio/radio-wl1273.c
+@@ -1279,7 +1279,7 @@ static int wl1273_fm_vidioc_querycap(struct file *file, void *priv,
+ 
+ 	strscpy(capability->driver, WL1273_FM_DRIVER_NAME,
+ 		sizeof(capability->driver));
+-	strscpy(capability->card, "Texas Instruments Wl1273 FM Radio",
++	strscpy(capability->card, "TI Wl1273 FM Radio",
+ 		sizeof(capability->card));
+ 	strscpy(capability->bus_info, radio->bus_type,
+ 		sizeof(capability->bus_info));
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index f491420d7b538..a972c0705ac79 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -11,7 +11,7 @@
+ 
+ /* driver definitions */
+ #define DRIVER_AUTHOR "Joonyoung Shim <jy0922.shim@samsung.com>";
+-#define DRIVER_CARD "Silicon Labs Si470x FM Radio Receiver"
++#define DRIVER_CARD "Silicon Labs Si470x FM Radio"
+ #define DRIVER_DESC "I2C radio driver for Si470x FM Radio Receivers"
+ #define DRIVER_VERSION "1.0.2"
+ 
+diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
+index fedff68d8c496..3f8634a465730 100644
+--- a/drivers/media/radio/si470x/radio-si470x-usb.c
++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
+@@ -16,7 +16,7 @@
+ 
+ /* driver definitions */
+ #define DRIVER_AUTHOR "Tobias Lorenz <tobias.lorenz@gmx.net>"
+-#define DRIVER_CARD "Silicon Labs Si470x FM Radio Receiver"
++#define DRIVER_CARD "Silicon Labs Si470x FM Radio"
+ #define DRIVER_DESC "USB radio driver for Si470x FM Radio Receivers"
+ #define DRIVER_VERSION "1.0.10"
+ 
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 48d52baec1a1c..1aa7989e756cc 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -310,7 +310,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 		buf[i] = cpu_to_be16(v);
+ 	}
+ 
+-	buf[count] = 0xffff;
++	buf[count] = cpu_to_be16(0xffff);
+ 
+ 	irtoy->tx_buf = buf;
+ 	irtoy->tx_len = size;
+diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
+index 5bc23e8c6d91d..4f77d4ebacdc5 100644
+--- a/drivers/media/rc/ite-cir.c
++++ b/drivers/media/rc/ite-cir.c
+@@ -242,7 +242,7 @@ static irqreturn_t ite_cir_isr(int irq, void *data)
+ 	}
+ 
+ 	/* check for the receive interrupt */
+-	if (iflags & ITE_IRQ_RX_FIFO) {
++	if (iflags & (ITE_IRQ_RX_FIFO | ITE_IRQ_RX_FIFO_OVERRUN)) {
+ 		/* read the FIFO bytes */
+ 		rx_bytes = dev->params->get_rx_bytes(dev, rx_buf,
+ 						    ITE_RX_FIFO_LEN);
+diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
+index 5642595a057ec..8870c4e6c5f44 100644
+--- a/drivers/media/rc/mceusb.c
++++ b/drivers/media/rc/mceusb.c
+@@ -1386,6 +1386,7 @@ static void mceusb_dev_recv(struct urb *urb)
+ 	case -ECONNRESET:
+ 	case -ENOENT:
+ 	case -EILSEQ:
++	case -EPROTO:
+ 	case -ESHUTDOWN:
+ 		usb_unlink_urb(urb);
+ 		return;
+diff --git a/drivers/media/spi/cxd2880-spi.c b/drivers/media/spi/cxd2880-spi.c
+index b91a1e845b972..506f52c1af101 100644
+--- a/drivers/media/spi/cxd2880-spi.c
++++ b/drivers/media/spi/cxd2880-spi.c
+@@ -618,7 +618,7 @@ fail_frontend:
+ fail_attach:
+ 	dvb_unregister_adapter(&dvb_spi->adapter);
+ fail_adapter:
+-	if (!dvb_spi->vcc_supply)
++	if (dvb_spi->vcc_supply)
+ 		regulator_disable(dvb_spi->vcc_supply);
+ fail_regulator:
+ 	kfree(dvb_spi);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_bridge.c b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+index 75617709c8ce2..82620613d56b8 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_bridge.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+@@ -564,6 +564,10 @@ static int vidtv_bridge_remove(struct platform_device *pdev)
+ 
+ static void vidtv_bridge_dev_release(struct device *dev)
+ {
++	struct vidtv_dvb *dvb;
++
++	dvb = dev_get_drvdata(dev);
++	kfree(dvb);
+ }
+ 
+ static struct platform_device vidtv_bridge_dev = {
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 1c39b61cde29b..86788771175b7 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -391,6 +391,7 @@ static struct rc_map_table rc_map_az6027_table[] = {
+ /* remote control stuff (does not work with my box) */
+ static int az6027_rc_query(struct dvb_usb_device *d, u32 *event, int *state)
+ {
++	*state = REMOTE_NO_KEY_PRESSED;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb/dibusb-common.c b/drivers/media/usb/dvb-usb/dibusb-common.c
+index 02b51d1a1b67c..aff60c10cb0b2 100644
+--- a/drivers/media/usb/dvb-usb/dibusb-common.c
++++ b/drivers/media/usb/dvb-usb/dibusb-common.c
+@@ -223,7 +223,7 @@ int dibusb_read_eeprom_byte(struct dvb_usb_device *d, u8 offs, u8 *val)
+ 	u8 *buf;
+ 	int rc;
+ 
+-	buf = kmalloc(2, GFP_KERNEL);
++	buf = kzalloc(2, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index c1e0dccb74088..948e22e29b42a 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -4139,8 +4139,11 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+ 
+ 	em28xx_close_extension(dev);
+ 
+-	if (dev->dev_next)
++	if (dev->dev_next) {
++		em28xx_close_extension(dev->dev_next);
+ 		em28xx_release_resources(dev->dev_next);
++	}
++
+ 	em28xx_release_resources(dev);
+ 
+ 	if (dev->dev_next) {
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index 584fa400cd7d8..acc0bf7dbe2b1 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -1154,8 +1154,9 @@ int em28xx_suspend_extension(struct em28xx *dev)
+ 	dev_info(&dev->intf->dev, "Suspending extensions\n");
+ 	mutex_lock(&em28xx_devlist_mutex);
+ 	list_for_each_entry(ops, &em28xx_extension_devlist, next) {
+-		if (ops->suspend)
+-			ops->suspend(dev);
++		if (!ops->suspend)
++			continue;
++		ops->suspend(dev);
+ 		if (dev->dev_next)
+ 			ops->suspend(dev->dev_next);
+ 	}
+diff --git a/drivers/media/usb/tm6000/tm6000-video.c b/drivers/media/usb/tm6000/tm6000-video.c
+index 3f650ede0c3dc..e293f6f3d1bc9 100644
+--- a/drivers/media/usb/tm6000/tm6000-video.c
++++ b/drivers/media/usb/tm6000/tm6000-video.c
+@@ -852,8 +852,7 @@ static int vidioc_querycap(struct file *file, void  *priv,
+ 	struct tm6000_core *dev = ((struct tm6000_fh *)priv)->dev;
+ 
+ 	strscpy(cap->driver, "tm6000", sizeof(cap->driver));
+-	strscpy(cap->card, "Trident TVMaster TM5600/6000/6010",
+-		sizeof(cap->card));
++	strscpy(cap->card, "Trident TM5600/6000/6010", sizeof(cap->card));
+ 	usb_make_path(dev->udev, cap->bus_info, sizeof(cap->bus_info));
+ 	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_READWRITE |
+ 			    V4L2_CAP_DEVICE_CAPS;
+diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+index bfda46a36dc50..38822cedd93a9 100644
+--- a/drivers/media/usb/ttusb-dec/ttusb_dec.c
++++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c
+@@ -327,7 +327,7 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command,
+ 	result = mutex_lock_interruptible(&dec->usb_mutex);
+ 	if (result) {
+ 		printk("%s: Failed to lock usb mutex.\n", __func__);
+-		goto err;
++		goto err_free;
+ 	}
+ 
+ 	b[0] = 0xaa;
+@@ -349,7 +349,7 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command,
+ 	if (result) {
+ 		printk("%s: command bulk message failed: error %d\n",
+ 		       __func__, result);
+-		goto err;
++		goto err_mutex_unlock;
+ 	}
+ 
+ 	result = usb_bulk_msg(dec->udev, dec->result_pipe, b,
+@@ -358,7 +358,7 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command,
+ 	if (result) {
+ 		printk("%s: result bulk message failed: error %d\n",
+ 		       __func__, result);
+-		goto err;
++		goto err_mutex_unlock;
+ 	} else {
+ 		if (debug) {
+ 			printk(KERN_DEBUG "%s: result: %*ph\n",
+@@ -371,9 +371,9 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command,
+ 			memcpy(cmd_result, &b[4], b[3]);
+ 	}
+ 
+-err:
++err_mutex_unlock:
+ 	mutex_unlock(&dec->usb_mutex);
+-
++err_free:
+ 	kfree(b);
+ 	return result;
+ }
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 9a791d8ef200d..c4bc67024534a 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2194,6 +2194,7 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 			      const struct v4l2_file_operations *fops,
+ 			      const struct v4l2_ioctl_ops *ioctl_ops)
+ {
++	const char *name;
+ 	int ret;
+ 
+ 	/* Initialize the video buffers queue. */
+@@ -2222,16 +2223,20 @@ int uvc_register_video_device(struct uvc_device *dev,
+ 	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ 	default:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
++		name = "Video Capture";
+ 		break;
+ 	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ 		vdev->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
++		name = "Video Output";
+ 		break;
+ 	case V4L2_BUF_TYPE_META_CAPTURE:
+ 		vdev->device_caps = V4L2_CAP_META_CAPTURE | V4L2_CAP_STREAMING;
++		name = "Metadata";
+ 		break;
+ 	}
+ 
+-	strscpy(vdev->name, dev->name, sizeof(vdev->name));
++	snprintf(vdev->name, sizeof(vdev->name), "%s %u", name,
++		 stream->header.bTerminalLink);
+ 
+ 	/*
+ 	 * Set the driver data before calling video_register_device, otherwise
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 6acb8013de08b..c9d208677bcd8 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -472,10 +472,13 @@ static int uvc_v4l2_set_streamparm(struct uvc_streaming *stream,
+ 	uvc_simplify_fraction(&timeperframe.numerator,
+ 		&timeperframe.denominator, 8, 333);
+ 
+-	if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
++	if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ 		parm->parm.capture.timeperframe = timeperframe;
+-	else
++		parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
++	} else {
+ 		parm->parm.output.timeperframe = timeperframe;
++		parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index e16464606b140..9f37eaf28ce7e 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -115,6 +115,11 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ 	case 5: /* Invalid unit */
+ 	case 6: /* Invalid control */
+ 	case 7: /* Invalid Request */
++		/*
++		 * The firmware has not properly implemented
++		 * the control or there has been a HW error.
++		 */
++		return -EIO;
+ 	case 8: /* Invalid value within range */
+ 		return -EINVAL;
+ 	default: /* reserved or unknown */
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 05d5db3d85e58..f4f67b385d00a 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -869,7 +869,7 @@ static void v4l_print_default(const void *arg, bool write_only)
+ 	pr_cont("driver-specific ioctl\n");
+ }
+ 
+-static int check_ext_ctrls(struct v4l2_ext_controls *c, int allow_priv)
++static bool check_ext_ctrls(struct v4l2_ext_controls *c, unsigned long ioctl)
+ {
+ 	__u32 i;
+ 
+@@ -878,23 +878,41 @@ static int check_ext_ctrls(struct v4l2_ext_controls *c, int allow_priv)
+ 	for (i = 0; i < c->count; i++)
+ 		c->controls[i].reserved2[0] = 0;
+ 
+-	/* V4L2_CID_PRIVATE_BASE cannot be used as control class
+-	   when using extended controls.
+-	   Only when passed in through VIDIOC_G_CTRL and VIDIOC_S_CTRL
+-	   is it allowed for backwards compatibility.
+-	 */
+-	if (!allow_priv && c->which == V4L2_CID_PRIVATE_BASE)
+-		return 0;
+-	if (!c->which)
+-		return 1;
++	switch (c->which) {
++	case V4L2_CID_PRIVATE_BASE:
++		/*
++		 * V4L2_CID_PRIVATE_BASE cannot be used as control class
++		 * when using extended controls.
++		 * Only when passed in through VIDIOC_G_CTRL and VIDIOC_S_CTRL
++		 * is it allowed for backwards compatibility.
++		 */
++		if (ioctl == VIDIOC_G_CTRL || ioctl == VIDIOC_S_CTRL)
++			return false;
++		break;
++	case V4L2_CTRL_WHICH_DEF_VAL:
++		/* Default value cannot be changed */
++		if (ioctl == VIDIOC_S_EXT_CTRLS ||
++		    ioctl == VIDIOC_TRY_EXT_CTRLS) {
++			c->error_idx = c->count;
++			return false;
++		}
++		return true;
++	case V4L2_CTRL_WHICH_CUR_VAL:
++		return true;
++	case V4L2_CTRL_WHICH_REQUEST_VAL:
++		c->error_idx = c->count;
++		return false;
++	}
++
+ 	/* Check that all controls are from the same control class. */
+ 	for (i = 0; i < c->count; i++) {
+ 		if (V4L2_CTRL_ID2WHICH(c->controls[i].id) != c->which) {
+-			c->error_idx = i;
+-			return 0;
++			c->error_idx = ioctl == VIDIOC_TRY_EXT_CTRLS ? i :
++								      c->count;
++			return false;
+ 		}
+ 	}
+-	return 1;
++	return true;
+ }
+ 
+ static int check_fmt(struct file *file, enum v4l2_buf_type type)
+@@ -2187,7 +2205,7 @@ static int v4l_g_ctrl(const struct v4l2_ioctl_ops *ops,
+ 	ctrls.controls = &ctrl;
+ 	ctrl.id = p->id;
+ 	ctrl.value = p->value;
+-	if (check_ext_ctrls(&ctrls, 1)) {
++	if (check_ext_ctrls(&ctrls, VIDIOC_G_CTRL)) {
+ 		int ret = ops->vidioc_g_ext_ctrls(file, fh, &ctrls);
+ 
+ 		if (ret == 0)
+@@ -2206,6 +2224,7 @@ static int v4l_s_ctrl(const struct v4l2_ioctl_ops *ops,
+ 		test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags) ? fh : NULL;
+ 	struct v4l2_ext_controls ctrls;
+ 	struct v4l2_ext_control ctrl;
++	int ret;
+ 
+ 	if (vfh && vfh->ctrl_handler)
+ 		return v4l2_s_ctrl(vfh, vfh->ctrl_handler, p);
+@@ -2221,9 +2240,11 @@ static int v4l_s_ctrl(const struct v4l2_ioctl_ops *ops,
+ 	ctrls.controls = &ctrl;
+ 	ctrl.id = p->id;
+ 	ctrl.value = p->value;
+-	if (check_ext_ctrls(&ctrls, 1))
+-		return ops->vidioc_s_ext_ctrls(file, fh, &ctrls);
+-	return -EINVAL;
++	if (!check_ext_ctrls(&ctrls, VIDIOC_S_CTRL))
++		return -EINVAL;
++	ret = ops->vidioc_s_ext_ctrls(file, fh, &ctrls);
++	p->value = ctrl.value;
++	return ret;
+ }
+ 
+ static int v4l_g_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2243,8 +2264,8 @@ static int v4l_g_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_g_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_g_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_G_EXT_CTRLS) ?
++				ops->vidioc_g_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ static int v4l_s_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2264,8 +2285,8 @@ static int v4l_s_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_s_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_s_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_S_EXT_CTRLS) ?
++				ops->vidioc_s_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ static int v4l_try_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+@@ -2285,8 +2306,8 @@ static int v4l_try_ext_ctrls(const struct v4l2_ioctl_ops *ops,
+ 					  vfd, vfd->v4l2_dev->mdev, p);
+ 	if (ops->vidioc_try_ext_ctrls == NULL)
+ 		return -ENOTTY;
+-	return check_ext_ctrls(p, 0) ? ops->vidioc_try_ext_ctrls(file, fh, p) :
+-					-EINVAL;
++	return check_ext_ctrls(p, VIDIOC_TRY_EXT_CTRLS) ?
++			ops->vidioc_try_ext_ctrls(file, fh, p) : -EINVAL;
+ }
+ 
+ /*
+diff --git a/drivers/memory/fsl_ifc.c b/drivers/memory/fsl_ifc.c
+index d062c2f8250f4..75a8c38df9394 100644
+--- a/drivers/memory/fsl_ifc.c
++++ b/drivers/memory/fsl_ifc.c
+@@ -263,7 +263,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 
+ 	ret = fsl_ifc_ctrl_init(fsl_ifc_ctrl_dev);
+ 	if (ret < 0)
+-		goto err;
++		goto err_unmap_nandirq;
+ 
+ 	init_waitqueue_head(&fsl_ifc_ctrl_dev->nand_wait);
+ 
+@@ -272,7 +272,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 	if (ret != 0) {
+ 		dev_err(&dev->dev, "failed to install irq (%d)\n",
+ 			fsl_ifc_ctrl_dev->irq);
+-		goto err_irq;
++		goto err_unmap_nandirq;
+ 	}
+ 
+ 	if (fsl_ifc_ctrl_dev->nand_irq) {
+@@ -281,17 +281,16 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 		if (ret != 0) {
+ 			dev_err(&dev->dev, "failed to install irq (%d)\n",
+ 				fsl_ifc_ctrl_dev->nand_irq);
+-			goto err_nandirq;
++			goto err_free_irq;
+ 		}
+ 	}
+ 
+ 	return 0;
+ 
+-err_nandirq:
+-	free_irq(fsl_ifc_ctrl_dev->nand_irq, fsl_ifc_ctrl_dev);
+-	irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq);
+-err_irq:
++err_free_irq:
+ 	free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev);
++err_unmap_nandirq:
++	irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq);
+ 	irq_dispose_mapping(fsl_ifc_ctrl_dev->irq);
+ err:
+ 	iounmap(fsl_ifc_ctrl_dev->gregs);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 45eed659b0c6d..77a011d5ff8c1 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -160,10 +160,62 @@ static const struct regmap_access_table rpcif_volatile_table = {
+ 	.n_yes_ranges	= ARRAY_SIZE(rpcif_volatile_ranges),
+ };
+ 
++
++/*
++ * Custom accessor functions to ensure SMRDR0 and SMWDR0 are always accessed
++ * with proper width. Requires SMENR_SPIDE to be correctly set before!
++ */
++static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
++{
++	struct rpcif *rpc = context;
++
++	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
++		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
++
++		if (spide == 0x8) {
++			*val = readb(rpc->base + reg);
++			return 0;
++		} else if (spide == 0xC) {
++			*val = readw(rpc->base + reg);
++			return 0;
++		} else if (spide != 0xF) {
++			return -EILSEQ;
++		}
++	}
++
++	*val = readl(rpc->base + reg);
++	return 0;
++
++}
++
++static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
++{
++	struct rpcif *rpc = context;
++
++	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
++		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
++
++		if (spide == 0x8) {
++			writeb(val, rpc->base + reg);
++			return 0;
++		} else if (spide == 0xC) {
++			writew(val, rpc->base + reg);
++			return 0;
++		} else if (spide != 0xF) {
++			return -EILSEQ;
++		}
++	}
++
++	writel(val, rpc->base + reg);
++	return 0;
++}
++
+ static const struct regmap_config rpcif_regmap_config = {
+ 	.reg_bits	= 32,
+ 	.val_bits	= 32,
+ 	.reg_stride	= 4,
++	.reg_read	= rpcif_reg_read,
++	.reg_write	= rpcif_reg_write,
+ 	.fast_io	= true,
+ 	.max_register	= RPCIF_PHYINT,
+ 	.volatile_table	= &rpcif_volatile_table,
+@@ -173,17 +225,15 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct resource *res;
+-	void __iomem *base;
+ 
+ 	rpc->dev = dev;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
+-	base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	rpc->base = devm_ioremap_resource(&pdev->dev, res);
++	if (IS_ERR(rpc->base))
++		return PTR_ERR(rpc->base);
+ 
+-	rpc->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+-					    &rpcif_regmap_config);
++	rpc->regmap = devm_regmap_init(&pdev->dev, NULL, rpc, &rpcif_regmap_config);
+ 	if (IS_ERR(rpc->regmap)) {
+ 		dev_err(&pdev->dev,
+ 			"failed to init regmap for rpcif, error %ld\n",
+@@ -354,20 +404,16 @@ void rpcif_prepare(struct rpcif *rpc, const struct rpcif_op *op, u64 *offs,
+ 			nbytes = op->data.nbytes;
+ 		rpc->xferlen = nbytes;
+ 
+-		rpc->enable |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes)) |
+-			RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
++		rpc->enable |= RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
+ 	}
+ }
+ EXPORT_SYMBOL(rpcif_prepare);
+ 
+ int rpcif_manual_xfer(struct rpcif *rpc)
+ {
+-	u32 smenr, smcr, pos = 0, max = 4;
++	u32 smenr, smcr, pos = 0, max = rpc->bus_size == 2 ? 8 : 4;
+ 	int ret = 0;
+ 
+-	if (rpc->bus_size == 2)
+-		max = 8;
+-
+ 	pm_runtime_get_sync(rpc->dev);
+ 
+ 	regmap_update_bits(rpc->regmap, RPCIF_PHYCNT,
+@@ -378,37 +424,36 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 	regmap_write(rpc->regmap, RPCIF_SMOPR, rpc->option);
+ 	regmap_write(rpc->regmap, RPCIF_SMDMCR, rpc->dummy);
+ 	regmap_write(rpc->regmap, RPCIF_SMDRENR, rpc->ddr);
++	regmap_write(rpc->regmap, RPCIF_SMADR, rpc->smadr);
+ 	smenr = rpc->enable;
+ 
+ 	switch (rpc->dir) {
+ 	case RPCIF_DATA_OUT:
+ 		while (pos < rpc->xferlen) {
+-			u32 nbytes = rpc->xferlen - pos;
+-			u32 data[2];
++			u32 bytes_left = rpc->xferlen - pos;
++			u32 nbytes, data[2];
+ 
+ 			smcr = rpc->smcr | RPCIF_SMCR_SPIE;
+-			if (nbytes > max) {
+-				nbytes = max;
++
++			/* nbytes may only be 1, 2, 4, or 8 */
++			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
++			if (bytes_left > nbytes)
+ 				smcr |= RPCIF_SMCR_SSLKP;
+-			}
++
++			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
++			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 
+ 			memcpy(data, rpc->buffer + pos, nbytes);
+-			if (nbytes > 4) {
++			if (nbytes == 8) {
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR1,
+ 					     data[0]);
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+ 					     data[1]);
+-			} else if (nbytes > 2) {
++			} else {
+ 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+ 					     data[0]);
+-			} else	{
+-				regmap_write(rpc->regmap, RPCIF_SMWDR0,
+-					     data[0] << 16);
+ 			}
+ 
+-			regmap_write(rpc->regmap, RPCIF_SMADR,
+-				     rpc->smadr + pos);
+-			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 			regmap_write(rpc->regmap, RPCIF_SMCR, smcr);
+ 			ret = wait_msg_xfer_end(rpc);
+ 			if (ret)
+@@ -448,14 +493,16 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 			break;
+ 		}
+ 		while (pos < rpc->xferlen) {
+-			u32 nbytes = rpc->xferlen - pos;
+-			u32 data[2];
++			u32 bytes_left = rpc->xferlen - pos;
++			u32 nbytes, data[2];
+ 
+-			if (nbytes > max)
+-				nbytes = max;
++			/* nbytes may only be 1, 2, 4, or 8 */
++			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
+ 
+ 			regmap_write(rpc->regmap, RPCIF_SMADR,
+ 				     rpc->smadr + pos);
++			smenr &= ~RPCIF_SMENR_SPIDE(0xF);
++			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
+ 			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ 			regmap_write(rpc->regmap, RPCIF_SMCR,
+ 				     rpc->smcr | RPCIF_SMCR_SPIE);
+@@ -463,18 +510,14 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ 			if (ret)
+ 				goto err_out;
+ 
+-			if (nbytes > 4) {
++			if (nbytes == 8) {
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR1,
+ 					    &data[0]);
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+ 					    &data[1]);
+-			} else if (nbytes > 2) {
+-				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+-					    &data[0]);
+-			} else	{
++			} else {
+ 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
+ 					    &data[0]);
+-				data[0] >>= 16;
+ 			}
+ 			memcpy(rpc->buffer + pos, data, nbytes);
+ 
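
The reworked rpcif_manual_xfer() above splits every transfer into chunks whose size is a power of two no larger than the bus maximum, which is exactly what bytes_left >= max ? max : (1 << ilog2(bytes_left)) computes. A minimal standalone sketch of that chunking loop, assuming a hypothetical 13-byte transfer and printf in place of the register writes:

	#include <stdio.h>

	/* Round down to the nearest power of two; mirrors 1 << ilog2(x). */
	static unsigned int floor_pow2(unsigned int x)
	{
		while (x & (x - 1))
			x &= x - 1;	/* clear the lowest set bit */
		return x;
	}

	int main(void)
	{
		unsigned int xferlen = 13, max = 8, pos = 0;

		while (pos < xferlen) {
			unsigned int bytes_left = xferlen - pos;
			/* nbytes may only be 1, 2, 4, or 8 */
			unsigned int nbytes = bytes_left >= max ?
					      max : floor_pow2(bytes_left);

			printf("chunk at %u: %u bytes\n", pos, nbytes);
			pos += nbytes;
		}
		return 0;
	}

For a 13-byte transfer with max = 8 this yields chunks of 8, 4 and 1 bytes, each a size the SPIDE field can encode.
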
+diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
+index 4a4573fa7b0fa..7c51602a8ff18 100644
+--- a/drivers/memstick/core/ms_block.c
++++ b/drivers/memstick/core/ms_block.c
+@@ -1736,7 +1736,7 @@ static int msb_init_card(struct memstick_dev *card)
+ 	msb->pages_in_block = boot_block->attr.block_size * 2;
+ 	msb->block_size = msb->page_size * msb->pages_in_block;
+ 
+-	if (msb->page_size > PAGE_SIZE) {
++	if ((size_t)msb->page_size > PAGE_SIZE) {
+ 		/* this isn't supported by linux at all, anyway*/
+ 		dbg("device page %d size isn't supported", msb->page_size);
+ 		return -EINVAL;
+diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c
+index f9a93b0565e15..435d4c058b20e 100644
+--- a/drivers/memstick/host/jmb38x_ms.c
++++ b/drivers/memstick/host/jmb38x_ms.c
+@@ -882,7 +882,7 @@ static struct memstick_host *jmb38x_ms_alloc_host(struct jmb38x_ms *jm, int cnt)
+ 
+ 	iounmap(host->addr);
+ err_out_free:
+-	kfree(msh);
++	memstick_free_host(msh);
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index 615a83782e55d..7aba0fdeba177 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -839,15 +839,15 @@ static void r592_remove(struct pci_dev *pdev)
+ 	}
+ 	memstick_remove_host(dev->host);
+ 
++	if (dev->dummy_dma_page)
++		dma_free_coherent(&pdev->dev, PAGE_SIZE, dev->dummy_dma_page,
++			dev->dummy_dma_page_physical_address);
++
+ 	free_irq(dev->irq, dev);
+ 	iounmap(dev->mmio);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ 	memstick_free_host(dev->host);
+-
+-	if (dev->dummy_dma_page)
+-		dma_free_coherent(&pdev->dev, PAGE_SIZE, dev->dummy_dma_page,
+-			dev->dummy_dma_page_physical_address);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/mfd/altera-sysmgr.c b/drivers/mfd/altera-sysmgr.c
+index 20cb294c75122..5d3715a28b28e 100644
+--- a/drivers/mfd/altera-sysmgr.c
++++ b/drivers/mfd/altera-sysmgr.c
+@@ -153,7 +153,7 @@ static int sysmgr_probe(struct platform_device *pdev)
+ 		if (!base)
+ 			return -ENOMEM;
+ 
+-		sysmgr_config.max_register = resource_size(res) - 3;
++		sysmgr_config.max_register = resource_size(res) - 4;
+ 		regmap = devm_regmap_init_mmio(dev, base, &sysmgr_config);
+ 	}
+ 
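
The altera-sysmgr change above is a one-line off-by-one: with 32-bit (4-byte) registers, the offset of the last register in a region of resource_size(res) bytes is size - 4, not size - 3. A quick worked check, assuming a hypothetical 0x100-byte region:

	#include <stdio.h>

	int main(void)
	{
		unsigned int size = 0x100;	/* hypothetical region size */
		unsigned int stride = 4;	/* 32-bit registers */

		/* Registers sit at offsets 0, 4, 8, ... so the last one is: */
		printf("max_register = 0x%x\n", size - stride);	/* 0xfc */
		return 0;
	}
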
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 83e676a096dc1..852129ea07666 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -50,6 +50,7 @@ enum dln2_handle {
+ 	DLN2_HANDLE_GPIO,
+ 	DLN2_HANDLE_I2C,
+ 	DLN2_HANDLE_SPI,
++	DLN2_HANDLE_ADC,
+ 	DLN2_HANDLES
+ };
+ 
+@@ -653,6 +654,7 @@ enum {
+ 	DLN2_ACPI_MATCH_GPIO	= 0,
+ 	DLN2_ACPI_MATCH_I2C	= 1,
+ 	DLN2_ACPI_MATCH_SPI	= 2,
++	DLN2_ACPI_MATCH_ADC	= 3,
+ };
+ 
+ static struct dln2_platform_data dln2_pdata_gpio = {
+@@ -683,6 +685,16 @@ static struct mfd_cell_acpi_match dln2_acpi_match_spi = {
+ 	.adr = DLN2_ACPI_MATCH_SPI,
+ };
+ 
++/* Only one ADC port supported */
++static struct dln2_platform_data dln2_pdata_adc = {
++	.handle = DLN2_HANDLE_ADC,
++	.port = 0,
++};
++
++static struct mfd_cell_acpi_match dln2_acpi_match_adc = {
++	.adr = DLN2_ACPI_MATCH_ADC,
++};
++
+ static const struct mfd_cell dln2_devs[] = {
+ 	{
+ 		.name = "dln2-gpio",
+@@ -702,6 +714,12 @@ static const struct mfd_cell dln2_devs[] = {
+ 		.platform_data = &dln2_pdata_spi,
+ 		.pdata_size = sizeof(struct dln2_platform_data),
+ 	},
++	{
++		.name = "dln2-adc",
++		.acpi_match = &dln2_acpi_match_adc,
++		.platform_data = &dln2_pdata_adc,
++		.pdata_size = sizeof(struct dln2_platform_data),
++	},
+ };
+ 
+ static void dln2_stop(struct dln2_dev *dln2)
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 79f5c6a18815a..684a011a63968 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -198,6 +198,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 			if (of_device_is_compatible(np, cell->of_compatible)) {
+ 				/* Ignore 'disabled' devices error free */
+ 				if (!of_device_is_available(np)) {
++					of_node_put(np);
+ 					ret = 0;
+ 					goto fail_alias;
+ 				}
+@@ -205,6 +206,7 @@ static int mfd_add_device(struct device *parent, int id,
+ 				ret = mfd_match_of_node_to_dev(pdev, np, cell);
+ 				if (ret == -EAGAIN)
+ 					continue;
++				of_node_put(np);
+ 				if (ret)
+ 					goto fail_alias;
+ 
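
Both mfd-core hunks above plug the same leak: for_each_child_of_node() takes a reference on each child it hands out and only drops it when the loop advances, so every early exit has to call of_node_put() by hand. A minimal sketch of the rule, with a hypothetical availability check standing in for the driver's logic:

	#include <linux/errno.h>
	#include <linux/of.h>

	/* Fail if any child node is disabled. for_each_child_of_node()
	 * holds a reference on the current child, so leaving the loop
	 * early must drop it explicitly. */
	static int foo_check_children(struct device_node *parent)
	{
		struct device_node *np;

		for_each_child_of_node(parent, np) {
			if (!of_device_is_available(np)) {
				of_node_put(np);
				return -ENODEV;
			}
		}
		return 0;	/* the loop dropped its references itself */
	}
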
+diff --git a/drivers/mfd/motorola-cpcap.c b/drivers/mfd/motorola-cpcap.c
+index 6fb206da27298..265464b5d7cc5 100644
+--- a/drivers/mfd/motorola-cpcap.c
++++ b/drivers/mfd/motorola-cpcap.c
+@@ -202,6 +202,13 @@ static const struct of_device_id cpcap_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, cpcap_of_match);
+ 
++static const struct spi_device_id cpcap_spi_ids[] = {
++	{ .name = "cpcap", },
++	{ .name = "6556002", },
++	{},
++};
++MODULE_DEVICE_TABLE(spi, cpcap_spi_ids);
++
+ static const struct regmap_config cpcap_regmap_config = {
+ 	.reg_bits = 16,
+ 	.reg_stride = 4,
+@@ -342,6 +349,7 @@ static struct spi_driver cpcap_driver = {
+ 		.pm = &cpcap_pm,
+ 	},
+ 	.probe = cpcap_probe,
++	.id_table = cpcap_spi_ids,
+ };
+ module_spi_driver(cpcap_driver);
+ 
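
The cpcap change above, like the sprd-sc27xx one below, follows the same recipe: an SPI driver that only provides of_match_table can still probe via OF, but module autoloading goes through the spi_device_id table, so one is added and wired up through .id_table. A stripped-down sketch of the resulting driver shape, with hypothetical names throughout:

	#include <linux/module.h>
	#include <linux/spi/spi.h>

	static int foo_probe(struct spi_device *spi)
	{
		return 0;	/* hypothetical device setup */
	}

	static const struct of_device_id foo_of_match[] = {
		{ .compatible = "vendor,foo" },	/* hypothetical */
		{ }
	};
	MODULE_DEVICE_TABLE(of, foo_of_match);

	static const struct spi_device_id foo_spi_ids[] = {
		{ .name = "foo" },	/* mirrors the OF entries */
		{ }
	};
	MODULE_DEVICE_TABLE(spi, foo_spi_ids);

	static struct spi_driver foo_driver = {
		.driver = {
			.name = "foo",
			.of_match_table = foo_of_match,
		},
		.probe = foo_probe,
		.id_table = foo_spi_ids,
	};
	module_spi_driver(foo_driver);

	MODULE_LICENSE("GPL");
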
+diff --git a/drivers/mfd/sprd-sc27xx-spi.c b/drivers/mfd/sprd-sc27xx-spi.c
+index 6b7956604a0f0..9890882db1ed3 100644
+--- a/drivers/mfd/sprd-sc27xx-spi.c
++++ b/drivers/mfd/sprd-sc27xx-spi.c
+@@ -236,6 +236,12 @@ static const struct of_device_id sprd_pmic_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, sprd_pmic_match);
+ 
++static const struct spi_device_id sprd_pmic_spi_ids[] = {
++	{ .name = "sc2731", .driver_data = (unsigned long)&sc2731_data },
++	{},
++};
++MODULE_DEVICE_TABLE(spi, sprd_pmic_spi_ids);
++
+ static struct spi_driver sprd_pmic_driver = {
+ 	.driver = {
+ 		.name = "sc27xx-pmic",
+@@ -243,6 +249,7 @@ static struct spi_driver sprd_pmic_driver = {
+ 		.pm = &sprd_pmic_pm_ops,
+ 	},
+ 	.probe = sprd_pmic_probe,
++	.id_table = sprd_pmic_spi_ids,
+ };
+ 
+ static int __init sprd_pmic_init(void)
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 71313961cc54d..8acf76ea431b1 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -506,7 +506,7 @@ config MMC_OMAP_HS
+ 
+ config MMC_WBSD
+ 	tristate "Winbond W83L51xD SD/MMC Card Interface support"
+-	depends on ISA_DMA_API
++	depends on ISA_DMA_API && !M68K
+ 	help
+ 	  This selects the Winbond(R) W83L51xD Secure digital and
+ 	  Multimedia card Interface.
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 33cb70aa02aa8..efcaff479404a 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2014,7 +2014,8 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
+ 				 * delayed. Allowing the transfer to take place
+ 				 * avoids races and keeps things simple.
+ 				 */
+-				if (err != -ETIMEDOUT) {
++				if (err != -ETIMEDOUT &&
++				    host->dir_status == DW_MCI_RECV_STATUS) {
+ 					state = STATE_SENDING_DATA;
+ 					continue;
+ 				}
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index 6c9d38132f74c..16d1c7a43d331 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -566,37 +566,37 @@ static int moxart_probe(struct platform_device *pdev)
+ 	if (!mmc) {
+ 		dev_err(dev, "mmc_alloc_host failed\n");
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &res_mmc);
+ 	if (ret) {
+ 		dev_err(dev, "of_address_to_resource failed\n");
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(node, 0);
+ 	if (irq <= 0) {
+ 		dev_err(dev, "irq_of_parse_and_map failed\n");
+ 		ret = -EINVAL;
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(clk)) {
+ 		ret = PTR_ERR(clk);
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	reg_mmc = devm_ioremap_resource(dev, &res_mmc);
+ 	if (IS_ERR(reg_mmc)) {
+ 		ret = PTR_ERR(reg_mmc);
+-		goto out;
++		goto out_mmc;
+ 	}
+ 
+ 	ret = mmc_of_parse(mmc);
+ 	if (ret)
+-		goto out;
++		goto out_mmc;
+ 
+ 	host = mmc_priv(mmc);
+ 	host->mmc = mmc;
+@@ -621,6 +621,14 @@ static int moxart_probe(struct platform_device *pdev)
+ 			ret = -EPROBE_DEFER;
+ 			goto out;
+ 		}
++		if (!IS_ERR(host->dma_chan_tx)) {
++			dma_release_channel(host->dma_chan_tx);
++			host->dma_chan_tx = NULL;
++		}
++		if (!IS_ERR(host->dma_chan_rx)) {
++			dma_release_channel(host->dma_chan_rx);
++			host->dma_chan_rx = NULL;
++		}
+ 		dev_dbg(dev, "PIO mode transfer enabled\n");
+ 		host->have_dma = false;
+ 	} else {
+@@ -675,6 +683,11 @@ static int moxart_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ out:
++	if (!IS_ERR_OR_NULL(host->dma_chan_tx))
++		dma_release_channel(host->dma_chan_tx);
++	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
++		dma_release_channel(host->dma_chan_rx);
++out_mmc:
+ 	if (mmc)
+ 		mmc_free_host(mmc);
+ 	return ret;
+@@ -687,9 +700,9 @@ static int moxart_remove(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, NULL);
+ 
+-	if (!IS_ERR(host->dma_chan_tx))
++	if (!IS_ERR_OR_NULL(host->dma_chan_tx))
+ 		dma_release_channel(host->dma_chan_tx);
+-	if (!IS_ERR(host->dma_chan_rx))
++	if (!IS_ERR_OR_NULL(host->dma_chan_rx))
+ 		dma_release_channel(host->dma_chan_rx);
+ 	mmc_remove_host(mmc);
+ 	mmc_free_host(mmc);
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index b06b4dcb7c782..9e6dab7e34242 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -8,6 +8,7 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/dma-mapping.h>
++#include <linux/iopoll.h>
+ #include <linux/ioport.h>
+ #include <linux/irq.h>
+ #include <linux/of_address.h>
+@@ -2330,6 +2331,7 @@ static void msdc_cqe_enable(struct mmc_host *mmc)
+ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ {
+ 	struct msdc_host *host = mmc_priv(mmc);
++	unsigned int val = 0;
+ 
+ 	/* disable cmdq irq */
+ 	sdr_clr_bits(host->base + MSDC_INTEN, MSDC_INT_CMDQ);
+@@ -2339,6 +2341,9 @@ static void msdc_cqe_disable(struct mmc_host *mmc, bool recovery)
+ 	if (recovery) {
+ 		sdr_set_field(host->base + MSDC_DMA_CTRL,
+ 			      MSDC_DMA_CTRL_STOP, 1);
++		if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CFG, val,
++			!(val & MSDC_DMA_CFG_STS), 1, 3000)))
++			return;
+ 		msdc_reset_hw(host);
+ 	}
+ }
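
The mtk-sd hunk above is a textbook use of readl_poll_timeout() from <linux/iopoll.h>: poll a register until a condition holds, with a bounded wait, instead of an open-coded loop. A small sketch of the idiom, assuming a hypothetical status register and busy bit:

	#include <linux/bits.h>
	#include <linux/io.h>
	#include <linux/iopoll.h>

	#define FOO_STATUS	0x04		/* hypothetical register */
	#define FOO_BUSY	BIT(0)		/* hypothetical busy flag */

	/* Wait up to 3 ms for the controller to go idle, checking every
	 * microsecond; returns 0 or -ETIMEDOUT, as in the hunk above. */
	static int foo_wait_idle(void __iomem *base)
	{
		u32 val;

		return readl_poll_timeout(base + FOO_STATUS, val,
					  !(val & FOO_BUSY), 1, 3000);
	}
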
+diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
+index 947581de78601..8c3655d3be961 100644
+--- a/drivers/mmc/host/mxs-mmc.c
++++ b/drivers/mmc/host/mxs-mmc.c
+@@ -552,6 +552,11 @@ static const struct of_device_id mxs_mmc_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mxs_mmc_dt_ids);
+ 
++static void mxs_mmc_regulator_disable(void *regulator)
++{
++	regulator_disable(regulator);
++}
++
+ static int mxs_mmc_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+@@ -591,6 +596,11 @@ static int mxs_mmc_probe(struct platform_device *pdev)
+ 				"Failed to enable vmmc regulator: %d\n", ret);
+ 			goto out_mmc_free;
+ 		}
++
++		ret = devm_add_action_or_reset(&pdev->dev, mxs_mmc_regulator_disable,
++					       reg_vmmc);
++		if (ret)
++			goto out_mmc_free;
+ 	}
+ 
+ 	ssp->clk = devm_clk_get(&pdev->dev, NULL);
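
The mxs-mmc hunk above leans on devm_add_action_or_reset() so the regulator is disabled again on every exit path, probe failure included, without touching remove(). A minimal sketch of the pattern as the patch applies it:

	#include <linux/device.h>
	#include <linux/regulator/consumer.h>

	static void foo_regulator_disable(void *regulator)
	{
		regulator_disable(regulator);
	}

	/* Enable a regulator and arm a managed undo: if this call or any
	 * later probe step fails, the device core runs the action. */
	static int foo_enable_vmmc(struct device *dev, struct regulator *reg)
	{
		int ret;

		ret = regulator_enable(reg);
		if (ret)
			return ret;

		return devm_add_action_or_reset(dev, foo_regulator_disable,
						reg);
	}
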
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 8f4d1f003f656..fd188b6d88f49 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -62,6 +62,8 @@
+ #define SDHCI_OMAP_IE		0x234
+ #define INT_CC_EN		BIT(0)
+ 
++#define SDHCI_OMAP_ISE		0x238
++
+ #define SDHCI_OMAP_AC12		0x23c
+ #define AC12_V1V8_SIGEN		BIT(19)
+ #define AC12_SCLK_SEL		BIT(23)
+@@ -113,6 +115,8 @@ struct sdhci_omap_host {
+ 	u32			hctl;
+ 	u32			sysctl;
+ 	u32			capa;
++	u32			ie;
++	u32			ise;
+ };
+ 
+ static void sdhci_omap_start_clock(struct sdhci_omap_host *omap_host);
+@@ -682,7 +686,8 @@ static void sdhci_omap_set_power(struct sdhci_host *host, unsigned char mode,
+ {
+ 	struct mmc_host *mmc = host->mmc;
+ 
+-	mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
++	if (!IS_ERR(mmc->supply.vmmc))
++		mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
+ }
+ 
+ static int sdhci_omap_enable_dma(struct sdhci_host *host)
+@@ -1244,14 +1249,23 @@ static void sdhci_omap_context_save(struct sdhci_omap_host *omap_host)
+ {
+ 	omap_host->con = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	omap_host->hctl = sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL);
++	omap_host->sysctl = sdhci_omap_readl(omap_host, SDHCI_OMAP_SYSCTL);
+ 	omap_host->capa = sdhci_omap_readl(omap_host, SDHCI_OMAP_CAPA);
++	omap_host->ie = sdhci_omap_readl(omap_host, SDHCI_OMAP_IE);
++	omap_host->ise = sdhci_omap_readl(omap_host, SDHCI_OMAP_ISE);
+ }
+ 
++/* Order matters here, HCTL must be restored in two phases */
+ static void sdhci_omap_context_restore(struct sdhci_omap_host *omap_host)
+ {
+-	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, omap_host->con);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, omap_host->hctl);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CAPA, omap_host->capa);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, omap_host->hctl);
++
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_SYSCTL, omap_host->sysctl);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, omap_host->con);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_IE, omap_host->ie);
++	sdhci_omap_writel(omap_host, SDHCI_OMAP_ISE, omap_host->ise);
+ }
+ 
+ static int __maybe_unused sdhci_omap_suspend(struct device *dev)
+diff --git a/drivers/most/most_usb.c b/drivers/most/most_usb.c
+index 2640c5b326a49..acabb7715b423 100644
+--- a/drivers/most/most_usb.c
++++ b/drivers/most/most_usb.c
+@@ -149,7 +149,8 @@ static inline int drci_rd_reg(struct usb_device *dev, u16 reg, u16 *buf)
+ 	retval = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 				 DRCI_READ_REQ, req_type,
+ 				 0x0000,
+-				 reg, dma_buf, sizeof(*dma_buf), 5 * HZ);
++				 reg, dma_buf, sizeof(*dma_buf),
++				 USB_CTRL_GET_TIMEOUT);
+ 	*buf = le16_to_cpu(*dma_buf);
+ 	kfree(dma_buf);
+ 
+@@ -176,7 +177,7 @@ static inline int drci_wr_reg(struct usb_device *dev, u16 reg, u16 data)
+ 			       reg,
+ 			       NULL,
+ 			       0,
+-			       5 * HZ);
++			       USB_CTRL_SET_TIMEOUT);
+ }
+ 
+ static inline int start_sync_ep(struct usb_device *usb_dev, u16 ep)
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index c8fd7f758938b..1532291989471 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -724,8 +724,6 @@ int del_mtd_device(struct mtd_info *mtd)
+ 
+ 	mutex_lock(&mtd_table_mutex);
+ 
+-	debugfs_remove_recursive(mtd->dbg.dfs_dir);
+-
+ 	if (idr_find(&mtd_idr, mtd->index) != mtd) {
+ 		ret = -ENODEV;
+ 		goto out_error;
+@@ -741,6 +739,8 @@ int del_mtd_device(struct mtd_info *mtd)
+ 		       mtd->index, mtd->name, mtd->usecount);
+ 		ret = -EBUSY;
+ 	} else {
++		debugfs_remove_recursive(mtd->dbg.dfs_dir);
++
+ 		/* Try to remove the NVMEM provider */
+ 		if (mtd->nvmem)
+ 			nvmem_unregister(mtd->nvmem);
+diff --git a/drivers/mtd/nand/raw/ams-delta.c b/drivers/mtd/nand/raw/ams-delta.c
+index ff1697f899ba6..13de39aa3288f 100644
+--- a/drivers/mtd/nand/raw/ams-delta.c
++++ b/drivers/mtd/nand/raw/ams-delta.c
+@@ -217,9 +217,8 @@ static int gpio_nand_setup_interface(struct nand_chip *this, int csline,
+ 
+ static int gpio_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -370,6 +369,13 @@ static int gpio_nand_probe(struct platform_device *pdev)
+ 	/* Release write protection */
+ 	gpiod_set_value(priv->gpiod_nwp, 0);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(this, 1);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 9cbcc698c64d8..53bd10738418b 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -973,6 +973,21 @@ static int anfc_setup_interface(struct nand_chip *chip, int target,
+ 		nvddr = nand_get_nvddr_timings(conf);
+ 		if (IS_ERR(nvddr))
+ 			return PTR_ERR(nvddr);
++
++		/*
++		 * The controller only supports data payload requests which are
++		 * a multiple of 4. In practice, most data accesses are 4-byte
++		 * aligned and this is not an issue. However, rounding up will
++		 * simply be refused by the controller if we reached the end of
++		 * the device *and* we are using the NV-DDR interface(!). In
++		 * this situation, unaligned data requests ending at the device
++		 * boundary will confuse the controller and cannot be performed.
++		 *
++		 * This is something that happens in nand_read_subpage() when
++		 * selecting software ECC support and must be avoided.
++		 */
++		if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT)
++			return -ENOTSUPP;
+ 	} else {
+ 		sdr = nand_get_sdr_timings(conf);
+ 		if (IS_ERR(sdr))
+diff --git a/drivers/mtd/nand/raw/au1550nd.c b/drivers/mtd/nand/raw/au1550nd.c
+index 99116896cfd6c..5aa3a06d740c7 100644
+--- a/drivers/mtd/nand/raw/au1550nd.c
++++ b/drivers/mtd/nand/raw/au1550nd.c
+@@ -239,9 +239,8 @@ static int au1550nd_exec_op(struct nand_chip *this,
+ 
+ static int au1550nd_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -310,6 +309,13 @@ static int au1550nd_probe(struct platform_device *pdev)
+ 	if (pd->devwidth)
+ 		this->options |= NAND_BUSWIDTH_16;
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(this, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "NAND scan failed with %d\n", ret);
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index a3e66155ae405..658f0cbe7ce8c 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -438,8 +438,10 @@ static int fsmc_correct_ecc1(struct nand_chip *chip,
+ 			     unsigned char *read_ecc,
+ 			     unsigned char *calc_ecc)
+ {
++	bool sm_order = chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER;
++
+ 	return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc,
+-				      chip->ecc.size, false);
++				      chip->ecc.size, sm_order);
+ }
+ 
+ /* Count the number of 0's in buff upto a max of max_bits */
+diff --git a/drivers/mtd/nand/raw/gpio.c b/drivers/mtd/nand/raw/gpio.c
+index fb7a086de35e5..fdf073d2e1b6c 100644
+--- a/drivers/mtd/nand/raw/gpio.c
++++ b/drivers/mtd/nand/raw/gpio.c
+@@ -163,9 +163,8 @@ static int gpio_nand_exec_op(struct nand_chip *chip,
+ 
+ static int gpio_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -365,6 +364,13 @@ static int gpio_nand_probe(struct platform_device *pdev)
+ 	if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))
+ 		gpiod_direction_output(gpiomtd->nwp, 1);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(chip, 1);
+ 	if (ret)
+ 		goto err_wp;
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index 29e8a546dcd60..e8476cd5147fe 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -609,6 +609,11 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 		dev_err(dev, "failed to get chip select: %d\n", ret);
+ 		return ret;
+ 	}
++	if (cs >= MAX_CS) {
++		dev_err(dev, "got invalid chip select: %d\n", cs);
++		return -EINVAL;
++	}
++
+ 	ebu_host->cs_num = cs;
+ 
+ 	resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
+diff --git a/drivers/mtd/nand/raw/mpc5121_nfc.c b/drivers/mtd/nand/raw/mpc5121_nfc.c
+index bcd4a556c959c..cb293c50acb87 100644
+--- a/drivers/mtd/nand/raw/mpc5121_nfc.c
++++ b/drivers/mtd/nand/raw/mpc5121_nfc.c
+@@ -605,9 +605,8 @@ static void mpc5121_nfc_free(struct device *dev, struct mtd_info *mtd)
+ 
+ static int mpc5121_nfc_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -772,6 +771,13 @@ static int mpc5121_nfc_probe(struct platform_device *op)
+ 		goto error;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Detect NAND chips */
+ 	retval = nand_scan(chip, be32_to_cpup(chips_no));
+ 	if (retval) {
+diff --git a/drivers/mtd/nand/raw/orion_nand.c b/drivers/mtd/nand/raw/orion_nand.c
+index 66211c9311d2f..2c87c7d892058 100644
+--- a/drivers/mtd/nand/raw/orion_nand.c
++++ b/drivers/mtd/nand/raw/orion_nand.c
+@@ -85,9 +85,8 @@ static void orion_nand_read_buf(struct nand_chip *chip, uint8_t *buf, int len)
+ 
+ static int orion_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -190,6 +189,13 @@ static int __init orion_nand_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	nc->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	ret = nand_scan(nc, 1);
+ 	if (ret)
+ 		goto no_dev;
+diff --git a/drivers/mtd/nand/raw/pasemi_nand.c b/drivers/mtd/nand/raw/pasemi_nand.c
+index 789f33312c15f..c176036453ed9 100644
+--- a/drivers/mtd/nand/raw/pasemi_nand.c
++++ b/drivers/mtd/nand/raw/pasemi_nand.c
+@@ -75,9 +75,8 @@ static int pasemi_device_ready(struct nand_chip *chip)
+ 
+ static int pasemi_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -154,6 +153,13 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
+ 	/* Enable the following for a flash based bad block table */
+ 	chip->bbt_options = NAND_BBT_USE_FLASH;
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(chip, 1);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/plat_nand.c b/drivers/mtd/nand/raw/plat_nand.c
+index 7711e1020c21c..0ee08c42cc35b 100644
+--- a/drivers/mtd/nand/raw/plat_nand.c
++++ b/drivers/mtd/nand/raw/plat_nand.c
+@@ -21,9 +21,8 @@ struct plat_nand_data {
+ 
+ static int plat_nand_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -94,6 +93,13 @@ static int plat_nand_probe(struct platform_device *pdev)
+ 			goto out;
+ 	}
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(&data->chip, pdata->chip.nr_chips);
+ 	if (err)
+diff --git a/drivers/mtd/nand/raw/socrates_nand.c b/drivers/mtd/nand/raw/socrates_nand.c
+index 70f8305c9b6e1..fb39cc7ebce03 100644
+--- a/drivers/mtd/nand/raw/socrates_nand.c
++++ b/drivers/mtd/nand/raw/socrates_nand.c
+@@ -119,9 +119,8 @@ static int socrates_nand_device_ready(struct nand_chip *nand_chip)
+ 
+ static int socrates_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -175,6 +174,13 @@ static int socrates_nand_probe(struct platform_device *ofdev)
+ 	/* TODO: I have no idea what real delay is. */
+ 	nand_chip->legacy.chip_delay = 20;	/* 20us command delay time */
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	nand_chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	dev_set_drvdata(&ofdev->dev, host);
+ 
+ 	res = nand_scan(nand_chip, 1);
+diff --git a/drivers/mtd/nand/raw/xway_nand.c b/drivers/mtd/nand/raw/xway_nand.c
+index 26751976e5026..236fd8c5a958f 100644
+--- a/drivers/mtd/nand/raw/xway_nand.c
++++ b/drivers/mtd/nand/raw/xway_nand.c
+@@ -148,9 +148,8 @@ static void xway_write_buf(struct nand_chip *chip, const u_char *buf, int len)
+ 
+ static int xway_attach_chip(struct nand_chip *chip)
+ {
+-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+-
+-	if (chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
++	    chip->ecc.algo == NAND_ECC_ALGO_UNKNOWN)
+ 		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
+ 
+ 	return 0;
+@@ -219,6 +218,13 @@ static int xway_nand_probe(struct platform_device *pdev)
+ 		    | NAND_CON_SE_P | NAND_CON_WP_P | NAND_CON_PRE_P
+ 		    | cs_flag, EBU_NAND_CON);
+ 
++	/*
++	 * This driver assumes that the default ECC engine should be TYPE_SOFT.
++	 * Set ->engine_type before registering the NAND devices in order to
++	 * provide a driver specific default value.
++	 */
++	data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
++
+ 	/* Scan to find existence of the device */
+ 	err = nand_scan(&data->chip, 1);
+ 	if (err)
+diff --git a/drivers/mtd/spi-nor/controllers/hisi-sfc.c b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+index 47fbf1d1e5573..516e502694780 100644
+--- a/drivers/mtd/spi-nor/controllers/hisi-sfc.c
++++ b/drivers/mtd/spi-nor/controllers/hisi-sfc.c
+@@ -477,7 +477,6 @@ static int hisi_spi_nor_remove(struct platform_device *pdev)
+ 
+ 	hisi_spi_nor_unregister_all(host);
+ 	mutex_destroy(&host->lock);
+-	clk_disable_unprepare(host->clk);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 6977f8248df7e..8fb6396e99004 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -150,7 +150,7 @@ config NET_FC
+ 
+ config IFB
+ 	tristate "Intermediate Functional Block support"
+-	depends on NET_CLS_ACT
++	depends on NET_ACT_MIRRED || NFT_FWD_NETDEV
+ 	select NET_REDIRECT
+ 	help
+ 	  This is an intermediate driver that allows sharing of
+diff --git a/drivers/net/bonding/bond_sysfs_slave.c b/drivers/net/bonding/bond_sysfs_slave.c
+index fd07561da0348..6a6cdd0bb2585 100644
+--- a/drivers/net/bonding/bond_sysfs_slave.c
++++ b/drivers/net/bonding/bond_sysfs_slave.c
+@@ -108,15 +108,15 @@ static ssize_t ad_partner_oper_port_state_show(struct slave *slave, char *buf)
+ }
+ static SLAVE_ATTR_RO(ad_partner_oper_port_state);
+ 
+-static const struct slave_attribute *slave_attrs[] = {
+-	&slave_attr_state,
+-	&slave_attr_mii_status,
+-	&slave_attr_link_failure_count,
+-	&slave_attr_perm_hwaddr,
+-	&slave_attr_queue_id,
+-	&slave_attr_ad_aggregator_id,
+-	&slave_attr_ad_actor_oper_port_state,
+-	&slave_attr_ad_partner_oper_port_state,
++static const struct attribute *slave_attrs[] = {
++	&slave_attr_state.attr,
++	&slave_attr_mii_status.attr,
++	&slave_attr_link_failure_count.attr,
++	&slave_attr_perm_hwaddr.attr,
++	&slave_attr_queue_id.attr,
++	&slave_attr_ad_aggregator_id.attr,
++	&slave_attr_ad_actor_oper_port_state.attr,
++	&slave_attr_ad_partner_oper_port_state.attr,
+ 	NULL
+ };
+ 
+@@ -137,24 +137,10 @@ const struct sysfs_ops slave_sysfs_ops = {
+ 
+ int bond_sysfs_slave_add(struct slave *slave)
+ {
+-	const struct slave_attribute **a;
+-	int err;
+-
+-	for (a = slave_attrs; *a; ++a) {
+-		err = sysfs_create_file(&slave->kobj, &((*a)->attr));
+-		if (err) {
+-			kobject_put(&slave->kobj);
+-			return err;
+-		}
+-	}
+-
+-	return 0;
++	return sysfs_create_files(&slave->kobj, slave_attrs);
+ }
+ 
+ void bond_sysfs_slave_del(struct slave *slave)
+ {
+-	const struct slave_attribute **a;
+-
+-	for (a = slave_attrs; *a; ++a)
+-		sysfs_remove_file(&slave->kobj, &((*a)->attr));
++	sysfs_remove_files(&slave->kobj, slave_attrs);
+ }
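
The bonding rewrite above swaps two open-coded loops for sysfs_create_files()/sysfs_remove_files(), which take a NULL-terminated array of struct attribute pointers and unwind any partially created files on failure. A sketch of the shape with two hypothetical read-only attributes:

	#include <linux/kobject.h>
	#include <linux/sysfs.h>

	static ssize_t foo_show(struct kobject *kobj,
				struct kobj_attribute *attr, char *buf)
	{
		return sysfs_emit(buf, "foo\n");	/* hypothetical value */
	}
	static struct kobj_attribute foo_attr = __ATTR_RO(foo);

	static ssize_t bar_show(struct kobject *kobj,
				struct kobj_attribute *attr, char *buf)
	{
		return sysfs_emit(buf, "bar\n");	/* hypothetical value */
	}
	static struct kobj_attribute bar_attr = __ATTR_RO(bar);

	/* NULL-terminated, the same layout the bonding patch switches to */
	static const struct attribute *foo_attrs[] = {
		&foo_attr.attr,
		&bar_attr.attr,
		NULL
	};

	static int foo_add_files(struct kobject *kobj)
	{
		return sysfs_create_files(kobj, foo_attrs);
	}

	static void foo_del_files(struct kobject *kobj)
	{
		sysfs_remove_files(kobj, foo_attrs);
	}
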
+diff --git a/drivers/net/can/dev/bittiming.c b/drivers/net/can/dev/bittiming.c
+index f49170eadd547..b1b5a82f08299 100644
+--- a/drivers/net/can/dev/bittiming.c
++++ b/drivers/net/can/dev/bittiming.c
+@@ -209,7 +209,7 @@ static int can_fixup_bittiming(struct net_device *dev, struct can_bittiming *bt,
+ 			       const struct can_bittiming_const *btc)
+ {
+ 	struct can_priv *priv = netdev_priv(dev);
+-	int tseg1, alltseg;
++	unsigned int tseg1, alltseg;
+ 	u64 brp64;
+ 
+ 	tseg1 = bt->prop_seg + bt->phase_seg1;
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 9ae48072b6c6e..9dc5231680b79 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1092,7 +1092,7 @@ static int mcp251xfd_chip_start(struct mcp251xfd_priv *priv)
+ 
+ 	err = mcp251xfd_chip_rx_int_enable(priv);
+ 	if (err)
+-		return err;
++		goto out_chip_stop;
+ 
+ 	err = mcp251xfd_chip_ecc_init(priv);
+ 	if (err)
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index 8e9102482c52a..c672320e0f15a 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -664,7 +664,7 @@ int es58x_rx_err_msg(struct net_device *netdev, enum es58x_err error,
+ 	struct can_device_stats *can_stats = &can->can_stats;
+ 	struct can_frame *cf = NULL;
+ 	struct sk_buff *skb;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!netif_running(netdev)) {
+ 		if (net_ratelimit())
+@@ -823,8 +823,6 @@ int es58x_rx_err_msg(struct net_device *netdev, enum es58x_err error,
+ 			can->state = CAN_STATE_BUS_OFF;
+ 			can_bus_off(netdev);
+ 			ret = can->do_set_mode(netdev, CAN_MODE_STOP);
+-			if (ret)
+-				return ret;
+ 		}
+ 		break;
+ 
+@@ -881,7 +879,7 @@ int es58x_rx_err_msg(struct net_device *netdev, enum es58x_err error,
+ 					ES58X_EVENT_BUSOFF, timestamp);
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index f5b2e5e87da43..55fe83b88737d 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -639,7 +639,10 @@ static void mv88e6393x_phylink_validate(struct mv88e6xxx_chip *chip, int port,
+ 					unsigned long *mask,
+ 					struct phylink_link_state *state)
+ {
+-	if (port == 0 || port == 9 || port == 10) {
++	bool is_6191x =
++		chip->info->prod_num == MV88E6XXX_PORT_SWITCH_ID_PROD_6191X;
++
++	if (((port == 0 || port == 9) && !is_6191x) || port == 10) {
+ 		phylink_set(mask, 10000baseT_Full);
+ 		phylink_set(mask, 10000baseKR_Full);
+ 		phylink_set(mask, 10000baseCR_Full);
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 0ba3762d5c219..0b81b5eb36bd1 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -1367,12 +1367,12 @@ out:
+ static bool felix_rxtstamp(struct dsa_switch *ds, int port,
+ 			   struct sk_buff *skb, unsigned int type)
+ {
+-	u8 *extraction = skb->data - ETH_HLEN - OCELOT_TAG_LEN;
++	u32 tstamp_lo = OCELOT_SKB_CB(skb)->tstamp_lo;
+ 	struct skb_shared_hwtstamps *shhwtstamps;
+ 	struct ocelot *ocelot = ds->priv;
+-	u32 tstamp_lo, tstamp_hi;
+ 	struct timespec64 ts;
+-	u64 tstamp, val;
++	u32 tstamp_hi;
++	u64 tstamp;
+ 
+ 	/* If the "no XTR IRQ" workaround is in use, tell DSA to defer this skb
+ 	 * for RX timestamping. Then free it, and poll for its copy through
+@@ -1387,9 +1387,6 @@ static bool felix_rxtstamp(struct dsa_switch *ds, int port,
+ 	ocelot_ptp_gettime64(&ocelot->ptp_info, &ts);
+ 	tstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+ 
+-	ocelot_xfh_get_rew_val(extraction, &val);
+-	tstamp_lo = (u32)val;
+-
+ 	tstamp_hi = tstamp >> 32;
+ 	if ((tstamp & 0xffffffff) < tstamp_lo)
+ 		tstamp_hi--;
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 75897a3690969..ffbe5b6b2655b 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -457,7 +457,7 @@ int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+ 			 * anymore then clear the whole member
+ 			 * config so it can be reused.
+ 			 */
+-			if (!vlanmc.member && vlanmc.untag) {
++			if (!vlanmc.member) {
+ 				vlanmc.vid = 0;
+ 				vlanmc.priority = 0;
+ 				vlanmc.fid = 0;
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+index a89093bc6c6ad..9e3b572ed999e 100644
+--- a/drivers/net/dsa/rtl8366rb.c
++++ b/drivers/net/dsa/rtl8366rb.c
+@@ -1350,7 +1350,7 @@ static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
+ 
+ static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
+ {
+-	unsigned int max = RTL8366RB_NUM_VLANS;
++	unsigned int max = RTL8366RB_NUM_VLANS - 1;
+ 
+ 	if (smi->vlan4k_enabled)
+ 		max = RTL8366RB_NUM_VIDS - 1;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index b2cd3bdba9f89..533b8519ec352 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1331,6 +1331,10 @@
+ #define MDIO_VEND2_PMA_CDR_CONTROL	0x8056
+ #endif
+ 
++#ifndef MDIO_VEND2_PMA_MISC_CTRL0
++#define MDIO_VEND2_PMA_MISC_CTRL0	0x8090
++#endif
++
+ #ifndef MDIO_CTRL1_SPEED1G
+ #define MDIO_CTRL1_SPEED1G		(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
+ #endif
+@@ -1389,6 +1393,10 @@
+ #define XGBE_PMA_RX_RST_0_RESET_ON	0x10
+ #define XGBE_PMA_RX_RST_0_RESET_OFF	0x00
+ 
++#define XGBE_PMA_PLL_CTRL_MASK		BIT(15)
++#define XGBE_PMA_PLL_CTRL_ENABLE	BIT(15)
++#define XGBE_PMA_PLL_CTRL_DISABLE	0x0000
++
+ /* Bit setting and getting macros
+  *  The get macro will extract the current bit field value from within
+  *  the variable
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 18e48b3bc402b..213769054391c 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -1977,12 +1977,26 @@ static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata)
+ 	}
+ }
+ 
++static void xgbe_phy_pll_ctrl(struct xgbe_prv_data *pdata, bool enable)
++{
++	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0,
++			 XGBE_PMA_PLL_CTRL_MASK,
++			 enable ? XGBE_PMA_PLL_CTRL_ENABLE
++				: XGBE_PMA_PLL_CTRL_DISABLE);
++
++	/* Wait for command to complete */
++	usleep_range(100, 200);
++}
++
+ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 					unsigned int cmd, unsigned int sub_cmd)
+ {
+ 	unsigned int s0 = 0;
+ 	unsigned int wait;
+ 
++	/* Disable PLL re-initialization during FW command processing */
++	xgbe_phy_pll_ctrl(pdata, false);
++
+ 	/* Log if a previous command did not complete */
+ 	if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) {
+ 		netif_dbg(pdata, link, pdata->netdev,
+@@ -2003,7 +2017,7 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 	wait = XGBE_RATECHANGE_COUNT;
+ 	while (wait--) {
+ 		if (!XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS))
+-			return;
++			goto reenable_pll;
+ 
+ 		usleep_range(1000, 2000);
+ 	}
+@@ -2013,6 +2027,10 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata,
+ 
+ 	/* Reset on error */
+ 	xgbe_phy_rx_reset(pdata);
++
++reenable_pll:
++	/* Enable PLL re-initialization */
++	xgbe_phy_pll_ctrl(pdata, true);
+ }
+ 
+ static void xgbe_phy_rrc(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index f20b57b8cd70e..6bbf99e9273d5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -13359,7 +13359,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 
+ 	bnxt_inv_fw_health_reg(bp);
+-	bnxt_dl_register(bp);
++	rc = bnxt_dl_register(bp);
++	if (rc)
++		goto init_err_dl;
+ 
+ 	rc = register_netdev(dev);
+ 	if (rc)
+@@ -13379,6 +13381,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ init_err_cleanup:
+ 	bnxt_dl_unregister(bp);
++init_err_dl:
+ 	bnxt_shutdown_tc(bp);
+ 	bnxt_clear_int_mode(bp);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+index bb228619ec641..56ee46fae0ac6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+@@ -133,7 +133,7 @@ void bnxt_dl_fw_reporters_create(struct bnxt *bp)
+ {
+ 	struct bnxt_fw_health *health = bp->fw_health;
+ 
+-	if (!bp->dl || !health)
++	if (!health)
+ 		return;
+ 
+ 	if (!(bp->fw_cap & BNXT_FW_CAP_HOT_RESET) || health->fw_reset_reporter)
+@@ -187,7 +187,7 @@ void bnxt_dl_fw_reporters_destroy(struct bnxt *bp, bool all)
+ {
+ 	struct bnxt_fw_health *health = bp->fw_health;
+ 
+-	if (!bp->dl || !health)
++	if (!health)
+ 		return;
+ 
+ 	if ((all || !(bp->fw_cap & BNXT_FW_CAP_HOT_RESET)) &&
+@@ -744,6 +744,7 @@ static void bnxt_dl_params_unregister(struct bnxt *bp)
+ int bnxt_dl_register(struct bnxt *bp)
+ {
+ 	struct devlink_port_attrs attrs = {};
++	struct bnxt_dl *bp_dl;
+ 	struct devlink *dl;
+ 	int rc;
+ 
+@@ -756,7 +757,9 @@ int bnxt_dl_register(struct bnxt *bp)
+ 		return -ENOMEM;
+ 	}
+ 
+-	bnxt_link_bp_to_dl(bp, dl);
++	bp->dl = dl;
++	bp_dl = devlink_priv(dl);
++	bp_dl->bp = bp;
+ 
+ 	/* Add switchdev eswitch mode setting, if SRIOV supported */
+ 	if (pci_find_ext_capability(bp->pdev, PCI_EXT_CAP_ID_SRIOV) &&
+@@ -794,7 +797,6 @@ err_dl_port_unreg:
+ err_dl_unreg:
+ 	devlink_unregister(dl);
+ err_dl_free:
+-	bnxt_link_bp_to_dl(bp, NULL);
+ 	devlink_free(dl);
+ 	return rc;
+ }
+@@ -803,9 +805,6 @@ void bnxt_dl_unregister(struct bnxt *bp)
+ {
+ 	struct devlink *dl = bp->dl;
+ 
+-	if (!dl)
+-		return;
+-
+ 	if (BNXT_PF(bp)) {
+ 		bnxt_dl_params_unregister(bp);
+ 		devlink_port_unregister(&bp->dl_port);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
+index d22cab5d6856a..365f1e50f5959 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
+@@ -20,19 +20,6 @@ static inline struct bnxt *bnxt_get_bp_from_dl(struct devlink *dl)
+ 	return ((struct bnxt_dl *)devlink_priv(dl))->bp;
+ }
+ 
+-/* To clear devlink pointer from bp, pass NULL dl */
+-static inline void bnxt_link_bp_to_dl(struct bnxt *bp, struct devlink *dl)
+-{
+-	bp->dl = dl;
+-
+-	/* add a back pointer in dl to bp */
+-	if (dl) {
+-		struct bnxt_dl *bp_dl = devlink_priv(dl);
+-
+-		bp_dl->bp = bp;
+-	}
+-}
+-
+ #define NVM_OFF_MSIX_VEC_PER_PF_MAX	108
+ #define NVM_OFF_MSIX_VEC_PER_PF_MIN	114
+ #define NVM_OFF_IGNORE_ARI		164
+diff --git a/drivers/net/ethernet/cavium/thunder/nic_main.c b/drivers/net/ethernet/cavium/thunder/nic_main.c
+index 9361f964bb9b2..816453a4f8d6c 100644
+--- a/drivers/net/ethernet/cavium/thunder/nic_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nic_main.c
+@@ -1193,7 +1193,7 @@ static int nic_register_interrupts(struct nicpf *nic)
+ 		dev_err(&nic->pdev->dev,
+ 			"Request for #%d msix vectors failed, returned %d\n",
+ 			   nic->num_vec, ret);
+-		return 1;
++		return ret;
+ 	}
+ 
+ 	/* Register mailbox interrupt handler */
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index e2b290135fd97..a61107e05216c 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1224,7 +1224,7 @@ static int nicvf_register_misc_interrupt(struct nicvf *nic)
+ 	if (ret < 0) {
+ 		netdev_err(nic->netdev,
+ 			   "Req for #%d msix vectors failed\n", nic->num_vec);
+-		return 1;
++		return ret;
+ 	}
+ 
+ 	sprintf(nic->irq_name[irq], "%s Mbox", "NICVF");
+@@ -1243,7 +1243,7 @@ static int nicvf_register_misc_interrupt(struct nicvf *nic)
+ 	if (!nicvf_check_pf_ready(nic)) {
+ 		nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0);
+ 		nicvf_unregister_interrupts(nic);
+-		return 1;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+index 83ed10ac86606..7080cb6c83e4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+@@ -2011,12 +2011,15 @@ static int cxgb4_get_module_info(struct net_device *dev,
+ 		if (ret)
+ 			return ret;
+ 
+-		if (!sff8472_comp || (sff_diag_type & 4)) {
++		if (!sff8472_comp || (sff_diag_type & SFP_DIAG_ADDRMODE)) {
+ 			modinfo->type = ETH_MODULE_SFF_8079;
+ 			modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+ 		} else {
+ 			modinfo->type = ETH_MODULE_SFF_8472;
+-			modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++			if (sff_diag_type & SFP_DIAG_IMPLEMENTED)
++				modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++			else
++				modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN / 2;
+ 		}
+ 		break;
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+index 002fc62ea7262..63bc956d20376 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+@@ -293,6 +293,8 @@ enum {
+ #define I2C_PAGE_SIZE		0x100
+ #define SFP_DIAG_TYPE_ADDR	0x5c
+ #define SFP_DIAG_TYPE_LEN	0x1
++#define SFP_DIAG_ADDRMODE	BIT(2)
++#define SFP_DIAG_IMPLEMENTED	BIT(6)
+ #define SFF_8472_COMP_ADDR	0x5e
+ #define SFF_8472_COMP_LEN	0x1
+ #define SFF_REV_ADDR		0x1
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+index bcad69c480740..4af5561cbfc54 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+@@ -870,7 +870,7 @@ static void do_abort_syn_rcv(struct sock *child, struct sock *parent)
+ 		 * created only after 3 way handshake is done.
+ 		 */
+ 		sock_orphan(child);
+-		percpu_counter_inc((child)->sk_prot->orphan_count);
++		INC_ORPHAN_COUNT(child);
+ 		chtls_release_resources(child);
+ 		chtls_conn_done(child);
+ 	} else {
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+index b1161bdeda4dc..f61ca657601ca 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h
+@@ -95,7 +95,7 @@ struct deferred_skb_cb {
+ #define WSCALE_OK(tp) ((tp)->rx_opt.wscale_ok)
+ #define TSTAMP_OK(tp) ((tp)->rx_opt.tstamp_ok)
+ #define SACK_OK(tp) ((tp)->rx_opt.sack_ok)
+-#define INC_ORPHAN_COUNT(sk) percpu_counter_inc((sk)->sk_prot->orphan_count)
++#define INC_ORPHAN_COUNT(sk) this_cpu_inc(*(sk)->sk_prot->orphan_count)
+ 
+ /* TLS SKB */
+ #define skb_ulp_tls_inline(skb)      (ULP_SKB_CB(skb)->ulp.tls.ofld)
+diff --git a/drivers/net/ethernet/dec/tulip/winbond-840.c b/drivers/net/ethernet/dec/tulip/winbond-840.c
+index 1876f15dd8279..1bd76dd975379 100644
+--- a/drivers/net/ethernet/dec/tulip/winbond-840.c
++++ b/drivers/net/ethernet/dec/tulip/winbond-840.c
+@@ -877,7 +877,7 @@ static void init_registers(struct net_device *dev)
+ 		8000	16 longwords		0200 2 longwords	2000 32 longwords
+ 		C000	32  longwords		0400 4 longwords */
+ 
+-#if defined (__i386__) && !defined(MODULE)
++#if defined (__i386__) && !defined(MODULE) && !defined(CONFIG_UML)
+ 	/* When not a module we can work around broken '486 PCI boards. */
+ 	if (boot_cpu_data.x86 <= 4) {
+ 		i |= 0x4800;
+diff --git a/drivers/net/ethernet/fealnx.c b/drivers/net/ethernet/fealnx.c
+index 0f141c14d72df..a417f0c072e9b 100644
+--- a/drivers/net/ethernet/fealnx.c
++++ b/drivers/net/ethernet/fealnx.c
+@@ -857,7 +857,7 @@ static int netdev_open(struct net_device *dev)
+ 	np->bcrvalue |= 0x04;	/* big-endian */
+ #endif
+ 
+-#if defined(__i386__) && !defined(MODULE)
++#if defined(__i386__) && !defined(MODULE) && !defined(CONFIG_UML)
+ 	if (boot_cpu_data.x86 <= 4)
+ 		np->crvalue = 0xa00;
+ 	else
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 4577226d3c6ad..0536d2c76fbc4 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -486,14 +486,16 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	data_size = sizeof(struct streamid_data);
+ 	si_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
++	if (!si_data)
++		return -ENOMEM;
+ 	cbd.length = cpu_to_le16(data_size);
+ 
+ 	dma = dma_map_single(&priv->si->pdev->dev, si_data,
+ 			     data_size, DMA_FROM_DEVICE);
+ 	if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+ 		netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+-		kfree(si_data);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto out;
+ 	}
+ 
+ 	cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+@@ -512,12 +514,10 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
+ 	if (err)
+-		return -EINVAL;
++		goto out;
+ 
+-	if (!enable) {
+-		kfree(si_data);
+-		return 0;
+-	}
++	if (!enable)
++		goto out;
+ 
+ 	/* Enable the entry overwrite again incase space flushed by hardware */
+ 	memset(&cbd, 0, sizeof(cbd));
+@@ -560,6 +560,10 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ 	}
+ 
+ 	err = enetc_send_cmd(priv->si, &cbd);
++out:
++	if (!dma_mapping_error(&priv->si->pdev->dev, dma))
++		dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_FROM_DEVICE);
++
+ 	kfree(si_data);
+ 
+ 	return err;
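
The enetc rework above funnels every exit through one cleanup path so the coherent buffer is unmapped and freed exactly once, whether the allocation, the mapping or the command fails. A condensed sketch of that control flow, with a hypothetical send_cmd() callback in place of the hardware op:

	#include <linux/dma-mapping.h>
	#include <linux/slab.h>

	static int foo_do_cmd(struct device *dev, size_t size,
			      int (*send_cmd)(dma_addr_t, size_t))
	{
		dma_addr_t dma;
		void *buf;
		int err;

		buf = kzalloc(size, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		dma = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, dma)) {
			err = -ENOMEM;
			goto out_free;	/* nothing mapped, only free */
		}

		err = send_cmd(dma, size);	/* hypothetical hardware op */

		dma_unmap_single(dev, dma, size, DMA_FROM_DEVICE);
	out_free:
		kfree(buf);
		return err;
	}
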
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 92dc18a4bcc41..c1d4042671f9f 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -30,7 +30,7 @@
+ #define GVE_MIN_MSIX 3
+ 
+ /* Numbers of gve tx/rx stats in stats report. */
+-#define GVE_TX_STATS_REPORT_NUM	5
++#define GVE_TX_STATS_REPORT_NUM	6
+ #define GVE_RX_STATS_REPORT_NUM	2
+ 
+ /* Interval to schedule a stats report update, 20000ms. */
+@@ -224,11 +224,6 @@ struct gve_tx_iovec {
+ 	u32 iov_padding; /* padding associated with this segment */
+ };
+ 
+-struct gve_tx_dma_buf {
+-	DEFINE_DMA_UNMAP_ADDR(dma);
+-	DEFINE_DMA_UNMAP_LEN(len);
+-};
+-
+ /* Tracks the memory in the fifo occupied by the skb. Mapped 1:1 to a desc
+  * ring entry but only used for a pkt_desc not a seg_desc
+  */
+@@ -236,7 +231,10 @@ struct gve_tx_buffer_state {
+ 	struct sk_buff *skb; /* skb for this pkt */
+ 	union {
+ 		struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
+-		struct gve_tx_dma_buf buf;
++		struct {
++			DEFINE_DMA_UNMAP_ADDR(dma);
++			DEFINE_DMA_UNMAP_LEN(len);
++		};
+ 	};
+ };
+ 
+@@ -280,7 +278,8 @@ struct gve_tx_pending_packet_dqo {
+ 	 * All others correspond to `skb`'s frags and should be unmapped with
+ 	 * `dma_unmap_page`.
+ 	 */
+-	struct gve_tx_dma_buf bufs[MAX_SKB_FRAGS + 1];
++	DEFINE_DMA_UNMAP_ADDR(dma[MAX_SKB_FRAGS + 1]);
++	DEFINE_DMA_UNMAP_LEN(len[MAX_SKB_FRAGS + 1]);
+ 	u16 num_bufs;
+ 
+ 	/* Linked list index to next element in the list, or -1 if none */
+@@ -414,7 +413,9 @@ struct gve_tx_ring {
+ 	u32 q_num ____cacheline_aligned; /* queue idx */
+ 	u32 stop_queue; /* count of queue stops */
+ 	u32 wake_queue; /* count of queue wakes */
++	u32 queue_timeout; /* count of queue timeouts */
+ 	u32 ntfy_id; /* notification block index */
++	u32 last_kick_msec; /* Last time the queue was kicked */
+ 	dma_addr_t bus; /* dma address of the descr ring */
+ 	dma_addr_t q_resources_bus; /* dma address of the queue resources */
+ 	dma_addr_t complq_bus_dqo; /* dma address of the dqo.compl_ring */
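
The gve.h change above drops the wrapper struct and embeds DEFINE_DMA_UNMAP_ADDR()/DEFINE_DMA_UNMAP_LEN() directly, so the bookkeeping fields compile away on configurations that do not need DMA unmap state. A small sketch of how the macros pair with their accessors, using hypothetical names:

	#include <linux/dma-mapping.h>

	struct foo_buf_state {
		DEFINE_DMA_UNMAP_ADDR(dma);	/* may be empty on some configs */
		DEFINE_DMA_UNMAP_LEN(len);
	};

	static int foo_map(struct device *dev, struct foo_buf_state *state,
			   void *data, size_t len)
	{
		dma_addr_t addr = dma_map_single(dev, data, len, DMA_TO_DEVICE);

		if (dma_mapping_error(dev, addr))
			return -ENOMEM;
		/* Always go through the accessors; they turn into no-ops
		 * when the fields were compiled out. */
		dma_unmap_addr_set(state, dma, addr);
		dma_unmap_len_set(state, len, len);
		return 0;
	}

	static void foo_unmap(struct device *dev, struct foo_buf_state *state)
	{
		dma_unmap_single(dev, dma_unmap_addr(state, dma),
				 dma_unmap_len(state, len), DMA_TO_DEVICE);
	}
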
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
+index 47c3d8f313fcf..3953f6f7a4273 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.h
++++ b/drivers/net/ethernet/google/gve/gve_adminq.h
+@@ -270,6 +270,7 @@ enum gve_stat_names {
+ 	TX_LAST_COMPLETION_PROCESSED	= 5,
+ 	RX_NEXT_EXPECTED_SEQUENCE	= 6,
+ 	RX_BUFFERS_POSTED		= 7,
++	TX_TIMEOUT_CNT			= 8,
+ 	// stats from NIC
+ 	RX_QUEUE_DROP_CNT		= 65,
+ 	RX_NO_BUFFERS_POSTED		= 66,
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index bf8a4a7c43f78..959352fceead7 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -24,6 +24,9 @@
+ #define GVE_VERSION		"1.0.0"
+ #define GVE_VERSION_PREFIX	"GVE-"
+ 
++// Minimum amount of time between queue kicks in msec (10 seconds)
++#define MIN_TX_TIMEOUT_GAP (1000 * 10)
++
+ const char gve_version_str[] = GVE_VERSION;
+ static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
+ 
+@@ -1116,9 +1119,47 @@ static void gve_turnup(struct gve_priv *priv)
+ 
+ static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ {
+-	struct gve_priv *priv = netdev_priv(dev);
++	struct gve_notify_block *block;
++	struct gve_tx_ring *tx = NULL;
++	struct gve_priv *priv;
++	u32 last_nic_done;
++	u32 current_time;
++	u32 ntfy_idx;
++
++	netdev_info(dev, "Timeout on tx queue, %d", txqueue);
++	priv = netdev_priv(dev);
++	if (txqueue > priv->tx_cfg.num_queues)
++		goto reset;
++
++	ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
++	if (ntfy_idx >= priv->num_ntfy_blks)
++		goto reset;
++
++	block = &priv->ntfy_blocks[ntfy_idx];
++	tx = block->tx;
+ 
++	current_time = jiffies_to_msecs(jiffies);
++	if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
++		goto reset;
++
++	/* Check to see if there are missed completions, which will allow us to
++	 * kick the queue.
++	 */
++	last_nic_done = gve_tx_load_event_counter(priv, tx);
++	if (last_nic_done - tx->done) {
++		netdev_info(dev, "Kicking queue %d", txqueue);
++		iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
++		napi_schedule(&block->napi);
++		tx->last_kick_msec = current_time;
++		goto out;
++	} // Else reset.
++
++reset:
+ 	gve_schedule_reset(priv);
++
++out:
++	if (tx)
++		tx->queue_timeout++;
+ 	priv->tx_timeo_cnt++;
+ }
+ 
+@@ -1247,6 +1288,11 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ 				.value = cpu_to_be64(last_completion),
+ 				.queue_id = cpu_to_be32(idx),
+ 			};
++			stats[stats_idx++] = (struct stats) {
++				.stat_name = cpu_to_be32(TX_TIMEOUT_CNT),
++				.value = cpu_to_be64(priv->tx[idx].queue_timeout),
++				.queue_id = cpu_to_be32(idx),
++			};
+ 		}
+ 	}
+ 	/* rx stats */
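
The new gve_tx_timeout() above escalates in two stages: try to kick the stalled queue first, but at most once per MIN_TX_TIMEOUT_GAP, and fall back to a full device reset otherwise. A standalone sketch of that rate-limited escalation, with printfs standing in for the doorbell write and the reset:

	#include <stdbool.h>
	#include <stdio.h>

	#define MIN_TX_TIMEOUT_GAP (1000 * 10)	/* msec, as in the patch */

	struct queue {
		unsigned int last_kick_msec;
		bool missed_completions;	/* hypothetical condition */
	};

	static void handle_timeout(struct queue *q, unsigned int now_msec)
	{
		if (q->last_kick_msec + MIN_TX_TIMEOUT_GAP > now_msec ||
		    !q->missed_completions) {
			puts("resetting device");	/* reset stand-in */
			return;
		}
		puts("kicking queue");			/* doorbell stand-in */
		q->last_kick_msec = now_msec;
	}

	int main(void)
	{
		struct queue q = { 0, true };

		handle_timeout(&q, 20000);	/* kicks the queue */
		handle_timeout(&q, 25000);	/* inside the gap: resets */
		return 0;
	}
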
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index 94941d4e47449..16169f291ad9f 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -514,8 +514,13 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
+ 
+ 				gve_rx_free_buffer(dev, page_info, data_slot);
+ 				page_info->page = NULL;
+-				if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot))
++				if (gve_rx_alloc_buffer(priv, dev, page_info,
++							data_slot)) {
++					u64_stats_update_begin(&rx->statss);
++					rx->rx_buf_alloc_fail++;
++					u64_stats_update_end(&rx->statss);
+ 					break;
++				}
+ 			}
+ 		}
+ 		fill_cnt++;
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index 665ac795a1adf..9922ce46a6351 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -303,15 +303,15 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx,
+ static void gve_tx_unmap_buf(struct device *dev, struct gve_tx_buffer_state *info)
+ {
+ 	if (info->skb) {
+-		dma_unmap_single(dev, dma_unmap_addr(&info->buf, dma),
+-				 dma_unmap_len(&info->buf, len),
++		dma_unmap_single(dev, dma_unmap_addr(info, dma),
++				 dma_unmap_len(info, len),
+ 				 DMA_TO_DEVICE);
+-		dma_unmap_len_set(&info->buf, len, 0);
++		dma_unmap_len_set(info, len, 0);
+ 	} else {
+-		dma_unmap_page(dev, dma_unmap_addr(&info->buf, dma),
+-			       dma_unmap_len(&info->buf, len),
++		dma_unmap_page(dev, dma_unmap_addr(info, dma),
++			       dma_unmap_len(info, len),
+ 			       DMA_TO_DEVICE);
+-		dma_unmap_len_set(&info->buf, len, 0);
++		dma_unmap_len_set(info, len, 0);
+ 	}
+ }
+ 
+@@ -491,7 +491,6 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
+ 	struct gve_tx_buffer_state *info;
+ 	bool is_gso = skb_is_gso(skb);
+ 	u32 idx = tx->req & tx->mask;
+-	struct gve_tx_dma_buf *buf;
+ 	u64 addr;
+ 	u32 len;
+ 	int i;
+@@ -515,9 +514,8 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
+ 		tx->dma_mapping_error++;
+ 		goto drop;
+ 	}
+-	buf = &info->buf;
+-	dma_unmap_len_set(buf, len, len);
+-	dma_unmap_addr_set(buf, dma, addr);
++	dma_unmap_len_set(info, len, len);
++	dma_unmap_addr_set(info, dma, addr);
+ 
+ 	payload_nfrags = shinfo->nr_frags;
+ 	if (hlen < len) {
+@@ -549,10 +547,9 @@ static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx,
+ 			tx->dma_mapping_error++;
+ 			goto unmap_drop;
+ 		}
+-		buf = &tx->info[idx].buf;
+ 		tx->info[idx].skb = NULL;
+-		dma_unmap_len_set(buf, len, len);
+-		dma_unmap_addr_set(buf, dma, addr);
++		dma_unmap_len_set(&tx->info[idx], len, len);
++		dma_unmap_addr_set(&tx->info[idx], dma, addr);
+ 
+ 		gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr);
+ 	}
+diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+index 05ddb6a75c38f..ec394d9916681 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+@@ -85,18 +85,16 @@ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
+ 		int j;
+ 
+ 		for (j = 0; j < cur_state->num_bufs; j++) {
+-			struct gve_tx_dma_buf *buf = &cur_state->bufs[j];
+-
+ 			if (j == 0) {
+ 				dma_unmap_single(tx->dev,
+-						 dma_unmap_addr(buf, dma),
+-						 dma_unmap_len(buf, len),
+-						 DMA_TO_DEVICE);
++					dma_unmap_addr(cur_state, dma[j]),
++					dma_unmap_len(cur_state, len[j]),
++					DMA_TO_DEVICE);
+ 			} else {
+ 				dma_unmap_page(tx->dev,
+-					       dma_unmap_addr(buf, dma),
+-					       dma_unmap_len(buf, len),
+-					       DMA_TO_DEVICE);
++					dma_unmap_addr(cur_state, dma[j]),
++					dma_unmap_len(cur_state, len[j]),
++					DMA_TO_DEVICE);
+ 			}
+ 		}
+ 		if (cur_state->skb) {
+@@ -457,15 +455,15 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 	const bool is_gso = skb_is_gso(skb);
+ 	u32 desc_idx = tx->dqo_tx.tail;
+ 
+-	struct gve_tx_pending_packet_dqo *pending_packet;
++	struct gve_tx_pending_packet_dqo *pkt;
+ 	struct gve_tx_metadata_dqo metadata;
+ 	s16 completion_tag;
+ 	int i;
+ 
+-	pending_packet = gve_alloc_pending_packet(tx);
+-	pending_packet->skb = skb;
+-	pending_packet->num_bufs = 0;
+-	completion_tag = pending_packet - tx->dqo.pending_packets;
++	pkt = gve_alloc_pending_packet(tx);
++	pkt->skb = skb;
++	pkt->num_bufs = 0;
++	completion_tag = pkt - tx->dqo.pending_packets;
+ 
+ 	gve_extract_tx_metadata_dqo(skb, &metadata);
+ 	if (is_gso) {
+@@ -493,8 +491,6 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 
+ 	/* Map the linear portion of skb */
+ 	{
+-		struct gve_tx_dma_buf *buf =
+-			&pending_packet->bufs[pending_packet->num_bufs];
+ 		u32 len = skb_headlen(skb);
+ 		dma_addr_t addr;
+ 
+@@ -502,9 +498,9 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 		if (unlikely(dma_mapping_error(tx->dev, addr)))
+ 			goto err;
+ 
+-		dma_unmap_len_set(buf, len, len);
+-		dma_unmap_addr_set(buf, dma, addr);
+-		++pending_packet->num_bufs;
++		dma_unmap_len_set(pkt, len[pkt->num_bufs], len);
++		dma_unmap_addr_set(pkt, dma[pkt->num_bufs], addr);
++		++pkt->num_bufs;
+ 
+ 		gve_tx_fill_pkt_desc_dqo(tx, &desc_idx, skb, len, addr,
+ 					 completion_tag,
+@@ -512,8 +508,6 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 	}
+ 
+ 	for (i = 0; i < shinfo->nr_frags; i++) {
+-		struct gve_tx_dma_buf *buf =
+-			&pending_packet->bufs[pending_packet->num_bufs];
+ 		const skb_frag_t *frag = &shinfo->frags[i];
+ 		bool is_eop = i == (shinfo->nr_frags - 1);
+ 		u32 len = skb_frag_size(frag);
+@@ -523,9 +517,9 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 		if (unlikely(dma_mapping_error(tx->dev, addr)))
+ 			goto err;
+ 
+-		dma_unmap_len_set(buf, len, len);
+-		dma_unmap_addr_set(buf, dma, addr);
+-		++pending_packet->num_bufs;
++		dma_unmap_len_set(pkt, len[pkt->num_bufs], len);
++		dma_unmap_addr_set(pkt, dma[pkt->num_bufs], addr);
++		++pkt->num_bufs;
+ 
+ 		gve_tx_fill_pkt_desc_dqo(tx, &desc_idx, skb, len, addr,
+ 					 completion_tag, is_eop, is_gso);
+@@ -552,22 +546,23 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
+ 	return 0;
+ 
+ err:
+-	for (i = 0; i < pending_packet->num_bufs; i++) {
+-		struct gve_tx_dma_buf *buf = &pending_packet->bufs[i];
+-
++	for (i = 0; i < pkt->num_bufs; i++) {
+ 		if (i == 0) {
+-			dma_unmap_single(tx->dev, dma_unmap_addr(buf, dma),
+-					 dma_unmap_len(buf, len),
++			dma_unmap_single(tx->dev,
++					 dma_unmap_addr(pkt, dma[i]),
++					 dma_unmap_len(pkt, len[i]),
+ 					 DMA_TO_DEVICE);
+ 		} else {
+-			dma_unmap_page(tx->dev, dma_unmap_addr(buf, dma),
+-				       dma_unmap_len(buf, len), DMA_TO_DEVICE);
++			dma_unmap_page(tx->dev,
++				       dma_unmap_addr(pkt, dma[i]),
++				       dma_unmap_len(pkt, len[i]),
++				       DMA_TO_DEVICE);
+ 		}
+ 	}
+ 
+-	pending_packet->skb = NULL;
+-	pending_packet->num_bufs = 0;
+-	gve_free_pending_packet(tx, pending_packet);
++	pkt->skb = NULL;
++	pkt->num_bufs = 0;
++	gve_free_pending_packet(tx, pkt);
+ 
+ 	return -1;
+ }
+@@ -725,12 +720,12 @@ static void add_to_list(struct gve_tx_ring *tx, struct gve_index_list *list,
+ 
+ static void remove_from_list(struct gve_tx_ring *tx,
+ 			     struct gve_index_list *list,
+-			     struct gve_tx_pending_packet_dqo *pending_packet)
++			     struct gve_tx_pending_packet_dqo *pkt)
+ {
+ 	s16 prev_index, next_index;
+ 
+-	prev_index = pending_packet->prev;
+-	next_index = pending_packet->next;
++	prev_index = pkt->prev;
++	next_index = pkt->next;
+ 
+ 	if (prev_index == -1) {
+ 		/* Node is head */
+@@ -747,21 +742,18 @@ static void remove_from_list(struct gve_tx_ring *tx,
+ }
+ 
+ static void gve_unmap_packet(struct device *dev,
+-			     struct gve_tx_pending_packet_dqo *pending_packet)
++			     struct gve_tx_pending_packet_dqo *pkt)
+ {
+-	struct gve_tx_dma_buf *buf;
+ 	int i;
+ 
+ 	/* SKB linear portion is guaranteed to be mapped */
+-	buf = &pending_packet->bufs[0];
+-	dma_unmap_single(dev, dma_unmap_addr(buf, dma),
+-			 dma_unmap_len(buf, len), DMA_TO_DEVICE);
+-	for (i = 1; i < pending_packet->num_bufs; i++) {
+-		buf = &pending_packet->bufs[i];
+-		dma_unmap_page(dev, dma_unmap_addr(buf, dma),
+-			       dma_unmap_len(buf, len), DMA_TO_DEVICE);
++	dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
++			 dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
++	for (i = 1; i < pkt->num_bufs; i++) {
++		dma_unmap_page(dev, dma_unmap_addr(pkt, dma[i]),
++			       dma_unmap_len(pkt, len[i]), DMA_TO_DEVICE);
+ 	}
+-	pending_packet->num_bufs = 0;
++	pkt->num_bufs = 0;
+ }
+ 
+ /* Completion types and expected behavior:
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index c60d0626062cf..f517cc334ebed 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -125,7 +125,7 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 	if (ret)
+ 		return ret;
+ 
+-	for (i = 0; i < hdev->tc_max; i++) {
++	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		switch (ets->tc_tsa[i]) {
+ 		case IEEE_8021QAZ_TSA_STRICT:
+ 			if (hdev->tm_info.tc_info[i].tc_sch_mode !=
+@@ -265,28 +265,24 @@ err_out:
+ 
+ static int hclge_ieee_getpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ {
+-	u64 requests[HNAE3_MAX_TC], indications[HNAE3_MAX_TC];
+ 	struct hclge_vport *vport = hclge_get_vport(h);
+ 	struct hclge_dev *hdev = vport->back;
+ 	int ret;
+-	u8 i;
+ 
+ 	memset(pfc, 0, sizeof(*pfc));
+ 	pfc->pfc_cap = hdev->pfc_max;
+ 	pfc->pfc_en = hdev->tm_info.pfc_en;
+ 
+-	ret = hclge_pfc_tx_stats_get(hdev, requests);
+-	if (ret)
++	ret = hclge_mac_update_stats(hdev);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"failed to update MAC stats, ret = %d.\n", ret);
+ 		return ret;
++	}
+ 
+-	ret = hclge_pfc_rx_stats_get(hdev, indications);
+-	if (ret)
+-		return ret;
++	hclge_pfc_tx_stats_get(hdev, pfc->requests);
++	hclge_pfc_rx_stats_get(hdev, pfc->indications);
+ 
+-	for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
+-		pfc->requests[i] = requests[i];
+-		pfc->indications[i] = indications[i];
+-	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 721eb4e92f618..a066b9f5ba11c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -25,8 +25,6 @@
+ #include "hnae3.h"
+ 
+ #define HCLGE_NAME			"hclge"
+-#define HCLGE_STATS_READ(p, offset) (*(u64 *)((u8 *)(p) + (offset)))
+-#define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f))
+ 
+ #define HCLGE_BUF_SIZE_UNIT	256U
+ #define HCLGE_BUF_MUL_BY	2
+@@ -547,7 +545,7 @@ static int hclge_mac_query_reg_num(struct hclge_dev *hdev, u32 *desc_num)
+ 	return 0;
+ }
+ 
+-static int hclge_mac_update_stats(struct hclge_dev *hdev)
++int hclge_mac_update_stats(struct hclge_dev *hdev)
+ {
+ 	u32 desc_num;
+ 	int ret;
+@@ -2497,7 +2495,7 @@ static int hclge_init_roce_base_info(struct hclge_vport *vport)
+ 	if (hdev->num_msi < hdev->num_nic_msi + hdev->num_roce_msi)
+ 		return -EINVAL;
+ 
+-	roce->rinfo.base_vector = hdev->roce_base_vector;
++	roce->rinfo.base_vector = hdev->num_nic_msi;
+ 
+ 	roce->rinfo.netdev = nic->kinfo.netdev;
+ 	roce->rinfo.roce_io_base = hdev->hw.io_base;
+@@ -2533,10 +2531,6 @@ static int hclge_init_msi(struct hclge_dev *hdev)
+ 	hdev->num_msi = vectors;
+ 	hdev->num_msi_left = vectors;
+ 
+-	hdev->base_msi_vector = pdev->irq;
+-	hdev->roce_base_vector = hdev->base_msi_vector +
+-				hdev->num_nic_msi;
+-
+ 	hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi,
+ 					   sizeof(u16), GFP_KERNEL);
+ 	if (!hdev->vector_status) {
+@@ -2846,33 +2840,29 @@ static void hclge_mbx_task_schedule(struct hclge_dev *hdev)
+ {
+ 	if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
+ 	    !test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state))
+-		mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
+-				    hclge_wq, &hdev->service_task, 0);
++		mod_delayed_work(hclge_wq, &hdev->service_task, 0);
+ }
+ 
+ static void hclge_reset_task_schedule(struct hclge_dev *hdev)
+ {
+ 	if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
++	    test_bit(HCLGE_STATE_SERVICE_INITED, &hdev->state) &&
+ 	    !test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
+-		mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
+-				    hclge_wq, &hdev->service_task, 0);
++		mod_delayed_work(hclge_wq, &hdev->service_task, 0);
+ }
+ 
+ static void hclge_errhand_task_schedule(struct hclge_dev *hdev)
+ {
+ 	if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
+ 	    !test_and_set_bit(HCLGE_STATE_ERR_SERVICE_SCHED, &hdev->state))
+-		mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
+-				    hclge_wq, &hdev->service_task, 0);
++		mod_delayed_work(hclge_wq, &hdev->service_task, 0);
+ }
+ 
+ void hclge_task_schedule(struct hclge_dev *hdev, unsigned long delay_time)
+ {
+ 	if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
+ 	    !test_bit(HCLGE_STATE_RST_FAIL, &hdev->state))
+-		mod_delayed_work_on(cpumask_first(&hdev->affinity_mask),
+-				    hclge_wq, &hdev->service_task,
+-				    delay_time);
++		mod_delayed_work(hclge_wq, &hdev->service_task, delay_time);
+ }
+ 
+ static int hclge_get_mac_link_status(struct hclge_dev *hdev, int *link_status)
+@@ -3490,33 +3480,14 @@ static void hclge_get_misc_vector(struct hclge_dev *hdev)
+ 	hdev->num_msi_used += 1;
+ }
+ 
+-static void hclge_irq_affinity_notify(struct irq_affinity_notify *notify,
+-				      const cpumask_t *mask)
+-{
+-	struct hclge_dev *hdev = container_of(notify, struct hclge_dev,
+-					      affinity_notify);
+-
+-	cpumask_copy(&hdev->affinity_mask, mask);
+-}
+-
+-static void hclge_irq_affinity_release(struct kref *ref)
+-{
+-}
+-
+ static void hclge_misc_affinity_setup(struct hclge_dev *hdev)
+ {
+ 	irq_set_affinity_hint(hdev->misc_vector.vector_irq,
+ 			      &hdev->affinity_mask);
+-
+-	hdev->affinity_notify.notify = hclge_irq_affinity_notify;
+-	hdev->affinity_notify.release = hclge_irq_affinity_release;
+-	irq_set_affinity_notifier(hdev->misc_vector.vector_irq,
+-				  &hdev->affinity_notify);
+ }
+ 
+ static void hclge_misc_affinity_teardown(struct hclge_dev *hdev)
+ {
+-	irq_set_affinity_notifier(hdev->misc_vector.vector_irq, NULL);
+ 	irq_set_affinity_hint(hdev->misc_vector.vector_irq, NULL);
+ }
+ 
+@@ -13040,7 +13011,7 @@ static int hclge_init(void)
+ {
+ 	pr_info("%s is initializing\n", HCLGE_NAME);
+ 
+-	hclge_wq = alloc_workqueue("%s", 0, 0, HCLGE_NAME);
++	hclge_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGE_NAME);
+ 	if (!hclge_wq) {
+ 		pr_err("%s: failed to create workqueue\n", HCLGE_NAME);
+ 		return -ENOMEM;
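
The scheduling changes above drop the mod_delayed_work_on() pinning to the (possibly stale) affinity mask in favour of plain mod_delayed_work() on a workqueue now created with WQ_UNBOUND, letting the scheduler place the work. A minimal sketch of that combination (names illustrative):

	#include <linux/workqueue.h>

	static struct workqueue_struct *my_wq;
	static struct delayed_work my_task;

	static void my_task_fn(struct work_struct *work) { }

	static int my_init(void)
	{
		my_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, "mydrv");
		if (!my_wq)
			return -ENOMEM;
		INIT_DELAYED_WORK(&my_task, my_task_fn);
		/* (re)arm; an unbound queue runs on whichever CPU the scheduler picks */
		mod_delayed_work(my_wq, &my_task, 0);
		return 0;
	}
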
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index e446b839a3715..3a4d04884cd34 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -822,6 +822,9 @@ struct hclge_vf_vlan_cfg {
+ 		(y) = (_k_ ^ ~_v_) & (_k_); \
+ 	} while (0)
+ 
++#define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f))
++#define HCLGE_STATS_READ(p, offset) (*(u64 *)((u8 *)(p) + (offset)))
++
+ #define HCLGE_MAC_TNL_LOG_SIZE	8
+ #define HCLGE_VPORT_NUM 256
+ struct hclge_dev {
+@@ -874,12 +877,10 @@ struct hclge_dev {
+ 	u16 num_msi;
+ 	u16 num_msi_left;
+ 	u16 num_msi_used;
+-	u32 base_msi_vector;
+ 	u16 *vector_status;
+ 	int *vector_irq;
+ 	u16 num_nic_msi;	/* Num of nic vectors for this PF */
+ 	u16 num_roce_msi;	/* Num of roce vectors for this PF */
+-	int roce_base_vector;
+ 
+ 	unsigned long service_timer_period;
+ 	unsigned long service_timer_previous;
+@@ -942,7 +943,6 @@ struct hclge_dev {
+ 
+ 	/* affinity mask and notify for misc interrupt */
+ 	cpumask_t affinity_mask;
+-	struct irq_affinity_notify affinity_notify;
+ 	struct hclge_ptp *ptp;
+ };
+ 
+@@ -1131,4 +1131,5 @@ void hclge_inform_vf_promisc_info(struct hclge_vport *vport);
+ int hclge_dbg_dump_rst_info(struct hclge_dev *hdev, char *buf, int len);
+ int hclge_push_vf_link_status(struct hclge_vport *vport);
+ int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en);
++int hclge_mac_update_stats(struct hclge_dev *hdev);
+ #endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 124791e4bfeed..e948b6558de59 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -113,50 +113,50 @@ static int hclge_shaper_para_calc(u32 ir, u8 shaper_level,
+ 	return 0;
+ }
+ 
+-static int hclge_pfc_stats_get(struct hclge_dev *hdev,
+-			       enum hclge_opcode_type opcode, u64 *stats)
+-{
+-	struct hclge_desc desc[HCLGE_TM_PFC_PKT_GET_CMD_NUM];
+-	int ret, i, j;
+-
+-	if (!(opcode == HCLGE_OPC_QUERY_PFC_RX_PKT_CNT ||
+-	      opcode == HCLGE_OPC_QUERY_PFC_TX_PKT_CNT))
+-		return -EINVAL;
+-
+-	for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM - 1; i++) {
+-		hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
+-		desc[i].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+-	}
+-
+-	hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
++static const u16 hclge_pfc_tx_stats_offset[] = {
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri0_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri1_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri2_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri3_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri4_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri5_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri6_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_tx_pfc_pri7_pkt_num)
++};
+ 
+-	ret = hclge_cmd_send(&hdev->hw, desc, HCLGE_TM_PFC_PKT_GET_CMD_NUM);
+-	if (ret)
+-		return ret;
++static const u16 hclge_pfc_rx_stats_offset[] = {
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri0_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri1_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri2_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri3_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri4_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri5_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri6_pkt_num),
++	HCLGE_MAC_STATS_FIELD_OFF(mac_rx_pfc_pri7_pkt_num)
++};
+ 
+-	for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM; i++) {
+-		struct hclge_pfc_stats_cmd *pfc_stats =
+-				(struct hclge_pfc_stats_cmd *)desc[i].data;
++static void hclge_pfc_stats_get(struct hclge_dev *hdev, bool tx, u64 *stats)
++{
++	const u16 *offset;
++	int i;
+ 
+-		for (j = 0; j < HCLGE_TM_PFC_NUM_GET_PER_CMD; j++) {
+-			u32 index = i * HCLGE_TM_PFC_PKT_GET_CMD_NUM + j;
++	if (tx)
++		offset = hclge_pfc_tx_stats_offset;
++	else
++		offset = hclge_pfc_rx_stats_offset;
+ 
+-			if (index < HCLGE_MAX_TC_NUM)
+-				stats[index] =
+-					le64_to_cpu(pfc_stats->pkt_num[j]);
+-		}
+-	}
+-	return 0;
++	for (i = 0; i < HCLGE_MAX_TC_NUM; i++)
++		stats[i] = HCLGE_STATS_READ(&hdev->mac_stats, offset[i]);
+ }
+ 
+-int hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats)
++void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats)
+ {
+-	return hclge_pfc_stats_get(hdev, HCLGE_OPC_QUERY_PFC_RX_PKT_CNT, stats);
++	hclge_pfc_stats_get(hdev, false, stats);
+ }
+ 
+-int hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats)
++void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats)
+ {
+-	return hclge_pfc_stats_get(hdev, HCLGE_OPC_QUERY_PFC_TX_PKT_CNT, stats);
++	hclge_pfc_stats_get(hdev, true, stats);
+ }
+ 
+ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx)
+@@ -1123,7 +1123,6 @@ static int hclge_tm_pri_tc_base_dwrr_cfg(struct hclge_dev *hdev)
+ 
+ static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
+ {
+-#define DEFAULT_TC_WEIGHT	1
+ #define DEFAULT_TC_OFFSET	14
+ 
+ 	struct hclge_ets_tc_weight_cmd *ets_weight;
+@@ -1136,13 +1135,7 @@ static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
+ 	for (i = 0; i < HNAE3_MAX_TC; i++) {
+ 		struct hclge_pg_info *pg_info;
+ 
+-		ets_weight->tc_weight[i] = DEFAULT_TC_WEIGHT;
+-
+-		if (!(hdev->hw_tc_map & BIT(i)))
+-			continue;
+-
+-		pg_info =
+-			&hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
++		pg_info = &hdev->tm_info.pg_info[hdev->tm_info.tc_info[i].pgid];
+ 		ets_weight->tc_weight[i] = pg_info->tc_dwrr[i];
+ 	}
+ 
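
The rewritten PFC statistics path reads counters that hclge_mac_update_stats() has already pulled from firmware, indexing struct hclge_mac_stats through per-priority offsetof() tables instead of issuing a second query command. The field-by-offset trick behind HCLGE_STATS_READ(), in generic form:

	#include <linux/stddef.h>
	#include <linux/types.h>

	struct mac_stats {
		u64 tx_pfc_pri0_pkt_num;
		u64 tx_pfc_pri1_pkt_num;
		/* ... one u64 per priority ... */
	};

	static const u16 tx_pfc_offset[] = {
		offsetof(struct mac_stats, tx_pfc_pri0_pkt_num),
		offsetof(struct mac_stats, tx_pfc_pri1_pkt_num),
	};

	/* read a u64 field by byte offset, as HCLGE_STATS_READ() does */
	static u64 stats_read(const struct mac_stats *p, u16 off)
	{
		return *(const u64 *)((const u8 *)p + off);
	}
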
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+index 4b2c3a7889800..72563cd0d1e26 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+@@ -228,8 +228,8 @@ int hclge_tm_dwrr_cfg(struct hclge_dev *hdev);
+ int hclge_tm_init_hw(struct hclge_dev *hdev, bool init);
+ int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx);
+ int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr);
+-int hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
+-int hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats);
++void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
++void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats);
+ int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate);
+ int hclge_tm_get_qset_num(struct hclge_dev *hdev, u16 *qset_num);
+ int hclge_tm_get_pri_num(struct hclge_dev *hdev, u8 *pri_num);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index b8414f684e89d..b2f2056357a18 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2230,6 +2230,7 @@ static void hclgevf_get_misc_vector(struct hclgevf_dev *hdev)
+ void hclgevf_reset_task_schedule(struct hclgevf_dev *hdev)
+ {
+ 	if (!test_bit(HCLGEVF_STATE_REMOVING, &hdev->state) &&
++	    test_bit(HCLGEVF_STATE_SERVICE_INITED, &hdev->state) &&
+ 	    !test_and_set_bit(HCLGEVF_STATE_RST_SERVICE_SCHED,
+ 			      &hdev->state))
+ 		mod_delayed_work(hclgevf_wq, &hdev->service_task, 0);
+@@ -2554,7 +2555,7 @@ static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev)
+ 	    hdev->num_msi_left == 0)
+ 		return -EINVAL;
+ 
+-	roce->rinfo.base_vector = hdev->roce_base_vector;
++	roce->rinfo.base_vector = hdev->roce_base_msix_offset;
+ 
+ 	roce->rinfo.netdev = nic->kinfo.netdev;
+ 	roce->rinfo.roce_io_base = hdev->hw.io_base;
+@@ -2820,9 +2821,6 @@ static int hclgevf_init_msi(struct hclgevf_dev *hdev)
+ 	hdev->num_msi = vectors;
+ 	hdev->num_msi_left = vectors;
+ 
+-	hdev->base_msi_vector = pdev->irq;
+-	hdev->roce_base_vector = pdev->irq + hdev->roce_base_msix_offset;
+-
+ 	hdev->vector_status = devm_kcalloc(&pdev->dev, hdev->num_msi,
+ 					   sizeof(u16), GFP_KERNEL);
+ 	if (!hdev->vector_status) {
+@@ -3010,7 +3008,10 @@ static void hclgevf_uninit_client_instance(struct hnae3_client *client,
+ 
+ 	/* un-init roce, if it exists */
+ 	if (hdev->roce_client) {
++		while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++			msleep(HCLGEVF_WAIT_RESET_DONE);
+ 		clear_bit(HCLGEVF_STATE_ROCE_REGISTERED, &hdev->state);
++
+ 		hdev->roce_client->ops->uninit_instance(&hdev->roce, 0);
+ 		hdev->roce_client = NULL;
+ 		hdev->roce.client = NULL;
+@@ -3019,6 +3020,8 @@ static void hclgevf_uninit_client_instance(struct hnae3_client *client,
+ 	/* un-init nic/unic, if this was not called by roce client */
+ 	if (client->ops->uninit_instance && hdev->nic_client &&
+ 	    client->type != HNAE3_CLIENT_ROCE) {
++		while (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
++			msleep(HCLGEVF_WAIT_RESET_DONE);
+ 		clear_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state);
+ 
+ 		client->ops->uninit_instance(&hdev->nic, 0);
+@@ -3443,6 +3446,8 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 
+ 	hclgevf_init_rxd_adv_layout(hdev);
+ 
++	set_bit(HCLGEVF_STATE_SERVICE_INITED, &hdev->state);
++
+ 	hdev->last_reset_time = jiffies;
+ 	dev_info(&hdev->pdev->dev, "finished initializing %s driver\n",
+ 		 HCLGEVF_DRIVER_NAME);
+@@ -3890,7 +3895,7 @@ static int hclgevf_init(void)
+ {
+ 	pr_info("%s is initializing\n", HCLGEVF_NAME);
+ 
+-	hclgevf_wq = alloc_workqueue("%s", 0, 0, HCLGEVF_NAME);
++	hclgevf_wq = alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGEVF_NAME);
+ 	if (!hclgevf_wq) {
+ 		pr_err("%s: failed to create workqueue\n", HCLGEVF_NAME);
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index e8013be055f89..5809ce2a81014 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -107,6 +107,8 @@
+ #define HCLGEVF_VF_RST_ING		0x07008
+ #define HCLGEVF_VF_RST_ING_BIT		BIT(16)
+ 
++#define HCLGEVF_WAIT_RESET_DONE		100
++
+ #define HCLGEVF_RSS_IND_TBL_SIZE		512
+ #define HCLGEVF_RSS_SET_BITMAP_MSK	0xffff
+ #define HCLGEVF_RSS_KEY_SIZE		40
+@@ -144,6 +146,7 @@ enum hclgevf_states {
+ 	HCLGEVF_STATE_REMOVING,
+ 	HCLGEVF_STATE_NIC_REGISTERED,
+ 	HCLGEVF_STATE_ROCE_REGISTERED,
++	HCLGEVF_STATE_SERVICE_INITED,
+ 	/* task states */
+ 	HCLGEVF_STATE_RST_SERVICE_SCHED,
+ 	HCLGEVF_STATE_RST_HANDLING,
+@@ -305,8 +308,6 @@ struct hclgevf_dev {
+ 	u16 num_nic_msix;	/* Num of nic vectors for this VF */
+ 	u16 num_roce_msix;	/* Num of roce vectors for this VF */
+ 	u16 roce_base_msix_offset;
+-	int roce_base_vector;
+-	u32 base_msi_vector;
+ 	u16 *vector_status;
+ 	int *vector_irq;
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 6aa6ff89a7651..352ffe982d849 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1724,8 +1724,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	ind_bufp = &tx_scrq->ind_buf;
+ 
+ 	if (test_bit(0, &adapter->resetting)) {
+-		if (!netif_subqueue_stopped(netdev, skb))
+-			netif_stop_subqueue(netdev, queue_num);
+ 		dev_kfree_skb_any(skb);
+ 
+ 		tx_send_failed++;
+@@ -2567,7 +2565,7 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
+ 
+ 	if (adapter->state == VNIC_PROBING) {
+ 		netdev_warn(netdev, "Adapter reset during probe\n");
+-		adapter->init_done_rc = EAGAIN;
++		adapter->init_done_rc = -EAGAIN;
+ 		ret = EAGAIN;
+ 		goto err;
+ 	}
+@@ -5069,11 +5067,6 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 			 */
+ 			adapter->login_pending = false;
+ 
+-			if (!completion_done(&adapter->init_done)) {
+-				complete(&adapter->init_done);
+-				adapter->init_done_rc = -EIO;
+-			}
+-
+ 			if (adapter->state == VNIC_DOWN)
+ 				rc = ibmvnic_reset(adapter, VNIC_RESET_PASSIVE_INIT);
+ 			else
+@@ -5094,6 +5087,13 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 					   rc);
+ 				adapter->failover_pending = false;
+ 			}
++
++			if (!completion_done(&adapter->init_done)) {
++				complete(&adapter->init_done);
++				if (!adapter->init_done_rc)
++					adapter->init_done_rc = -EAGAIN;
++			}
++
+ 			break;
+ 		case IBMVNIC_CRQ_INIT_COMPLETE:
+ 			dev_info(dev, "Partner initialization complete\n");
+@@ -5414,6 +5414,9 @@ static int init_crq_queue(struct ibmvnic_adapter *adapter)
+ 	crq->cur = 0;
+ 	spin_lock_init(&crq->lock);
+ 
++	/* process any CRQs that were queued before we enabled interrupts */
++	tasklet_schedule(&adapter->tasklet);
++
+ 	return retrc;
+ 
+ req_irq_failed:
+@@ -5558,7 +5561,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 		}
+ 
+ 		rc = ibmvnic_reset_init(adapter, false);
+-	} while (rc == EAGAIN);
++	} while (rc == -EAGAIN);
+ 
+ 	/* We are ignoring the error from ibmvnic_reset_init() assuming that the
+ 	 * partner is not ready. CRQ is not active. When the partner becomes
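
The ibmvnic hunks above convert a stray positive EAGAIN to the kernel's negative-errno convention, so the probe-time retry loop, which now tests rc == -EAGAIN, actually matches the value stored in init_done_rc. The convention in miniature:

	#include <linux/errno.h>

	/* in-kernel error returns are negative errno values */
	static int try_init(bool partner_ready)
	{
		return partner_ready ? 0 : -EAGAIN;
	}
	/* caller: do { rc = try_init(...); } while (rc == -EAGAIN); */
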
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 3c4f08d20414e..8b23fbf3cdf4c 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -306,10 +306,6 @@ struct ice_vsi {
+ 	spinlock_t arfs_lock;	/* protects aRFS hash table and filter state */
+ 	atomic_t *arfs_last_fltr_id;
+ 
+-	/* devlink port data */
+-	struct devlink_port devlink_port;
+-	bool devlink_port_registered;
+-
+ 	u16 max_frame;
+ 	u16 rx_buf_len;
+ 
+@@ -421,6 +417,9 @@ struct ice_pf {
+ 	struct devlink_region *nvm_region;
+ 	struct devlink_region *devcaps_region;
+ 
++	/* devlink port data */
++	struct devlink_port devlink_port;
++
+ 	/* OS reserved IRQ details */
+ 	struct msix_entry *msix_entries;
+ 	struct ice_res_tracker *irq_tracker;
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index c36057efc7ae3..f74610442bda7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -909,7 +909,7 @@ ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
+ 	} else if (status == ICE_ERR_DOES_NOT_EXIST) {
+ 		dev_dbg(ice_pf_to_dev(vsi->back), "LAN Tx queues do not exist, nothing to disable\n");
+ 	} else if (status) {
+-		dev_err(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
++		dev_dbg(ice_pf_to_dev(vsi->back), "Failed to disable LAN Tx queues, error: %s\n",
+ 			ice_stat_str(status));
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
+index 64bea7659cf7e..44921f380e873 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
+@@ -526,60 +526,115 @@ void ice_devlink_unregister(struct ice_pf *pf)
+ }
+ 
+ /**
+- * ice_devlink_create_port - Create a devlink port for this VSI
+- * @vsi: the VSI to create a port for
++ * ice_devlink_create_pf_port - Create a devlink port for this PF
++ * @pf: the PF to create a devlink port for
+  *
+- * Create and register a devlink_port for this VSI.
++ * Create and register a devlink_port for this PF.
+  *
+  * Return: zero on success or an error code on failure.
+  */
+-int ice_devlink_create_port(struct ice_vsi *vsi)
++int ice_devlink_create_pf_port(struct ice_pf *pf)
+ {
+ 	struct devlink_port_attrs attrs = {};
+-	struct ice_port_info *pi;
++	struct devlink_port *devlink_port;
+ 	struct devlink *devlink;
++	struct ice_vsi *vsi;
+ 	struct device *dev;
+-	struct ice_pf *pf;
+ 	int err;
+ 
+-	/* Currently we only create devlink_port instances for PF VSIs */
+-	if (vsi->type != ICE_VSI_PF)
+-		return -EINVAL;
+-
+-	pf = vsi->back;
+-	devlink = priv_to_devlink(pf);
+ 	dev = ice_pf_to_dev(pf);
+-	pi = pf->hw.port_info;
++
++	devlink_port = &pf->devlink_port;
++
++	vsi = ice_get_main_vsi(pf);
++	if (!vsi)
++		return -EIO;
+ 
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+-	attrs.phys.port_number = pi->lport;
+-	devlink_port_attrs_set(&vsi->devlink_port, &attrs);
+-	err = devlink_port_register(devlink, &vsi->devlink_port, vsi->idx);
++	attrs.phys.port_number = pf->hw.bus.func;
++	devlink_port_attrs_set(devlink_port, &attrs);
++	devlink = priv_to_devlink(pf);
++
++	err = devlink_port_register(devlink, devlink_port, vsi->idx);
+ 	if (err) {
+-		dev_err(dev, "devlink_port_register failed: %d\n", err);
++		dev_err(dev, "Failed to create devlink port for PF %d, error %d\n",
++			pf->hw.pf_id, err);
+ 		return err;
+ 	}
+ 
+-	vsi->devlink_port_registered = true;
++	return 0;
++}
++
++/**
++ * ice_devlink_destroy_pf_port - Destroy the devlink_port for this PF
++ * @pf: the PF to cleanup
++ *
++ * Unregisters the devlink_port structure associated with this PF.
++ */
++void ice_devlink_destroy_pf_port(struct ice_pf *pf)
++{
++	struct devlink_port *devlink_port;
++
++	devlink_port = &pf->devlink_port;
++
++	devlink_port_type_clear(devlink_port);
++	devlink_port_unregister(devlink_port);
++}
++
++/**
++ * ice_devlink_create_vf_port - Create a devlink port for this VF
++ * @vf: the VF to create a port for
++ *
++ * Create and register a devlink_port for this VF.
++ *
++ * Return: zero on success or an error code on failure.
++ */
++int ice_devlink_create_vf_port(struct ice_vf *vf)
++{
++	struct devlink_port_attrs attrs = {};
++	struct devlink_port *devlink_port;
++	struct devlink *devlink;
++	struct ice_vsi *vsi;
++	struct device *dev;
++	struct ice_pf *pf;
++	int err;
++
++	pf = vf->pf;
++	dev = ice_pf_to_dev(pf);
++	vsi = ice_get_vf_vsi(vf);
++	devlink_port = &vf->devlink_port;
++
++	attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
++	attrs.pci_vf.pf = pf->hw.bus.func;
++	attrs.pci_vf.vf = vf->vf_id;
++
++	devlink_port_attrs_set(devlink_port, &attrs);
++	devlink = priv_to_devlink(pf);
++
++	err = devlink_port_register(devlink, devlink_port, vsi->idx);
++	if (err) {
++		dev_err(dev, "Failed to create devlink port for VF %d, error %d\n",
++			vf->vf_id, err);
++		return err;
++	}
+ 
+ 	return 0;
+ }
+ 
+ /**
+- * ice_devlink_destroy_port - Destroy the devlink_port for this VSI
+- * @vsi: the VSI to cleanup
++ * ice_devlink_destroy_vf_port - Destroy the devlink_port for this VF
++ * @vf: the VF to cleanup
+  *
+- * Unregisters the devlink_port structure associated with this VSI.
++ * Unregisters the devlink_port structure associated with this VF.
+  */
+-void ice_devlink_destroy_port(struct ice_vsi *vsi)
++void ice_devlink_destroy_vf_port(struct ice_vf *vf)
+ {
+-	if (!vsi->devlink_port_registered)
+-		return;
++	struct devlink_port *devlink_port;
+ 
+-	devlink_port_type_clear(&vsi->devlink_port);
+-	devlink_port_unregister(&vsi->devlink_port);
++	devlink_port = &vf->devlink_port;
+ 
+-	vsi->devlink_port_registered = false;
++	devlink_port_type_clear(devlink_port);
++	devlink_port_unregister(devlink_port);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.h b/drivers/net/ethernet/intel/ice/ice_devlink.h
+index e07e74426bde8..e30284ccbed4c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.h
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.h
+@@ -8,8 +8,10 @@ struct ice_pf *ice_allocate_pf(struct device *dev);
+ 
+ int ice_devlink_register(struct ice_pf *pf);
+ void ice_devlink_unregister(struct ice_pf *pf);
+-int ice_devlink_create_port(struct ice_vsi *vsi);
+-void ice_devlink_destroy_port(struct ice_vsi *vsi);
++int ice_devlink_create_pf_port(struct ice_pf *pf);
++void ice_devlink_destroy_pf_port(struct ice_pf *pf);
++int ice_devlink_create_vf_port(struct ice_vf *vf);
++void ice_devlink_destroy_vf_port(struct ice_vf *vf);
+ 
+ void ice_devlink_init_regions(struct ice_pf *pf);
+ void ice_devlink_destroy_regions(struct ice_pf *pf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index b718e196af2a4..e47920fe73b88 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2860,7 +2860,8 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ 		clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
+ 	}
+ 
+-	ice_devlink_destroy_port(vsi);
++	if (vsi->type == ICE_VSI_PF)
++		ice_devlink_destroy_pf_port(pf);
+ 
+ 	if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
+ 		ice_rss_clean(vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index ed087fde20668..2a20881d07e84 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4170,11 +4170,11 @@ static int ice_register_netdev(struct ice_pf *pf)
+ 	set_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state);
+ 	netif_carrier_off(vsi->netdev);
+ 	netif_tx_stop_all_queues(vsi->netdev);
+-	err = ice_devlink_create_port(vsi);
++	err = ice_devlink_create_pf_port(pf);
+ 	if (err)
+ 		goto err_devlink_create;
+ 
+-	devlink_port_type_eth_set(&vsi->devlink_port, vsi->netdev);
++	devlink_port_type_eth_set(&pf->devlink_port, vsi->netdev);
+ 
+ 	return 0;
+ err_devlink_create:
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index e93430ab37f1e..7e3ae4cc17a39 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -251,7 +251,7 @@ ice_vc_hash_field_match_type ice_vc_hash_field_list_comms[] = {
+  * ice_get_vf_vsi - get VF's VSI based on the stored index
+  * @vf: VF used to get VSI
+  */
+-static struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf)
++struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf)
+ {
+ 	return vf->pf->vsi[vf->lan_vsi_idx];
+ }
+@@ -634,8 +634,7 @@ void ice_free_vfs(struct ice_pf *pf)
+ 
+ 	/* Avoid wait time by stopping all VFs at the same time */
+ 	ice_for_each_vf(pf, i)
+-		if (test_bit(ICE_VF_STATE_QS_ENA, pf->vf[i].vf_states))
+-			ice_dis_vf_qs(&pf->vf[i]);
++		ice_dis_vf_qs(&pf->vf[i]);
+ 
+ 	tmp = pf->num_alloc_vfs;
+ 	pf->num_qps_per_vf = 0;
+@@ -1645,8 +1644,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
+ 
+ 	vsi = ice_get_vf_vsi(vf);
+ 
+-	if (test_bit(ICE_VF_STATE_QS_ENA, vf->vf_states))
+-		ice_dis_vf_qs(vf);
++	ice_dis_vf_qs(vf);
+ 
+ 	/* Call Disable LAN Tx queue AQ whether or not queues are
+ 	 * enabled. This is needed for successful completion of VFR.
+@@ -3762,6 +3760,7 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi,
+ 	struct device *dev = ice_pf_to_dev(vf->pf);
+ 	u8 *mac_addr = vc_ether_addr->addr;
+ 	enum ice_status status;
++	int ret = 0;
+ 
+ 	/* device MAC already added */
+ 	if (ether_addr_equal(mac_addr, vf->dev_lan_addr.addr))
+@@ -3774,20 +3773,23 @@ ice_vc_add_mac_addr(struct ice_vf *vf, struct ice_vsi *vsi,
+ 
+ 	status = ice_fltr_add_mac(vsi, mac_addr, ICE_FWD_TO_VSI);
+ 	if (status == ICE_ERR_ALREADY_EXISTS) {
+-		dev_err(dev, "MAC %pM already exists for VF %d\n", mac_addr,
++		dev_dbg(dev, "MAC %pM already exists for VF %d\n", mac_addr,
+ 			vf->vf_id);
+-		return -EEXIST;
++		/* don't return since we might need to update
++		 * the primary MAC in ice_vfhw_mac_add() below
++		 */
++		ret = -EEXIST;
+ 	} else if (status) {
+ 		dev_err(dev, "Failed to add MAC %pM for VF %d\n, error %s\n",
+ 			mac_addr, vf->vf_id, ice_stat_str(status));
+ 		return -EIO;
++	} else {
++		vf->num_mac++;
+ 	}
+ 
+ 	ice_vfhw_mac_add(vf, vc_ether_addr);
+ 
+-	vf->num_mac++;
+-
+-	return 0;
++	return ret;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+index 842cb077df861..38b4dc82c5c18 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+@@ -111,9 +111,13 @@ struct ice_vf {
+ 	struct ice_mdd_vf_events mdd_rx_events;
+ 	struct ice_mdd_vf_events mdd_tx_events;
+ 	DECLARE_BITMAP(opcodes_allowlist, VIRTCHNL_OP_MAX);
++
++	/* devlink port data */
++	struct devlink_port devlink_port;
+ };
+ 
+ #ifdef CONFIG_PCI_IOV
++struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf);
+ void ice_process_vflr_event(struct ice_pf *pf);
+ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs);
+ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac);
+@@ -171,6 +175,11 @@ static inline void ice_print_vfs_mdd_events(struct ice_pf *pf) { }
+ static inline void ice_print_vf_rx_mdd_event(struct ice_vf *vf) { }
+ static inline void ice_restore_all_vfs_msi_state(struct pci_dev *pdev) { }
+ 
++static inline struct ice_vsi *ice_get_vf_vsi(struct ice_vf *vf)
++{
++	return NULL;
++}
++
+ static inline bool
+ ice_is_malicious_vf(struct ice_pf __always_unused *pf,
+ 		    struct ice_rq_event_info __always_unused *event,
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 3229bafa2a2c7..3e673e40e878e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1605,7 +1605,7 @@ static void mvpp22_gop_fca_set_periodic_timer(struct mvpp2_port *port)
+ 	mvpp22_gop_fca_enable_periodic(port, true);
+ }
+ 
+-static int mvpp22_gop_init(struct mvpp2_port *port)
++static int mvpp22_gop_init(struct mvpp2_port *port, phy_interface_t interface)
+ {
+ 	struct mvpp2 *priv = port->priv;
+ 	u32 val;
+@@ -1613,7 +1613,7 @@ static int mvpp22_gop_init(struct mvpp2_port *port)
+ 	if (!priv->sysctrl_base)
+ 		return 0;
+ 
+-	switch (port->phy_interface) {
++	switch (interface) {
+ 	case PHY_INTERFACE_MODE_RGMII:
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+ 	case PHY_INTERFACE_MODE_RGMII_RXID:
+@@ -1743,15 +1743,15 @@ static void mvpp22_gop_setup_irq(struct mvpp2_port *port)
+  * lanes by the physical layer. This is why configurations like
+  * "PPv2 (2500BaseX) - COMPHY (2500SGMII)" are valid.
+  */
+-static int mvpp22_comphy_init(struct mvpp2_port *port)
++static int mvpp22_comphy_init(struct mvpp2_port *port,
++			      phy_interface_t interface)
+ {
+ 	int ret;
+ 
+ 	if (!port->comphy)
+ 		return 0;
+ 
+-	ret = phy_set_mode_ext(port->comphy, PHY_MODE_ETHERNET,
+-			       port->phy_interface);
++	ret = phy_set_mode_ext(port->comphy, PHY_MODE_ETHERNET, interface);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2172,7 +2172,8 @@ static void mvpp22_pcs_reset_assert(struct mvpp2_port *port)
+ 	writel(val & ~MVPP22_XPCS_CFG0_RESET_DIS, xpcs + MVPP22_XPCS_CFG0);
+ }
+ 
+-static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port)
++static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port,
++				      phy_interface_t interface)
+ {
+ 	struct mvpp2 *priv = port->priv;
+ 	void __iomem *mpcs, *xpcs;
+@@ -2184,7 +2185,7 @@ static void mvpp22_pcs_reset_deassert(struct mvpp2_port *port)
+ 	mpcs = priv->iface_base + MVPP22_MPCS_BASE(port->gop_id);
+ 	xpcs = priv->iface_base + MVPP22_XPCS_BASE(port->gop_id);
+ 
+-	switch (port->phy_interface) {
++	switch (interface) {
+ 	case PHY_INTERFACE_MODE_10GBASER:
+ 		val = readl(mpcs + MVPP22_MPCS_CLK_RESET);
+ 		val |= MAC_CLK_RESET_MAC | MAC_CLK_RESET_SD_RX |
+@@ -4529,7 +4530,8 @@ static int mvpp2_poll(struct napi_struct *napi, int budget)
+ 	return rx_done;
+ }
+ 
+-static void mvpp22_mode_reconfigure(struct mvpp2_port *port)
++static void mvpp22_mode_reconfigure(struct mvpp2_port *port,
++				    phy_interface_t interface)
+ {
+ 	u32 ctrl3;
+ 
+@@ -4540,18 +4542,18 @@ static void mvpp22_mode_reconfigure(struct mvpp2_port *port)
+ 	mvpp22_pcs_reset_assert(port);
+ 
+ 	/* comphy reconfiguration */
+-	mvpp22_comphy_init(port);
++	mvpp22_comphy_init(port, interface);
+ 
+ 	/* gop reconfiguration */
+-	mvpp22_gop_init(port);
++	mvpp22_gop_init(port, interface);
+ 
+-	mvpp22_pcs_reset_deassert(port);
++	mvpp22_pcs_reset_deassert(port, interface);
+ 
+ 	if (mvpp2_port_supports_xlg(port)) {
+ 		ctrl3 = readl(port->base + MVPP22_XLG_CTRL3_REG);
+ 		ctrl3 &= ~MVPP22_XLG_CTRL3_MACMODESELECT_MASK;
+ 
+-		if (mvpp2_is_xlg(port->phy_interface))
++		if (mvpp2_is_xlg(interface))
+ 			ctrl3 |= MVPP22_XLG_CTRL3_MACMODESELECT_10G;
+ 		else
+ 			ctrl3 |= MVPP22_XLG_CTRL3_MACMODESELECT_GMAC;
+@@ -4559,7 +4561,7 @@ static void mvpp22_mode_reconfigure(struct mvpp2_port *port)
+ 		writel(ctrl3, port->base + MVPP22_XLG_CTRL3_REG);
+ 	}
+ 
+-	if (mvpp2_port_supports_xlg(port) && mvpp2_is_xlg(port->phy_interface))
++	if (mvpp2_port_supports_xlg(port) && mvpp2_is_xlg(interface))
+ 		mvpp2_xlg_max_rx_size_set(port);
+ 	else
+ 		mvpp2_gmac_max_rx_size_set(port);
+@@ -4579,7 +4581,7 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 	mvpp2_interrupts_enable(port);
+ 
+ 	if (port->priv->hw_version >= MVPP22)
+-		mvpp22_mode_reconfigure(port);
++		mvpp22_mode_reconfigure(port, port->phy_interface);
+ 
+ 	if (port->phylink) {
+ 		phylink_start(port->phylink);
+@@ -6462,6 +6464,9 @@ static int mvpp2__mac_prepare(struct phylink_config *config, unsigned int mode,
+ 			mvpp22_gop_mask_irq(port);
+ 
+ 			phy_power_off(port->comphy);
++
++			/* Reconfigure the serdes lanes */
++			mvpp22_mode_reconfigure(port, interface);
+ 		}
+ 	}
+ 
+@@ -6516,9 +6521,6 @@ static int mvpp2_mac_finish(struct phylink_config *config, unsigned int mode,
+ 	    port->phy_interface != interface) {
+ 		port->phy_interface = interface;
+ 
+-		/* Reconfigure the serdes lanes */
+-		mvpp22_mode_reconfigure(port);
+-
+ 		/* Unmask interrupts */
+ 		mvpp22_gop_unmask_irq(port);
+ 	}
+@@ -6945,7 +6947,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
+ 	 * driver does this, we can remove this code.
+ 	 */
+ 	if (port->comphy) {
+-		err = mvpp22_comphy_init(port);
++		err = mvpp22_comphy_init(port, port->phy_interface);
+ 		if (err == 0)
+ 			phy_power_off(port->comphy);
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 2c24944a4dba2..105b32221d91b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1496,6 +1496,44 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
+ 	mutex_unlock(&mbox->lock);
+ }
+ 
++static void otx2_do_set_rx_mode(struct otx2_nic *pf)
++{
++	struct net_device *netdev = pf->netdev;
++	struct nix_rx_mode *req;
++	bool promisc = false;
++
++	if (!(netdev->flags & IFF_UP))
++		return;
++
++	if ((netdev->flags & IFF_PROMISC) ||
++	    (netdev_uc_count(netdev) > OTX2_MAX_UNICAST_FLOWS)) {
++		promisc = true;
++	}
++
++	/* Write unicast address to mcam entries or del from mcam */
++	if (!promisc && netdev->priv_flags & IFF_UNICAST_FLT)
++		__dev_uc_sync(netdev, otx2_add_macfilter, otx2_del_macfilter);
++
++	mutex_lock(&pf->mbox.lock);
++	req = otx2_mbox_alloc_msg_nix_set_rx_mode(&pf->mbox);
++	if (!req) {
++		mutex_unlock(&pf->mbox.lock);
++		return;
++	}
++
++	req->mode = NIX_RX_MODE_UCAST;
++
++	if (promisc)
++		req->mode |= NIX_RX_MODE_PROMISC;
++	if (netdev->flags & (IFF_ALLMULTI | IFF_MULTICAST))
++		req->mode |= NIX_RX_MODE_ALLMULTI;
++
++	req->mode |= NIX_RX_MODE_USE_MCE;
++
++	otx2_sync_mbox_msg(&pf->mbox);
++	mutex_unlock(&pf->mbox.lock);
++}
++
+ int otx2_open(struct net_device *netdev)
+ {
+ 	struct otx2_nic *pf = netdev_priv(netdev);
+@@ -1657,6 +1695,8 @@ int otx2_open(struct net_device *netdev)
+ 	if (err)
+ 		goto err_tx_stop_queues;
+ 
++	otx2_do_set_rx_mode(pf);
++
+ 	return 0;
+ 
+ err_tx_stop_queues:
+@@ -1809,43 +1849,11 @@ static void otx2_set_rx_mode(struct net_device *netdev)
+ 	queue_work(pf->otx2_wq, &pf->rx_mode_work);
+ }
+ 
+-static void otx2_do_set_rx_mode(struct work_struct *work)
++static void otx2_rx_mode_wrk_handler(struct work_struct *work)
+ {
+ 	struct otx2_nic *pf = container_of(work, struct otx2_nic, rx_mode_work);
+-	struct net_device *netdev = pf->netdev;
+-	struct nix_rx_mode *req;
+-	bool promisc = false;
+-
+-	if (!(netdev->flags & IFF_UP))
+-		return;
+-
+-	if ((netdev->flags & IFF_PROMISC) ||
+-	    (netdev_uc_count(netdev) > OTX2_MAX_UNICAST_FLOWS)) {
+-		promisc = true;
+-	}
+ 
+-	/* Write unicast address to mcam entries or del from mcam */
+-	if (!promisc && netdev->priv_flags & IFF_UNICAST_FLT)
+-		__dev_uc_sync(netdev, otx2_add_macfilter, otx2_del_macfilter);
+-
+-	mutex_lock(&pf->mbox.lock);
+-	req = otx2_mbox_alloc_msg_nix_set_rx_mode(&pf->mbox);
+-	if (!req) {
+-		mutex_unlock(&pf->mbox.lock);
+-		return;
+-	}
+-
+-	req->mode = NIX_RX_MODE_UCAST;
+-
+-	if (promisc)
+-		req->mode |= NIX_RX_MODE_PROMISC;
+-	if (netdev->flags & (IFF_ALLMULTI | IFF_MULTICAST))
+-		req->mode |= NIX_RX_MODE_ALLMULTI;
+-
+-	req->mode |= NIX_RX_MODE_USE_MCE;
+-
+-	otx2_sync_mbox_msg(&pf->mbox);
+-	mutex_unlock(&pf->mbox.lock);
++	otx2_do_set_rx_mode(pf);
+ }
+ 
+ static int otx2_set_features(struct net_device *netdev,
+@@ -2345,7 +2353,7 @@ static int otx2_wq_init(struct otx2_nic *pf)
+ 	if (!pf->otx2_wq)
+ 		return -ENOMEM;
+ 
+-	INIT_WORK(&pf->rx_mode_work, otx2_do_set_rx_mode);
++	INIT_WORK(&pf->rx_mode_work, otx2_rx_mode_wrk_handler);
+ 	INIT_WORK(&pf->reset_task, otx2_reset_task);
+ 	return 0;
+ }
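
Hoisting the body into otx2_do_set_rx_mode() lets otx2_open() program the RX filter state synchronously, while the workqueue path keeps working through a thin container_of() wrapper. The general shape of that refactor (names illustrative):

	#include <linux/kernel.h>
	#include <linux/workqueue.h>

	struct nic {
		struct work_struct rx_mode_work;
		/* ... */
	};

	static void nic_do_set_rx_mode(struct nic *nic)
	{
		/* program hardware filters; callable from open() and from the work */
	}

	static void nic_rx_mode_work_handler(struct work_struct *work)
	{
		struct nic *nic = container_of(work, struct nic, rx_mode_work);

		nic_do_set_rx_mode(nic);
	}
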
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+index f666133a15dea..6e9058265d43e 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+@@ -743,6 +743,7 @@ static int mchp_sparx5_probe(struct platform_device *pdev)
+ 			err = dev_err_probe(sparx5->dev, PTR_ERR(serdes),
+ 					    "port %u: missing serdes\n",
+ 					    portno);
++			of_node_put(portnp);
+ 			goto cleanup_config;
+ 		}
+ 		config->portno = portno;
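
This is the standard refcount fix for device-tree loops: for_each_available_child_of_node() holds a reference on the current child, so any early exit must drop it with of_node_put() (the ocelot_vsc7514 hunk further down applies the same fix). A minimal sketch, where port_setup() is an illustrative stand-in:

	#include <linux/of.h>

	static int port_setup(struct device_node *child) { return 0; }	/* illustrative */

	static int scan_ports(struct device_node *np)
	{
		struct device_node *child;
		int err;

		for_each_available_child_of_node(np, child) {
			err = port_setup(child);
			if (err) {
				of_node_put(child);	/* drop the iterator's reference */
				return err;
			}
		}
		return 0;
	}
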
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index acfbe94b52918..ece2ddadaf3bf 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -568,23 +568,6 @@ static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
+ 	return 0;
+ }
+ 
+-u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+-{
+-	struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
+-	u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
+-	u32 rew_op = 0;
+-
+-	if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
+-		rew_op = ptp_cmd;
+-		rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
+-	} else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
+-		rew_op = ptp_cmd;
+-	}
+-
+-	return rew_op;
+-}
+-EXPORT_SYMBOL(ocelot_ptp_rew_op);
+-
+ static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb,
+ 				       unsigned int ptp_class)
+ {
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index e9d260d84bf33..855833fd42e3d 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -8,6 +8,7 @@
+  * Copyright 2020-2021 NXP Semiconductors
+  */
+ 
++#include <linux/dsa/ocelot.h>
+ #include <linux/if_bridge.h>
+ #include <net/pkt_cls.h>
+ #include "ocelot.h"
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+index 4bd7e9d9ec61c..03cfa0dc7bf99 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -972,6 +972,7 @@ static int mscc_ocelot_init_ports(struct platform_device *pdev,
+ 		target = ocelot_regmap_init(ocelot, res);
+ 		if (IS_ERR(target)) {
+ 			err = PTR_ERR(target);
++			of_node_put(portnp);
+ 			goto out_teardown;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+index 11c83a99b0140..f469950c72657 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
+@@ -182,15 +182,21 @@ static int
+ nfp_bpf_check_mtu(struct nfp_app *app, struct net_device *netdev, int new_mtu)
+ {
+ 	struct nfp_net *nn = netdev_priv(netdev);
+-	unsigned int max_mtu;
++	struct nfp_bpf_vnic *bv;
++	struct bpf_prog *prog;
+ 
+ 	if (~nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
+ 		return 0;
+ 
+-	max_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
+-	if (new_mtu > max_mtu) {
+-		nn_info(nn, "BPF offload active, MTU over %u not supported\n",
+-			max_mtu);
++	if (nn->xdp_hw.prog) {
++		prog = nn->xdp_hw.prog;
++	} else {
++		bv = nn->app_priv;
++		prog = bv->tc_prog;
++	}
++
++	if (nfp_bpf_offload_check_mtu(nn, prog, new_mtu)) {
++		nn_info(nn, "BPF offload active, potential packet access beyond hardware packet boundary");
+ 		return -EBUSY;
+ 	}
+ 	return 0;
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
+index d0e17eebddd94..16841bb750b72 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
++++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
+@@ -560,6 +560,8 @@ bool nfp_is_subprog_start(struct nfp_insn_meta *meta);
+ void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog);
+ int nfp_bpf_jit(struct nfp_prog *prog);
+ bool nfp_bpf_supported_opcode(u8 code);
++bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
++			       unsigned int mtu);
+ 
+ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
+ 		    int prev_insn_idx);
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+index 53851853562c6..9d97cd281f18e 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+@@ -481,19 +481,28 @@ int nfp_bpf_event_output(struct nfp_app_bpf *bpf, const void *data,
+ 	return 0;
+ }
+ 
++bool nfp_bpf_offload_check_mtu(struct nfp_net *nn, struct bpf_prog *prog,
++			       unsigned int mtu)
++{
++	unsigned int fw_mtu, pkt_off;
++
++	fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
++	pkt_off = min(prog->aux->max_pkt_offset, mtu);
++
++	return fw_mtu < pkt_off;
++}
++
+ static int
+ nfp_net_bpf_load(struct nfp_net *nn, struct bpf_prog *prog,
+ 		 struct netlink_ext_ack *extack)
+ {
+ 	struct nfp_prog *nfp_prog = prog->aux->offload->dev_priv;
+-	unsigned int fw_mtu, pkt_off, max_stack, max_prog_len;
++	unsigned int max_stack, max_prog_len;
+ 	dma_addr_t dma_addr;
+ 	void *img;
+ 	int err;
+ 
+-	fw_mtu = nn_readb(nn, NFP_NET_CFG_BPF_INL_MTU) * 64 - 32;
+-	pkt_off = min(prog->aux->max_pkt_offset, nn->dp.netdev->mtu);
+-	if (fw_mtu < pkt_off) {
++	if (nfp_bpf_offload_check_mtu(nn, prog, nn->dp.netdev->mtu)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "BPF offload not supported with potential packet access beyond HW packet split boundary");
+ 		return -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 1c7f9ed6f1c19..826780e5aa49a 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1184,19 +1184,17 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
+ 		edev->devlink = qed_ops->common->devlink_register(cdev);
+ 		if (IS_ERR(edev->devlink)) {
+ 			DP_NOTICE(edev, "Cannot register devlink\n");
++			rc = PTR_ERR(edev->devlink);
+ 			edev->devlink = NULL;
+-			/* Go on, we can live without devlink */
++			goto err3;
+ 		}
+ 	} else {
+ 		struct net_device *ndev = pci_get_drvdata(pdev);
++		struct qed_devlink *qdl;
+ 
+ 		edev = netdev_priv(ndev);
+-
+-		if (edev->devlink) {
+-			struct qed_devlink *qdl = devlink_priv(edev->devlink);
+-
+-			qdl->cdev = cdev;
+-		}
++		qdl = devlink_priv(edev->devlink);
++		qdl->cdev = cdev;
+ 		edev->cdev = cdev;
+ 		memset(&edev->stats, 0, sizeof(edev->stats));
+ 		memcpy(&edev->dev_info, &dev_info, sizeof(dev_info));
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 55411c100a0e5..6a566676ec0e9 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -157,6 +157,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
+ 	{ PCI_VDEVICE(REALTEK,	0x8129) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8136), RTL_CFG_NO_GBIT },
+ 	{ PCI_VDEVICE(REALTEK,	0x8161) },
++	{ PCI_VDEVICE(REALTEK,	0x8162) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8167) },
+ 	{ PCI_VDEVICE(REALTEK,	0x8168) },
+ 	{ PCI_VDEVICE(NCUBE,	0x8168) },
+diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c
+index 4bd3ef8f3384e..c4fe3c48ac46a 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port_common.c
++++ b/drivers/net/ethernet/sfc/mcdi_port_common.c
+@@ -132,16 +132,27 @@ void mcdi_to_ethtool_linkset(u32 media, u32 cap, unsigned long *linkset)
+ 	case MC_CMD_MEDIA_SFP_PLUS:
+ 	case MC_CMD_MEDIA_QSFP_PLUS:
+ 		SET_BIT(FIBRE);
+-		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN))
++		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) {
+ 			SET_BIT(1000baseT_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN))
+-			SET_BIT(10000baseT_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN))
++			SET_BIT(1000baseX_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) {
++			SET_BIT(10000baseCR_Full);
++			SET_BIT(10000baseLR_Full);
++			SET_BIT(10000baseSR_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) {
+ 			SET_BIT(40000baseCR4_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN))
++			SET_BIT(40000baseSR4_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) {
+ 			SET_BIT(100000baseCR4_Full);
+-		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN))
++			SET_BIT(100000baseSR4_Full);
++		}
++		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) {
+ 			SET_BIT(25000baseCR_Full);
++			SET_BIT(25000baseSR_Full);
++		}
+ 		if (cap & (1 << MC_CMD_PHY_CAP_50000FDX_LBN))
+ 			SET_BIT(50000baseCR2_Full);
+ 		break;
+@@ -192,15 +203,19 @@ u32 ethtool_linkset_to_mcdi_cap(const unsigned long *linkset)
+ 		result |= (1 << MC_CMD_PHY_CAP_100FDX_LBN);
+ 	if (TEST_BIT(1000baseT_Half))
+ 		result |= (1 << MC_CMD_PHY_CAP_1000HDX_LBN);
+-	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full))
++	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full) ||
++			TEST_BIT(1000baseX_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_1000FDX_LBN);
+-	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full))
++	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full) ||
++			TEST_BIT(10000baseCR_Full) || TEST_BIT(10000baseLR_Full) ||
++			TEST_BIT(10000baseSR_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_10000FDX_LBN);
+-	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full))
++	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full) ||
++			TEST_BIT(40000baseSR4_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_40000FDX_LBN);
+-	if (TEST_BIT(100000baseCR4_Full))
++	if (TEST_BIT(100000baseCR4_Full) || TEST_BIT(100000baseSR4_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_100000FDX_LBN);
+-	if (TEST_BIT(25000baseCR_Full))
++	if (TEST_BIT(25000baseCR_Full) || TEST_BIT(25000baseSR_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_25000FDX_LBN);
+ 	if (TEST_BIT(50000baseCR2_Full))
+ 		result |= (1 << MC_CMD_PHY_CAP_50000FDX_LBN);
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index a39c5143b3864..797e51802ccbb 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -648,7 +648,7 @@ static int efx_ptp_get_attributes(struct efx_nic *efx)
+ 	} else if (rc == -EINVAL) {
+ 		fmt = MC_CMD_PTP_OUT_GET_ATTRIBUTES_SECONDS_NANOSECONDS;
+ 	} else if (rc == -EPERM) {
+-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
++		pci_info(efx->pci_dev, "no PTP support\n");
+ 		return rc;
+ 	} else {
+ 		efx_mcdi_display_error(efx, MC_CMD_PTP, sizeof(inbuf),
+@@ -824,7 +824,7 @@ static int efx_ptp_disable(struct efx_nic *efx)
+ 	 * should only have been called during probe.
+ 	 */
+ 	if (rc == -ENOSYS || rc == -EPERM)
+-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
++		pci_info(efx->pci_dev, "no PTP support\n");
+ 	else if (rc)
+ 		efx_mcdi_display_error(efx, MC_CMD_PTP,
+ 				       MC_CMD_PTP_IN_DISABLE_LEN,
+diff --git a/drivers/net/ethernet/sfc/siena_sriov.c b/drivers/net/ethernet/sfc/siena_sriov.c
+index 83dcfcae3d4b5..441e7f3e53751 100644
+--- a/drivers/net/ethernet/sfc/siena_sriov.c
++++ b/drivers/net/ethernet/sfc/siena_sriov.c
+@@ -1057,7 +1057,7 @@ void efx_siena_sriov_probe(struct efx_nic *efx)
+ 		return;
+ 
+ 	if (efx_siena_sriov_cmd(efx, false, &efx->vi_scale, &count)) {
+-		netif_info(efx, probe, efx->net_dev, "no SR-IOV VFs probed\n");
++		pci_info(efx->pci_dev, "no SR-IOV VFs probed\n");
+ 		return;
+ 	}
+ 	if (count > 0 && count > max_vfs)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 8160087ee92f2..1c4ea0b1b845b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -786,8 +786,6 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 		goto disable;
+ 	if (qopt->num_entries >= dep)
+ 		return -EINVAL;
+-	if (!qopt->base_time)
+-		return -ERANGE;
+ 	if (!qopt->cycle_time)
+ 		return -ERANGE;
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 0c75e0576ee1f..1ef0aaef5c61c 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -1299,10 +1299,8 @@ struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params)
+ 	if (!ale)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ale->p0_untag_vid_mask =
+-		devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID),
+-				   sizeof(unsigned long),
+-				   GFP_KERNEL);
++	ale->p0_untag_vid_mask = devm_bitmap_zalloc(params->dev, VLAN_N_VID,
++						    GFP_KERNEL);
+ 	if (!ale->p0_untag_vid_mask)
+ 		return ERR_PTR(-ENOMEM);
+ 
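
The cpsw_ale hunk above replaces an open-coded devm_kmalloc_array(BITS_TO_LONGS(n), sizeof(unsigned long)) with devm_bitmap_zalloc(), which takes its size in bits, returns the bitmap zeroed, and ties its lifetime to the device. A minimal sketch of the same pattern, assuming a hypothetical driver with a fixed ID space (names are illustrative):

#include <linux/bitmap.h>
#include <linux/device.h>
#include <linux/gfp.h>

#define EXAMPLE_N_IDS	4096	/* illustrative bit count */

static unsigned long *example_alloc_id_mask(struct device *dev)
{
	/* Sized in bits, returned zero-filled, and freed automatically
	 * when the device is unbound -- no explicit bitmap_free().
	 */
	return devm_bitmap_zalloc(dev, EXAMPLE_N_IDS, GFP_KERNEL);
}
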
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index c674e34b68394..399bc92562f42 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -412,8 +412,20 @@ static int emac_set_coalesce(struct net_device *ndev,
+ 	u32 int_ctrl, num_interrupts = 0;
+ 	u32 prescale = 0, addnl_dvdr = 1, coal_intvl = 0;
+ 
+-	if (!coal->rx_coalesce_usecs)
+-		return -EINVAL;
++	if (!coal->rx_coalesce_usecs) {
++		priv->coal_intvl = 0;
++
++		switch (priv->version) {
++		case EMAC_VERSION_2:
++			emac_ctrl_write(EMAC_DM646X_CMINTCTRL, 0);
++			break;
++		default:
++			emac_ctrl_write(EMAC_CTRL_EWINTTCNT, 0);
++			break;
++		}
++
++		return 0;
++	}
+ 
+ 	coal_intvl = coal->rx_coalesce_usecs;
+ 
+diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
+index e9258a9f3702c..31bc02421dd4e 100644
+--- a/drivers/net/ifb.c
++++ b/drivers/net/ifb.c
+@@ -76,7 +76,9 @@ static void ifb_ri_tasklet(struct tasklet_struct *t)
+ 
+ 	while ((skb = __skb_dequeue(&txp->tq)) != NULL) {
+ 		skb->redirected = 0;
++#ifdef CONFIG_NET_CLS_ACT
+ 		skb->tc_skip_classify = 1;
++#endif
+ 
+ 		u64_stats_update_begin(&txp->tsync);
+ 		txp->tx_packets++;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 5c928f827173c..aec0fcefdccd6 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -863,9 +863,9 @@ static int ksz9031_config_init(struct phy_device *phydev)
+ 				MII_KSZ9031RN_TX_DATA_PAD_SKEW, 4,
+ 				tx_data_skews, 4, &update);
+ 
+-		if (update && phydev->interface != PHY_INTERFACE_MODE_RGMII)
++		if (update && !phy_interface_is_rgmii(phydev))
+ 			phydev_warn(phydev,
+-				    "*-skew-ps values should be used only with phy-mode = \"rgmii\"\n");
++				    "*-skew-ps values should be used only with RGMII PHY modes\n");
+ 
+ 		/* Silicon Errata Sheet (DS80000691D or DS80000692D):
+ 		 * When the device links in the 1000BASE-T slave mode only,
+@@ -1593,8 +1593,9 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_sset_count = kszphy_get_sset_count,
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+-	.suspend	= genphy_suspend,
+-	.resume		= genphy_resume,
++	/* No suspend/resume callbacks because of errata DS80000700A,
++	 * receiver error following software power down.
++	 */
+ }, {
+ 	.phy_id		= PHY_ID_KSZ8041RNLI,
+ 	.phy_id_mask	= MICREL_PHY_ID_MASK,
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index cbf344c5db610..451adee132139 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -815,7 +815,12 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
+ 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
+ 
+ 	/* Restart the PHY */
+-	_phy_start_aneg(phydev);
++	if (phy_is_started(phydev)) {
++		phydev->state = PHY_UP;
++		phy_trigger_machine(phydev);
++	} else {
++		_phy_start_aneg(phydev);
++	}
+ 
+ 	mutex_unlock(&phydev->lock);
+ 	return 0;
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 0d3d9c3ee83c8..d18e4e76a5df4 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -1332,7 +1332,10 @@ void phylink_suspend(struct phylink *pl, bool mac_wol)
+ 		 * but one would hope all packets have been sent. This
+ 		 * also means phylink_resolve() will do nothing.
+ 		 */
+-		netif_carrier_off(pl->netdev);
++		if (pl->netdev)
++			netif_carrier_off(pl->netdev);
++		else
++			pl->old_link_state = false;
+ 
+ 		/* We do not call mac_link_down() here as we want the
+ 		 * link to remain up to receive the WoL packets.
+@@ -1721,7 +1724,7 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
+ 		return -EOPNOTSUPP;
+ 
+ 	if (!phylink_test(pl->supported, Asym_Pause) &&
+-	    !pause->autoneg && pause->rx_pause != pause->tx_pause)
++	    pause->rx_pause != pause->tx_pause)
+ 		return -EINVAL;
+ 
+ 	pause_state = 0;
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 6e87f1fc4874a..ceecee4081384 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -3749,7 +3749,6 @@ vmxnet3_suspend(struct device *device)
+ 	vmxnet3_free_intr_resources(adapter);
+ 
+ 	netif_device_detach(netdev);
+-	netif_tx_stop_all_queues(netdev);
+ 
+ 	/* Create wake-up filters. */
+ 	pmConf = adapter->pm_conf;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 2b1b944d4b281..ec06d3bb9beeb 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -35,6 +35,7 @@
+ #include <net/l3mdev.h>
+ #include <net/fib_rules.h>
+ #include <net/netns/generic.h>
++#include <net/netfilter/nf_conntrack.h>
+ 
+ #define DRV_NAME	"vrf"
+ #define DRV_VERSION	"1.1"
+@@ -424,12 +425,26 @@ static int vrf_local_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	return NETDEV_TX_OK;
+ }
+ 
++static void vrf_nf_set_untracked(struct sk_buff *skb)
++{
++	if (skb_get_nfct(skb) == 0)
++		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
++}
++
++static void vrf_nf_reset_ct(struct sk_buff *skb)
++{
++	if (skb_get_nfct(skb) == IP_CT_UNTRACKED)
++		nf_reset_ct(skb);
++}
++
+ #if IS_ENABLED(CONFIG_IPV6)
+ static int vrf_ip6_local_out(struct net *net, struct sock *sk,
+ 			     struct sk_buff *skb)
+ {
+ 	int err;
+ 
++	vrf_nf_reset_ct(skb);
++
+ 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net,
+ 		      sk, skb, NULL, skb_dst(skb)->dev, dst_output);
+ 
+@@ -508,6 +523,8 @@ static int vrf_ip_local_out(struct net *net, struct sock *sk,
+ {
+ 	int err;
+ 
++	vrf_nf_reset_ct(skb);
++
+ 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, skb_dst(skb)->dev, dst_output);
+ 	if (likely(err == 1))
+@@ -626,8 +643,7 @@ static void vrf_finish_direct(struct sk_buff *skb)
+ 		skb_pull(skb, ETH_HLEN);
+ 	}
+ 
+-	/* reset skb device */
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -641,7 +657,7 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
+ 	struct neighbour *neigh;
+ 	int ret;
+ 
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ 
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->dev = dev;
+@@ -752,6 +768,8 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);
+ 
+@@ -859,7 +877,7 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ 	bool is_v6gw = false;
+ 	int ret = -EINVAL;
+ 
+-	nf_reset_ct(skb);
++	vrf_nf_reset_ct(skb);
+ 
+ 	/* Be paranoid, rather than too clever. */
+ 	if (unlikely(skb_headroom(skb) < hh_len && dev->header_ops)) {
+@@ -987,6 +1005,8 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,
+ 
+ 	skb->dev = vrf_dev;
+ 
++	vrf_nf_set_untracked(skb);
++
+ 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
+ 		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 2f9be182fbfbb..64c7145b51a2e 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2690,9 +2690,16 @@ static int ath10k_core_copy_target_iram(struct ath10k *ar)
+ 	int i, ret;
+ 	u32 len, remaining_len;
+ 
+-	hw_mem = ath10k_coredump_get_mem_layout(ar);
++	/* copy target iram feature must work also when
++	 * ATH10K_FW_CRASH_DUMP_RAM_DATA is disabled, so use
++	 * _ath10k_coredump_get_mem_layout() to accomplish that
++	 */
++	hw_mem = _ath10k_coredump_get_mem_layout(ar);
+ 	if (!hw_mem)
+-		return -ENOMEM;
++		/* if CONFIG_DEV_COREDUMP is disabled we get NULL, then
++		 * just silently disable the feature by doing nothing
++		 */
++		return 0;
+ 
+ 	for (i = 0; i < hw_mem->region_table.size; i++) {
+ 		tmp = &hw_mem->region_table.regions[i];
+diff --git a/drivers/net/wireless/ath/ath10k/coredump.c b/drivers/net/wireless/ath/ath10k/coredump.c
+index 7eb72290a925c..55e7e11d06d94 100644
+--- a/drivers/net/wireless/ath/ath10k/coredump.c
++++ b/drivers/net/wireless/ath/ath10k/coredump.c
+@@ -1447,11 +1447,17 @@ static u32 ath10k_coredump_get_ramdump_size(struct ath10k *ar)
+ 
+ const struct ath10k_hw_mem_layout *ath10k_coredump_get_mem_layout(struct ath10k *ar)
+ {
+-	int i;
+-
+ 	if (!test_bit(ATH10K_FW_CRASH_DUMP_RAM_DATA, &ath10k_coredump_mask))
+ 		return NULL;
+ 
++	return _ath10k_coredump_get_mem_layout(ar);
++}
++EXPORT_SYMBOL(ath10k_coredump_get_mem_layout);
++
++const struct ath10k_hw_mem_layout *_ath10k_coredump_get_mem_layout(struct ath10k *ar)
++{
++	int i;
++
+ 	if (WARN_ON(ar->target_version == 0))
+ 		return NULL;
+ 
+@@ -1464,7 +1470,6 @@ const struct ath10k_hw_mem_layout *ath10k_coredump_get_mem_layout(struct ath10k
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(ath10k_coredump_get_mem_layout);
+ 
+ struct ath10k_fw_crash_data *ath10k_coredump_new(struct ath10k *ar)
+ {
+diff --git a/drivers/net/wireless/ath/ath10k/coredump.h b/drivers/net/wireless/ath/ath10k/coredump.h
+index 42404e246e0e9..240d705150888 100644
+--- a/drivers/net/wireless/ath/ath10k/coredump.h
++++ b/drivers/net/wireless/ath/ath10k/coredump.h
+@@ -176,6 +176,7 @@ int ath10k_coredump_register(struct ath10k *ar);
+ void ath10k_coredump_unregister(struct ath10k *ar);
+ void ath10k_coredump_destroy(struct ath10k *ar);
+ 
++const struct ath10k_hw_mem_layout *_ath10k_coredump_get_mem_layout(struct ath10k *ar);
+ const struct ath10k_hw_mem_layout *ath10k_coredump_get_mem_layout(struct ath10k *ar);
+ 
+ #else /* CONFIG_DEV_COREDUMP */
+@@ -214,6 +215,12 @@ ath10k_coredump_get_mem_layout(struct ath10k *ar)
+ 	return NULL;
+ }
+ 
++static inline const struct ath10k_hw_mem_layout *
++_ath10k_coredump_get_mem_layout(struct ath10k *ar)
++{
++	return NULL;
++}
++
+ #endif /* CONFIG_DEV_COREDUMP */
+ 
+ #endif /* _COREDUMP_H_ */
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index c272b290fa73d..1f73fbfee0c06 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -993,8 +993,12 @@ static void ath10k_mac_vif_beacon_cleanup(struct ath10k_vif *arvif)
+ 	ath10k_mac_vif_beacon_free(arvif);
+ 
+ 	if (arvif->beacon_buf) {
+-		dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
+-				  arvif->beacon_buf, arvif->beacon_paddr);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL)
++			kfree(arvif->beacon_buf);
++		else
++			dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
++					  arvif->beacon_buf,
++					  arvif->beacon_paddr);
+ 		arvif->beacon_buf = NULL;
+ 	}
+ }
+@@ -1048,7 +1052,7 @@ static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)
+ 	arg.channel.min_power = 0;
+ 	arg.channel.max_power = channel->max_power * 2;
+ 	arg.channel.max_reg_power = channel->max_reg_power * 2;
+-	arg.channel.max_antenna_gain = channel->max_antenna_gain * 2;
++	arg.channel.max_antenna_gain = channel->max_antenna_gain;
+ 
+ 	reinit_completion(&ar->vdev_setup_done);
+ 	reinit_completion(&ar->vdev_delete_done);
+@@ -1494,7 +1498,7 @@ static int ath10k_vdev_start_restart(struct ath10k_vif *arvif,
+ 	arg.channel.min_power = 0;
+ 	arg.channel.max_power = chandef->chan->max_power * 2;
+ 	arg.channel.max_reg_power = chandef->chan->max_reg_power * 2;
+-	arg.channel.max_antenna_gain = chandef->chan->max_antenna_gain * 2;
++	arg.channel.max_antenna_gain = chandef->chan->max_antenna_gain;
+ 
+ 	if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ 		arg.ssid = arvif->u.ap.ssid;
+@@ -3422,7 +3426,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
+ 			ch->min_power = 0;
+ 			ch->max_power = channel->max_power * 2;
+ 			ch->max_reg_power = channel->max_reg_power * 2;
+-			ch->max_antenna_gain = channel->max_antenna_gain * 2;
++			ch->max_antenna_gain = channel->max_antenna_gain;
+ 			ch->reg_class_id = 0; /* FIXME */
+ 
+ 			/* FIXME: why use only legacy modes, why not any
+@@ -5576,10 +5580,25 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 	if (vif->type == NL80211_IFTYPE_ADHOC ||
+ 	    vif->type == NL80211_IFTYPE_MESH_POINT ||
+ 	    vif->type == NL80211_IFTYPE_AP) {
+-		arvif->beacon_buf = dma_alloc_coherent(ar->dev,
+-						       IEEE80211_MAX_FRAME_LEN,
+-						       &arvif->beacon_paddr,
+-						       GFP_ATOMIC);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL) {
++			arvif->beacon_buf = kmalloc(IEEE80211_MAX_FRAME_LEN,
++						    GFP_KERNEL);
++
++			/* Using a kernel pointer in place of a dma_addr_t
++			 * token can lead to undefined behavior if that
++			 * makes it into cache management functions. Use a
++			 * known-invalid address token instead, which
++			 * avoids the warning and makes it easier to catch
++			 * bugs if it does end up getting used.
++			 */
++			arvif->beacon_paddr = DMA_MAPPING_ERROR;
++		} else {
++			arvif->beacon_buf =
++				dma_alloc_coherent(ar->dev,
++						   IEEE80211_MAX_FRAME_LEN,
++						   &arvif->beacon_paddr,
++						   GFP_ATOMIC);
++		}
+ 		if (!arvif->beacon_buf) {
+ 			ret = -ENOMEM;
+ 			ath10k_warn(ar, "failed to allocate beacon buffer: %d\n",
+@@ -5794,8 +5813,12 @@ err_vdev_delete:
+ 
+ err:
+ 	if (arvif->beacon_buf) {
+-		dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
+-				  arvif->beacon_buf, arvif->beacon_paddr);
++		if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL)
++			kfree(arvif->beacon_buf);
++		else
++			dma_free_coherent(ar->dev, IEEE80211_MAX_FRAME_LEN,
++					  arvif->beacon_buf,
++					  arvif->beacon_paddr);
+ 		arvif->beacon_buf = NULL;
+ 	}
+ 
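
In the mac.c hunks above, the allocation strategy and both free paths must stay in lockstep: on high-latency (ATH10K_DEV_TYPE_HL, i.e. SDIO/USB) targets the beacon buffer is a plain kmalloc() allocation with DMA_MAPPING_ERROR stored as a deliberately invalid token in beacon_paddr, and every release site branches on the same dev_type test. A condensed sketch of the pairing (struct and function names are hypothetical):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/sizes.h>
#include <linux/slab.h>

struct example_vif {
	void *beacon_buf;
	dma_addr_t beacon_paddr;
};

static int example_beacon_alloc(struct device *dev, struct example_vif *vif,
				bool high_latency)
{
	if (high_latency) {
		/* No DMA on this bus: plain buffer, plus a poisoned
		 * token so a stray dma_unmap/dma_free on it is caught.
		 */
		vif->beacon_buf = kmalloc(SZ_4K, GFP_KERNEL);
		vif->beacon_paddr = DMA_MAPPING_ERROR;
	} else {
		vif->beacon_buf = dma_alloc_coherent(dev, SZ_4K,
						     &vif->beacon_paddr,
						     GFP_ATOMIC);
	}
	return vif->beacon_buf ? 0 : -ENOMEM;
}

static void example_beacon_free(struct device *dev, struct example_vif *vif,
				bool high_latency)
{
	if (!vif->beacon_buf)
		return;
	if (high_latency)	/* must mirror the allocation branch */
		kfree(vif->beacon_buf);
	else
		dma_free_coherent(dev, SZ_4K, vif->beacon_buf,
				  vif->beacon_paddr);
	vif->beacon_buf = NULL;
}
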
+diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
+index 07e478f9a808c..80fcb917fe4e1 100644
+--- a/drivers/net/wireless/ath/ath10k/qmi.c
++++ b/drivers/net/wireless/ath/ath10k/qmi.c
+@@ -864,7 +864,8 @@ static void ath10k_qmi_event_server_exit(struct ath10k_qmi *qmi)
+ 
+ 	ath10k_qmi_remove_msa_permission(qmi);
+ 	ath10k_core_free_board_files(ar);
+-	if (!test_bit(ATH10K_SNOC_FLAG_UNREGISTERING, &ar_snoc->flags))
++	if (!test_bit(ATH10K_SNOC_FLAG_UNREGISTERING, &ar_snoc->flags) &&
++	    !test_bit(ATH10K_SNOC_FLAG_MODEM_STOPPED, &ar_snoc->flags))
+ 		ath10k_snoc_fw_crashed_dump(ar);
+ 
+ 	ath10k_snoc_fw_indication(ar, ATH10K_QMI_EVENT_FW_DOWN_IND);
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index b746052737e0b..eb705214f3f0a 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -1363,8 +1363,11 @@ static void ath10k_rx_indication_async_work(struct work_struct *work)
+ 		ep->ep_ops.ep_rx_complete(ar, skb);
+ 	}
+ 
+-	if (test_bit(ATH10K_FLAG_CORE_REGISTERED, &ar->dev_flags))
++	if (test_bit(ATH10K_FLAG_CORE_REGISTERED, &ar->dev_flags)) {
++		local_bh_disable();
+ 		napi_schedule(&ar->napi);
++		local_bh_enable();
++	}
+ }
+ 
+ static int ath10k_sdio_read_rtc_state(struct ath10k_sdio *ar_sdio, unsigned char *state)
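
The sdio hunk above brackets napi_schedule() with local_bh_disable()/local_bh_enable() because the call site is a work handler, i.e. process context: napi_schedule() raises NET_RX_SOFTIRQ, and a softirq raised from process context without bottom halves disabled may sit unserviced; with the bracket, local_bh_enable() runs the pending softirq on return. A generic sketch of the pattern:

#include <linux/netdevice.h>

/* Illustrative only: scheduling NAPI from process context (e.g. a
 * work item) rather than from an interrupt handler.
 */
static void example_kick_napi(struct napi_struct *napi)
{
	local_bh_disable();
	napi_schedule(napi);
	local_bh_enable();	/* pending NET_RX softirq runs here */
}
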
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index ea00fbb156015..9513ab696fff1 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -12,6 +12,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/remoteproc/qcom_rproc.h>
+ #include <linux/of_address.h>
+ #include <linux/iommu.h>
+ 
+@@ -1477,6 +1478,74 @@ void ath10k_snoc_fw_crashed_dump(struct ath10k *ar)
+ 	mutex_unlock(&ar->dump_mutex);
+ }
+ 
++static int ath10k_snoc_modem_notify(struct notifier_block *nb, unsigned long action,
++				    void *data)
++{
++	struct ath10k_snoc *ar_snoc = container_of(nb, struct ath10k_snoc, nb);
++	struct ath10k *ar = ar_snoc->ar;
++	struct qcom_ssr_notify_data *notify_data = data;
++
++	switch (action) {
++	case QCOM_SSR_BEFORE_POWERUP:
++		ath10k_dbg(ar, ATH10K_DBG_SNOC, "received modem starting event\n");
++		clear_bit(ATH10K_SNOC_FLAG_MODEM_STOPPED, &ar_snoc->flags);
++		break;
++
++	case QCOM_SSR_AFTER_POWERUP:
++		ath10k_dbg(ar, ATH10K_DBG_SNOC, "received modem running event\n");
++		break;
++
++	case QCOM_SSR_BEFORE_SHUTDOWN:
++		ath10k_dbg(ar, ATH10K_DBG_SNOC, "received modem %s event\n",
++			   notify_data->crashed ? "crashed" : "stopping");
++		if (!notify_data->crashed)
++			set_bit(ATH10K_SNOC_FLAG_MODEM_STOPPED, &ar_snoc->flags);
++		else
++			clear_bit(ATH10K_SNOC_FLAG_MODEM_STOPPED, &ar_snoc->flags);
++		break;
++
++	case QCOM_SSR_AFTER_SHUTDOWN:
++		ath10k_dbg(ar, ATH10K_DBG_SNOC, "received modem offline event\n");
++		break;
++
++	default:
++		ath10k_err(ar, "received unrecognized event %lu\n", action);
++		break;
++	}
++
++	return NOTIFY_OK;
++}
++
++static int ath10k_modem_init(struct ath10k *ar)
++{
++	struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
++	void *notifier;
++	int ret;
++
++	ar_snoc->nb.notifier_call = ath10k_snoc_modem_notify;
++
++	notifier = qcom_register_ssr_notifier("mpss", &ar_snoc->nb);
++	if (IS_ERR(notifier)) {
++		ret = PTR_ERR(notifier);
++		ath10k_err(ar, "failed to initialize modem notifier: %d\n", ret);
++		return ret;
++	}
++
++	ar_snoc->notifier = notifier;
++
++	return 0;
++}
++
++static void ath10k_modem_deinit(struct ath10k *ar)
++{
++	int ret;
++	struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
++
++	ret = qcom_unregister_ssr_notifier(ar_snoc->notifier, &ar_snoc->nb);
++	if (ret)
++		ath10k_err(ar, "error %d unregistering notifier\n", ret);
++}
++
+ static int ath10k_setup_msa_resources(struct ath10k *ar, u32 msa_size)
+ {
+ 	struct device *dev = ar->dev;
+@@ -1740,10 +1809,17 @@ static int ath10k_snoc_probe(struct platform_device *pdev)
+ 		goto err_fw_deinit;
+ 	}
+ 
++	ret = ath10k_modem_init(ar);
++	if (ret)
++		goto err_qmi_deinit;
++
+ 	ath10k_dbg(ar, ATH10K_DBG_SNOC, "snoc probe\n");
+ 
+ 	return 0;
+ 
++err_qmi_deinit:
++	ath10k_qmi_deinit(ar);
++
+ err_fw_deinit:
+ 	ath10k_fw_deinit(ar);
+ 
+@@ -1771,6 +1847,7 @@ static int ath10k_snoc_free_resources(struct ath10k *ar)
+ 	ath10k_fw_deinit(ar);
+ 	ath10k_snoc_free_irq(ar);
+ 	ath10k_snoc_release_resource(ar);
++	ath10k_modem_deinit(ar);
+ 	ath10k_qmi_deinit(ar);
+ 	ath10k_core_destroy(ar);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.h b/drivers/net/wireless/ath/ath10k/snoc.h
+index 5095d1893681b..d4bce17076960 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.h
++++ b/drivers/net/wireless/ath/ath10k/snoc.h
+@@ -6,6 +6,8 @@
+ #ifndef _SNOC_H_
+ #define _SNOC_H_
+ 
++#include <linux/notifier.h>
++
+ #include "hw.h"
+ #include "ce.h"
+ #include "qmi.h"
+@@ -45,6 +47,7 @@ struct ath10k_snoc_ce_irq {
+ enum ath10k_snoc_flags {
+ 	ATH10K_SNOC_FLAG_REGISTERED,
+ 	ATH10K_SNOC_FLAG_UNREGISTERING,
++	ATH10K_SNOC_FLAG_MODEM_STOPPED,
+ 	ATH10K_SNOC_FLAG_RECOVERY,
+ 	ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK,
+ };
+@@ -75,6 +78,8 @@ struct ath10k_snoc {
+ 	struct clk_bulk_data *clks;
+ 	size_t num_clks;
+ 	struct ath10k_qmi *qmi;
++	struct notifier_block nb;
++	void *notifier;
+ 	unsigned long flags;
+ 	bool xo_cal_supported;
+ 	u32 xo_cal_data;
+diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c
+index 19b9c27e30e20..3d98f19c6ec8a 100644
+--- a/drivers/net/wireless/ath/ath10k/usb.c
++++ b/drivers/net/wireless/ath/ath10k/usb.c
+@@ -525,7 +525,7 @@ static int ath10k_usb_submit_ctrl_in(struct ath10k *ar,
+ 			      req,
+ 			      USB_DIR_IN | USB_TYPE_VENDOR |
+ 			      USB_RECIP_DEVICE, value, index, buf,
+-			      size, 2 * HZ);
++			      size, 2000);
+ 
+ 	if (ret < 0) {
+ 		ath10k_warn(ar, "Failed to read usb control message: %d\n",
+@@ -853,6 +853,11 @@ static int ath10k_usb_setup_pipe_resources(struct ath10k *ar,
+ 				   le16_to_cpu(endpoint->wMaxPacketSize),
+ 				   endpoint->bInterval);
+ 		}
++
++		/* Ignore broken descriptors. */
++		if (usb_endpoint_maxp(endpoint) == 0)
++			continue;
++
+ 		urbcount = 0;
+ 
+ 		pipe_num =
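
Both this ath10k hunk and the matching ath6kl one further down change the usb_control_msg() timeout from 2 * HZ to 2000: the parameter is in milliseconds, not jiffies, so 2 * HZ only means two seconds on a HZ=1000 kernel (and 500 ms at HZ=250). A sketch of the call with the unit made explicit (request and wrapper names are placeholders):

#include <linux/usb.h>

#define EXAMPLE_TIMEOUT_MS 2000	/* usb_control_msg() wants milliseconds */

static int example_vendor_read(struct usb_device *udev, u8 req,
			       u16 value, u16 index, void *buf, u16 size)
{
	return usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), req,
			       USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
			       value, index, buf, size, EXAMPLE_TIMEOUT_MS);
}
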
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index b8a4bbfe10b87..7c1c2658cb5f8 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -2610,6 +2610,10 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
+ 	if (ieee80211_is_beacon(hdr->frame_control))
+ 		ath10k_mac_handle_beacon(ar, skb);
+ 
++	if (ieee80211_is_beacon(hdr->frame_control) ||
++	    ieee80211_is_probe_resp(hdr->frame_control))
++		status->boottime_ns = ktime_get_boottime_ns();
++
+ 	ath10k_dbg(ar, ATH10K_DBG_MGMT,
+ 		   "event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
+ 		   skb, skb->len,
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 41c1a3d339c25..01bfd09a9d88c 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -2066,7 +2066,9 @@ struct wmi_channel {
+ 	union {
+ 		__le32 reginfo1;
+ 		struct {
++			/* note: power unit is 1 dBm */
+ 			u8 antenna_max;
++			/* note: power unit is 0.5 dBm */
+ 			u8 max_tx_power;
+ 		} __packed;
+ 	} __packed;
+@@ -2086,6 +2088,7 @@ struct wmi_channel_arg {
+ 	u32 min_power;
+ 	u32 max_power;
+ 	u32 max_reg_power;
++	/* note: power unit is 1 dBm */
+ 	u32 max_antenna_gain;
+ 	u32 reg_class_id;
+ 	enum wmi_phy_mode mode;
+diff --git a/drivers/net/wireless/ath/ath11k/dbring.c b/drivers/net/wireless/ath/ath11k/dbring.c
+index 5e1f5437b4185..fd98ba5b1130b 100644
+--- a/drivers/net/wireless/ath/ath11k/dbring.c
++++ b/drivers/net/wireless/ath/ath11k/dbring.c
+@@ -8,8 +8,7 @@
+ 
+ static int ath11k_dbring_bufs_replenish(struct ath11k *ar,
+ 					struct ath11k_dbring *ring,
+-					struct ath11k_dbring_element *buff,
+-					gfp_t gfp)
++					struct ath11k_dbring_element *buff)
+ {
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct hal_srng *srng;
+@@ -35,7 +34,7 @@ static int ath11k_dbring_bufs_replenish(struct ath11k *ar,
+ 		goto err;
+ 
+ 	spin_lock_bh(&ring->idr_lock);
+-	buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, gfp);
++	buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, GFP_ATOMIC);
+ 	spin_unlock_bh(&ring->idr_lock);
+ 	if (buf_id < 0) {
+ 		ret = -ENOBUFS;
+@@ -72,8 +71,7 @@ err:
+ }
+ 
+ static int ath11k_dbring_fill_bufs(struct ath11k *ar,
+-				   struct ath11k_dbring *ring,
+-				   gfp_t gfp)
++				   struct ath11k_dbring *ring)
+ {
+ 	struct ath11k_dbring_element *buff;
+ 	struct hal_srng *srng;
+@@ -92,11 +90,11 @@ static int ath11k_dbring_fill_bufs(struct ath11k *ar,
+ 	size = sizeof(*buff) + ring->buf_sz + align - 1;
+ 
+ 	while (num_remain > 0) {
+-		buff = kzalloc(size, gfp);
++		buff = kzalloc(size, GFP_ATOMIC);
+ 		if (!buff)
+ 			break;
+ 
+-		ret = ath11k_dbring_bufs_replenish(ar, ring, buff, gfp);
++		ret = ath11k_dbring_bufs_replenish(ar, ring, buff);
+ 		if (ret) {
+ 			ath11k_warn(ar->ab, "failed to replenish db ring num_remain %d req_ent %d\n",
+ 				    num_remain, req_entries);
+@@ -176,7 +174,7 @@ int ath11k_dbring_buf_setup(struct ath11k *ar,
+ 	ring->hp_addr = ath11k_hal_srng_get_hp_addr(ar->ab, srng);
+ 	ring->tp_addr = ath11k_hal_srng_get_tp_addr(ar->ab, srng);
+ 
+-	ret = ath11k_dbring_fill_bufs(ar, ring, GFP_KERNEL);
++	ret = ath11k_dbring_fill_bufs(ar, ring);
+ 
+ 	return ret;
+ }
+@@ -322,7 +320,7 @@ int ath11k_dbring_buffer_release_event(struct ath11k_base *ab,
+ 		}
+ 
+ 		memset(buff, 0, size);
+-		ath11k_dbring_bufs_replenish(ar, ring, buff, GFP_ATOMIC);
++		ath11k_dbring_bufs_replenish(ar, ring, buff);
+ 	}
+ 
+ 	spin_unlock_bh(&srng->lock);
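
The dbring hunks above drop the gfp_t parameter and hard-code GFP_ATOMIC because ath11k_dbring_bufs_replenish() is also reached from the buffer-release event path with srng->lock (a BH spinlock) held, where a GFP_KERNEL allocation could sleep. The general rule, sketched with an illustrative lock:

#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

static void *example_alloc_locked(size_t size)
{
	void *buf;

	spin_lock_bh(&example_lock);
	/* GFP_KERNEL may sleep and would be a bug under a spinlock;
	 * GFP_ATOMIC never sleeps (but can fail under memory pressure).
	 */
	buf = kzalloc(size, GFP_ATOMIC);
	spin_unlock_bh(&example_lock);
	return buf;
}
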
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 603d2f93ac18f..9e225f322a24d 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2342,8 +2342,10 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 	channel_num = meta_data;
+ 	center_freq = meta_data >> 16;
+ 
+-	if (center_freq >= 5935 && center_freq <= 7105) {
++	if (center_freq >= ATH11K_MIN_6G_FREQ &&
++	    center_freq <= ATH11K_MAX_6G_FREQ) {
+ 		rx_status->band = NL80211_BAND_6GHZ;
++		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+ 		rx_status->band = NL80211_BAND_2GHZ;
+ 	} else if (channel_num >= 36 && channel_num <= 173) {
+@@ -2361,8 +2363,9 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 				rx_desc, sizeof(struct hal_rx_desc));
+ 	}
+ 
+-	rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+-							 rx_status->band);
++	if (rx_status->band != NL80211_BAND_6GHZ)
++		rx_status->freq = ieee80211_channel_to_frequency(channel_num,
++								 rx_status->band);
+ 
+ 	ath11k_dp_rx_h_rate(ar, rx_desc, rx_status);
+ }
+@@ -3315,7 +3318,7 @@ static int ath11k_dp_rx_h_defrag_reo_reinject(struct ath11k *ar, struct dp_rx_ti
+ 
+ 	paddr = dma_map_single(ab->dev, defrag_skb->data,
+ 			       defrag_skb->len + skb_tailroom(defrag_skb),
+-			       DMA_FROM_DEVICE);
++			       DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ab->dev, paddr))
+ 		return -ENOMEM;
+ 
+@@ -3380,7 +3383,7 @@ err_free_idr:
+ 	spin_unlock_bh(&rx_refill_ring->idr_lock);
+ err_unmap_dma:
+ 	dma_unmap_single(ab->dev, paddr, defrag_skb->len + skb_tailroom(defrag_skb),
+-			 DMA_FROM_DEVICE);
++			 DMA_TO_DEVICE);
+ 	return ret;
+ }
+ 
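
The dp_rx hunk above fixes the 6 GHz case by taking the frequency straight from the descriptor metadata: ieee80211_channel_to_frequency() needs a band, and 6 GHz channel numbers overlap the 2.4/5 GHz numbering, so a bare channel number cannot be converted reliably. A sketch of the resulting logic (the 5935/7105 bounds mirror the literals the patch replaces with ATH11K_MIN/MAX_6G_FREQ; the helper itself is illustrative):

#include <net/mac80211.h>

#define EXAMPLE_MIN_6G_FREQ	5935
#define EXAMPLE_MAX_6G_FREQ	7105

static void example_fill_rx_freq(struct ieee80211_rx_status *rx_status,
				 u16 center_freq, u8 channel_num)
{
	if (center_freq >= EXAMPLE_MIN_6G_FREQ &&
	    center_freq <= EXAMPLE_MAX_6G_FREQ) {
		/* 6 GHz: trust the frequency from the descriptor */
		rx_status->band = NL80211_BAND_6GHZ;
		rx_status->freq = center_freq;
		return;
	}

	rx_status->band = (channel_num >= 1 && channel_num <= 14) ?
			  NL80211_BAND_2GHZ : NL80211_BAND_5GHZ;
	rx_status->freq = ieee80211_channel_to_frequency(channel_num,
							 rx_status->band);
}
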
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index e9b3689331ec2..89a64ebd620f3 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6590,7 +6590,7 @@ static int __ath11k_mac_register(struct ath11k *ar)
+ 		ar->hw->wiphy->interface_modes &= ~BIT(NL80211_IFTYPE_MONITOR);
+ 
+ 	/* Apply the regd received during initialization */
+-	ret = ath11k_regd_update(ar, true);
++	ret = ath11k_regd_update(ar);
+ 	if (ret) {
+ 		ath11k_err(ar->ab, "ath11k regd update failed: %d\n", ret);
+ 		goto err_unregister_hw;
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index b5e34d670715e..4c5071b7d11dc 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2707,8 +2707,10 @@ static void ath11k_qmi_driver_event_work(struct work_struct *work)
+ 		list_del(&event->list);
+ 		spin_unlock(&qmi->event_lock);
+ 
+-		if (test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags))
++		if (test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags)) {
++			kfree(event);
+ 			return;
++		}
+ 
+ 		switch (event->type) {
+ 		case ATH11K_QMI_EVENT_SERVER_ARRIVE:
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index e1a1df169034b..92c59009a8ac2 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -198,7 +198,7 @@ static void ath11k_copy_regd(struct ieee80211_regdomain *regd_orig,
+ 		       sizeof(struct ieee80211_reg_rule));
+ }
+ 
+-int ath11k_regd_update(struct ath11k *ar, bool init)
++int ath11k_regd_update(struct ath11k *ar)
+ {
+ 	struct ieee80211_regdomain *regd, *regd_copy = NULL;
+ 	int ret, regd_len, pdev_id;
+@@ -209,7 +209,10 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ 
+ 	spin_lock_bh(&ab->base_lock);
+ 
+-	if (init) {
++	/* Prefer the latest regd update over default if it's available */
++	if (ab->new_regd[pdev_id]) {
++		regd = ab->new_regd[pdev_id];
++	} else {
+ 		/* Apply the regd received during init through
+ 		 * WMI_REG_CHAN_LIST_CC event. In case of failure to
+ 		 * receive the regd, initialize with a default world
+@@ -222,8 +225,6 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ 				    "failed to receive default regd during init\n");
+ 			regd = (struct ieee80211_regdomain *)&ath11k_world_regd;
+ 		}
+-	} else {
+-		regd = ab->new_regd[pdev_id];
+ 	}
+ 
+ 	if (!regd) {
+@@ -683,7 +684,7 @@ void ath11k_regd_update_work(struct work_struct *work)
+ 					 regd_update_work);
+ 	int ret;
+ 
+-	ret = ath11k_regd_update(ar, false);
++	ret = ath11k_regd_update(ar);
+ 	if (ret) {
+ 		/* Firmware has already moved to the new regd. We need
+ 		 * to maintain channel consistency across FW, Host driver
+diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
+index 65d56d44796f6..5fb9dc03a74e8 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.h
++++ b/drivers/net/wireless/ath/ath11k/reg.h
+@@ -31,6 +31,6 @@ void ath11k_regd_update_work(struct work_struct *work);
+ struct ieee80211_regdomain *
+ ath11k_reg_build_regd(struct ath11k_base *ab,
+ 		      struct cur_regulatory_info *reg_info, bool intersect);
+-int ath11k_regd_update(struct ath11k *ar, bool init);
++int ath11k_regd_update(struct ath11k *ar);
+ int ath11k_reg_update_chan_list(struct ath11k *ar);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 6c253eae9d069..99c0b81e496bf 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -1339,6 +1339,7 @@ int ath11k_wmi_pdev_bss_chan_info_request(struct ath11k *ar,
+ 				     WMI_TAG_PDEV_BSS_CHAN_INFO_REQUEST) |
+ 			  FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
+ 	cmd->req_type = type;
++	cmd->pdev_id = ar->pdev->pdev_id;
+ 
+ 	ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
+ 		   "WMI bss chan info req type %d\n", type);
+@@ -5792,6 +5793,17 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 
+ 	pdev_idx = reg_info->phy_id;
+ 
++	/* Avoid default reg rule updates sent during FW recovery if
++	 * a default regd is already available
++	 */
++	spin_lock(&ab->base_lock);
++	if (test_bit(ATH11K_FLAG_RECOVERY, &ab->dev_flags) &&
++	    ab->default_regd[pdev_idx]) {
++		spin_unlock(&ab->base_lock);
++		goto mem_free;
++	}
++	spin_unlock(&ab->base_lock);
++
+ 	if (pdev_idx >= ab->num_radios) {
+ 		/* Process the event for phy0 only if single_pdev_only
+ 		 * is true. If pdev_idx is valid but not 0, discard the
+@@ -5829,10 +5841,10 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 	}
+ 
+ 	spin_lock(&ab->base_lock);
+-	if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags)) {
+-		/* Once mac is registered, ar is valid and all CC events from
+-		 * fw is considered to be received due to user requests
+-		 * currently.
++	if (ab->default_regd[pdev_idx]) {
++		/* The initial rules from FW after WMI Init is to build
++		 * the default regd. From then on, any rules updated for
++		 * the pdev could be due to user reg changes.
+ 		 * Free previously built regd before assigning the newly
+ 		 * generated regd to ar. NULL pointer handling will be
+ 		 * taken care by kfree itself.
+@@ -5842,13 +5854,9 @@ static int ath11k_reg_chan_list_event(struct ath11k_base *ab, struct sk_buff *sk
+ 		ab->new_regd[pdev_idx] = regd;
+ 		ieee80211_queue_work(ar->hw, &ar->regd_update_work);
+ 	} else {
+-		/* Multiple events for the same *ar is not expected. But we
+-		 * can still clear any previously stored default_regd if we
+-		 * are receiving this event for the same radio by mistake.
+-		 * NULL pointer handling will be taken care by kfree itself.
++		/* This regd would be applied during mac registration and is
++		 * held constant throughout for regd intersection purposes
++		 */
+-		kfree(ab->default_regd[pdev_idx]);
+-		/* This regd would be applied during mac registration */
+ 		ab->default_regd[pdev_idx] = regd;
+ 	}
+ 	ab->dfs_region = reg_info->dfs_region;
+@@ -6119,8 +6127,10 @@ static void ath11k_mgmt_rx_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 	if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
+-	if (rx_ev.chan_freq >= ATH11K_MIN_6G_FREQ) {
++	if (rx_ev.chan_freq >= ATH11K_MIN_6G_FREQ &&
++	    rx_ev.chan_freq <= ATH11K_MAX_6G_FREQ) {
+ 		status->band = NL80211_BAND_6GHZ;
++		status->freq = rx_ev.chan_freq;
+ 	} else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH11K_MAX_5G_CHAN) {
+@@ -6141,8 +6151,10 @@ static void ath11k_mgmt_rx_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 
+ 	sband = &ar->mac.sbands[status->band];
+ 
+-	status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
+-						      status->band);
++	if (status->band != NL80211_BAND_6GHZ)
++		status->freq = ieee80211_channel_to_frequency(rx_ev.channel,
++							      status->band);
++
+ 	status->signal = rx_ev.snr + ATH11K_DEFAULT_NOISE_FLOOR;
+ 	status->rate_idx = ath11k_mac_bitrate_to_idx(sband, rx_ev.rate / 100);
+ 
+@@ -6301,6 +6313,8 @@ static void ath11k_scan_event(struct ath11k_base *ab, struct sk_buff *skb)
+ 		ath11k_wmi_event_scan_start_failed(ar);
+ 		break;
+ 	case WMI_SCAN_EVENT_DEQUEUED:
++		__ath11k_mac_scan_finish(ar);
++		break;
+ 	case WMI_SCAN_EVENT_PREEMPTED:
+ 	case WMI_SCAN_EVENT_RESTARTED:
+ 	case WMI_SCAN_EVENT_FOREIGN_CHAN_EXIT:
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index d35c47e0b19d4..0b7d337b36930 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -2960,6 +2960,7 @@ struct wmi_pdev_bss_chan_info_req_cmd {
+ 	u32 tlv_header;
+ 	/* ref wmi_bss_chan_info_req_type */
+ 	u32 req_type;
++	u32 pdev_id;
+ } __packed;
+ 
+ struct wmi_ap_ps_peer_cmd {
+@@ -4056,7 +4057,6 @@ struct wmi_vdev_stopped_event {
+ } __packed;
+ 
+ struct wmi_pdev_bss_chan_info_event {
+-	u32 pdev_id;
+ 	u32 freq;	/* Units in MHz */
+ 	u32 noise_floor;	/* units are dBm */
+ 	/* rx clear - how often the channel was unused */
+@@ -4074,6 +4074,7 @@ struct wmi_pdev_bss_chan_info_event {
+ 	/*rx_cycle cnt for my bss in 64bits format */
+ 	u32 rx_bss_cycle_count_low;
+ 	u32 rx_bss_cycle_count_high;
++	u32 pdev_id;
+ } __packed;
+ 
+ #define WMI_VDEV_INSTALL_KEY_COMPL_STATUS_SUCCESS 0
+diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/ath/ath6kl/usb.c
+index 5372e948e761d..aba70f35e574b 100644
+--- a/drivers/net/wireless/ath/ath6kl/usb.c
++++ b/drivers/net/wireless/ath/ath6kl/usb.c
+@@ -340,6 +340,11 @@ static int ath6kl_usb_setup_pipe_resources(struct ath6kl_usb *ar_usb)
+ 				   le16_to_cpu(endpoint->wMaxPacketSize),
+ 				   endpoint->bInterval);
+ 		}
++
++		/* Ignore broken descriptors. */
++		if (usb_endpoint_maxp(endpoint) == 0)
++			continue;
++
+ 		urbcount = 0;
+ 
+ 		pipe_num =
+@@ -907,7 +912,7 @@ static int ath6kl_usb_submit_ctrl_in(struct ath6kl_usb *ar_usb,
+ 				 req,
+ 				 USB_DIR_IN | USB_TYPE_VENDOR |
+ 				 USB_RECIP_DEVICE, value, index, buf,
+-				 size, 2 * HZ);
++				 size, 2000);
+ 
+ 	if (ret < 0) {
+ 		ath6kl_warn("Failed to read usb control message: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 139831539da37..98090e40e1cf4 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -533,8 +533,10 @@ irqreturn_t ath_isr(int irq, void *dev)
+ 	ath9k_debug_sync_cause(sc, sync_cause);
+ 	status &= ah->imask;	/* discard unasked-for bits */
+ 
+-	if (test_bit(ATH_OP_HW_RESET, &common->op_flags))
++	if (test_bit(ATH_OP_HW_RESET, &common->op_flags)) {
++		ath9k_hw_kill_interrupts(sc->sc_ah);
+ 		return IRQ_HANDLED;
++	}
+ 
+ 	/*
+ 	 * If there are no status bits set, then this interrupt was not
+diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c
+index 80390495ea250..75cb53a3ec15e 100644
+--- a/drivers/net/wireless/ath/dfs_pattern_detector.c
++++ b/drivers/net/wireless/ath/dfs_pattern_detector.c
+@@ -183,10 +183,12 @@ static void channel_detector_exit(struct dfs_pattern_detector *dpd,
+ 	if (cd == NULL)
+ 		return;
+ 	list_del(&cd->head);
+-	for (i = 0; i < dpd->num_radar_types; i++) {
+-		struct pri_detector *de = cd->detectors[i];
+-		if (de != NULL)
+-			de->exit(de);
++	if (cd->detectors) {
++		for (i = 0; i < dpd->num_radar_types; i++) {
++			struct pri_detector *de = cd->detectors[i];
++			if (de != NULL)
++				de->exit(de);
++		}
+ 	}
+ 	kfree(cd->detectors);
+ 	kfree(cd);
+diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
+index 8e1dbfda65386..aff04ef662663 100644
+--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
++++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
+@@ -403,8 +403,21 @@ static void reap_tx_dxes(struct wcn36xx *wcn, struct wcn36xx_dxe_ch *ch)
+ 			dma_unmap_single(wcn->dev, ctl->desc->src_addr_l,
+ 					 ctl->skb->len, DMA_TO_DEVICE);
+ 			info = IEEE80211_SKB_CB(ctl->skb);
+-			if (!(info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)) {
+-				/* Keep frame until TX status comes */
++			if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++				if (info->flags & IEEE80211_TX_CTL_NO_ACK) {
++					info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
++					ieee80211_tx_status_irqsafe(wcn->hw, ctl->skb);
++				} else {
++					/* Wait for the TX ack indication or timeout... */
++					spin_lock(&wcn->dxe_lock);
++					if (WARN_ON(wcn->tx_ack_skb))
++						ieee80211_free_txskb(wcn->hw, wcn->tx_ack_skb);
++					wcn->tx_ack_skb = ctl->skb; /* Tracking ref */
++					mod_timer(&wcn->tx_ack_timer, jiffies + HZ / 10);
++					spin_unlock(&wcn->dxe_lock);
++				}
++				/* do not free, ownership transferred to mac80211 status cb */
++			} else {
+ 				ieee80211_free_txskb(wcn->hw, ctl->skb);
+ 			}
+ 
+@@ -426,7 +439,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ {
+ 	struct wcn36xx *wcn = (struct wcn36xx *)dev;
+ 	int int_src, int_reason;
+-	bool transmitted = false;
+ 
+ 	wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_INT_SRC_RAW_REG, &int_src);
+ 
+@@ -466,7 +478,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 		if (int_reason & (WCN36XX_CH_STAT_INT_DONE_MASK |
+ 				  WCN36XX_CH_STAT_INT_ED_MASK)) {
+ 			reap_tx_dxes(wcn, &wcn->dxe_tx_h_ch);
+-			transmitted = true;
+ 		}
+ 	}
+ 
+@@ -479,7 +490,6 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 					   WCN36XX_DXE_0_INT_CLR,
+ 					   WCN36XX_INT_MASK_CHAN_TX_L);
+ 
+-
+ 		if (int_reason & WCN36XX_CH_STAT_INT_ERR_MASK ) {
+ 			wcn36xx_dxe_write_register(wcn,
+ 						   WCN36XX_DXE_0_INT_ERR_CLR,
+@@ -507,25 +517,8 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
+ 		if (int_reason & (WCN36XX_CH_STAT_INT_DONE_MASK |
+ 				  WCN36XX_CH_STAT_INT_ED_MASK)) {
+ 			reap_tx_dxes(wcn, &wcn->dxe_tx_l_ch);
+-			transmitted = true;
+-		}
+-	}
+-
+-	spin_lock(&wcn->dxe_lock);
+-	if (wcn->tx_ack_skb && transmitted) {
+-		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(wcn->tx_ack_skb);
+-
+-		/* TX complete, no need to wait for 802.11 ack indication */
+-		if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS &&
+-		    info->flags & IEEE80211_TX_CTL_NO_ACK) {
+-			info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
+-			del_timer(&wcn->tx_ack_timer);
+-			ieee80211_tx_status_irqsafe(wcn->hw, wcn->tx_ack_skb);
+-			wcn->tx_ack_skb = NULL;
+-			ieee80211_wake_queues(wcn->hw);
+ 		}
+ 	}
+-	spin_unlock(&wcn->dxe_lock);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -613,6 +606,10 @@ static int wcn36xx_rx_handle_packets(struct wcn36xx *wcn,
+ 	dxe = ctl->desc;
+ 
+ 	while (!(READ_ONCE(dxe->ctrl) & WCN36xx_DXE_CTRL_VLD)) {
++		/* do not read until we own DMA descriptor */
++		dma_rmb();
++
++		/* read/modify DMA descriptor */
+ 		skb = ctl->skb;
+ 		dma_addr = dxe->dst_addr_l;
+ 		ret = wcn36xx_dxe_fill_skb(wcn->dev, ctl, GFP_ATOMIC);
+@@ -623,9 +620,15 @@ static int wcn36xx_rx_handle_packets(struct wcn36xx *wcn,
+ 			dma_unmap_single(wcn->dev, dma_addr, WCN36XX_PKT_SIZE,
+ 					DMA_FROM_DEVICE);
+ 			wcn36xx_rx_skb(wcn, skb);
+-		} /* else keep old skb not submitted and use it for rx DMA */
++		}
++		/* else keep old skb not submitted and reuse it for rx DMA
++		 * (dropping the packet that it contained)
++		 */
+ 
++		/* flush descriptor changes before re-marking as valid */
++		dma_wmb();
+ 		dxe->ctrl = ctrl;
++
+ 		ctl = ctl->next;
+ 		dxe = ctl->desc;
+ 	}
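
The barrier additions in the dxe RX loop above follow the standard DMA descriptor-ownership protocol: once the VLD check shows the CPU owns the descriptor, dma_rmb() keeps the payload reads from being reordered before that check, and dma_wmb() makes every descriptor update visible before the ctrl write hands ownership back to the device. A condensed sketch of the protocol (descriptor layout and helpers are hypothetical):

#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

/* Hypothetical ring descriptor; VLD set means device-owned. */
struct example_desc {
	u32 ctrl;		/* bit 0: VLD (valid for device) */
	u32 addr;		/* buffer address */
};
#define EXAMPLE_CTRL_VLD	0x1

void example_consume(u32 addr);	/* hypothetical packet handler */
u32 example_refill(void);	/* hypothetical buffer allocator */

static void example_reap(struct example_desc *d, int n)
{
	int i;

	for (i = 0; i < n && !(READ_ONCE(d[i].ctrl) & EXAMPLE_CTRL_VLD); i++) {
		/* do not read the payload until the ownership check
		 * above is complete
		 */
		dma_rmb();

		example_consume(d[i].addr);
		d[i].addr = example_refill();

		/* flush descriptor writes before giving it back */
		dma_wmb();
		d[i].ctrl |= EXAMPLE_CTRL_VLD;
	}
}
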
+diff --git a/drivers/net/wireless/ath/wcn36xx/hal.h b/drivers/net/wireless/ath/wcn36xx/hal.h
+index 455143c4164ee..de3bca043c2b3 100644
+--- a/drivers/net/wireless/ath/wcn36xx/hal.h
++++ b/drivers/net/wireless/ath/wcn36xx/hal.h
+@@ -359,6 +359,8 @@ enum wcn36xx_hal_host_msg_type {
+ 	WCN36XX_HAL_START_SCAN_OFFLOAD_RSP = 205,
+ 	WCN36XX_HAL_STOP_SCAN_OFFLOAD_REQ = 206,
+ 	WCN36XX_HAL_STOP_SCAN_OFFLOAD_RSP = 207,
++	WCN36XX_HAL_UPDATE_CHANNEL_LIST_REQ = 208,
++	WCN36XX_HAL_UPDATE_CHANNEL_LIST_RSP = 209,
+ 	WCN36XX_HAL_SCAN_OFFLOAD_IND = 210,
+ 
+ 	WCN36XX_HAL_AVOID_FREQ_RANGE_IND = 233,
+@@ -1353,6 +1355,36 @@ struct wcn36xx_hal_stop_scan_offload_rsp_msg {
+ 	u32 status;
+ } __packed;
+ 
++#define WCN36XX_HAL_CHAN_REG1_MIN_PWR_MASK  0x000000ff
++#define WCN36XX_HAL_CHAN_REG1_MAX_PWR_MASK  0x0000ff00
++#define WCN36XX_HAL_CHAN_REG1_REG_PWR_MASK  0x00ff0000
++#define WCN36XX_HAL_CHAN_REG1_CLASS_ID_MASK 0xff000000
++#define WCN36XX_HAL_CHAN_REG2_ANT_GAIN_MASK 0x000000ff
++#define WCN36XX_HAL_CHAN_INFO_FLAG_PASSIVE  BIT(7)
++#define WCN36XX_HAL_CHAN_INFO_FLAG_DFS      BIT(10)
++#define WCN36XX_HAL_CHAN_INFO_FLAG_HT       BIT(11)
++#define WCN36XX_HAL_CHAN_INFO_FLAG_VHT      BIT(12)
++#define WCN36XX_HAL_CHAN_INFO_PHY_11A       0
++#define WCN36XX_HAL_CHAN_INFO_PHY_11BG      1
++#define WCN36XX_HAL_DEFAULT_ANT_GAIN        6
++#define WCN36XX_HAL_DEFAULT_MIN_POWER       6
++
++struct wcn36xx_hal_channel_param {
++	u32 mhz;
++	u32 band_center_freq1;
++	u32 band_center_freq2;
++	u32 channel_info;
++	u32 reg_info_1;
++	u32 reg_info_2;
++} __packed;
++
++struct wcn36xx_hal_update_channel_list_req_msg {
++	struct wcn36xx_hal_msg_header header;
++
++	u8 num_channel;
++	struct wcn36xx_hal_channel_param channels[80];
++} __packed;
++
+ enum wcn36xx_hal_rate_index {
+ 	HW_RATE_INDEX_1MBPS	= 0x82,
+ 	HW_RATE_INDEX_2MBPS	= 0x84,
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index c7592143f2eb9..dd1df4334cc51 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -135,7 +135,9 @@ static struct ieee80211_supported_band wcn_band_2ghz = {
+ 		.cap =	IEEE80211_HT_CAP_GRN_FLD |
+ 			IEEE80211_HT_CAP_SGI_20 |
+ 			IEEE80211_HT_CAP_DSSSCCK40 |
+-			IEEE80211_HT_CAP_LSIG_TXOP_PROT,
++			IEEE80211_HT_CAP_LSIG_TXOP_PROT |
++			IEEE80211_HT_CAP_SGI_40 |
++			IEEE80211_HT_CAP_SUP_WIDTH_20_40,
+ 		.ht_supported = true,
+ 		.ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K,
+ 		.ampdu_density = IEEE80211_HT_MPDU_DENSITY_16,
+@@ -569,12 +571,14 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 		if (IEEE80211_KEY_FLAG_PAIRWISE & key_conf->flags) {
+ 			sta_priv->is_data_encrypted = true;
+ 			/* Reconfigure bss with encrypt_type */
+-			if (NL80211_IFTYPE_STATION == vif->type)
++			if (NL80211_IFTYPE_STATION == vif->type) {
+ 				wcn36xx_smd_config_bss(wcn,
+ 						       vif,
+ 						       sta,
+ 						       sta->addr,
+ 						       true);
++				wcn36xx_smd_config_sta(wcn, vif, sta);
++			}
+ 
+ 			wcn36xx_smd_set_stakey(wcn,
+ 				vif_priv->encrypt_type,
+@@ -667,6 +671,7 @@ static int wcn36xx_hw_scan(struct ieee80211_hw *hw,
+ 
+ 	mutex_unlock(&wcn->scan_lock);
+ 
++	wcn36xx_smd_update_channel_list(wcn, &hw_req->req);
+ 	return wcn36xx_smd_start_hw_scan(wcn, vif, &hw_req->req);
+ }
+ 
+@@ -1113,6 +1118,13 @@ static int wcn36xx_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wow)
+ 			goto out;
+ 		ret = wcn36xx_smd_wlan_host_suspend_ind(wcn);
+ 	}
++
++	/* Disable IRQ, we don't want to handle any packet before mac80211 is
++	 * resumed and ready to receive packets.
++	 */
++	disable_irq(wcn->tx_irq);
++	disable_irq(wcn->rx_irq);
++
+ out:
+ 	mutex_unlock(&wcn->conf_mutex);
+ 	return ret;
+@@ -1135,6 +1147,10 @@ static int wcn36xx_resume(struct ieee80211_hw *hw)
+ 		wcn36xx_smd_ipv6_ns_offload(wcn, vif, false);
+ 		wcn36xx_smd_arp_offload(wcn, vif, false);
+ 	}
++
++	enable_irq(wcn->tx_irq);
++	enable_irq(wcn->rx_irq);
++
+ 	mutex_unlock(&wcn->conf_mutex);
+ 
+ 	return 0;
+@@ -1328,7 +1344,6 @@ static int wcn36xx_init_ieee80211(struct wcn36xx *wcn)
+ 	ieee80211_hw_set(wcn->hw, HAS_RATE_CONTROL);
+ 	ieee80211_hw_set(wcn->hw, SINGLE_SCAN_ON_ALL_BANDS);
+ 	ieee80211_hw_set(wcn->hw, REPORTS_TX_ACK_STATUS);
+-	ieee80211_hw_set(wcn->hw, CONNECTION_MONITOR);
+ 
+ 	wcn->hw->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+ 		BIT(NL80211_IFTYPE_AP) |
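
The suspend/resume hunks above add a disable_irq()/enable_irq() pair so no RX or TX interrupt is serviced while mac80211 is suspended. disable_irq() is synchronous (it waits for a running handler to finish) and the disable count nests, so each suspend-path disable must be matched one-for-one on resume. Sketched on a hypothetical context struct:

#include <linux/interrupt.h>

struct example_wcn {
	int tx_irq;
	int rx_irq;
};

static int example_suspend(struct example_wcn *wcn)
{
	/* Returns only once any in-flight handler has completed. */
	disable_irq(wcn->tx_irq);
	disable_irq(wcn->rx_irq);
	return 0;
}

static int example_resume(struct example_wcn *wcn)
{
	/* Must balance the suspend path exactly; the disable
	 * count is nested, not boolean.
	 */
	enable_irq(wcn->tx_irq);
	enable_irq(wcn->rx_irq);
	return 0;
}
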
+diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
+index 0e3be17d8ceaf..3624a7a2c968e 100644
+--- a/drivers/net/wireless/ath/wcn36xx/smd.c
++++ b/drivers/net/wireless/ath/wcn36xx/smd.c
+@@ -16,6 +16,7 @@
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
++#include <linux/bitfield.h>
+ #include <linux/etherdevice.h>
+ #include <linux/firmware.h>
+ #include <linux/bitops.h>
+@@ -927,6 +928,86 @@ out:
+ 	return ret;
+ }
+ 
++int wcn36xx_smd_update_channel_list(struct wcn36xx *wcn, struct cfg80211_scan_request *req)
++{
++	struct wcn36xx_hal_update_channel_list_req_msg *msg_body;
++	int ret, i;
++
++	msg_body = kzalloc(sizeof(*msg_body), GFP_KERNEL);
++	if (!msg_body)
++		return -ENOMEM;
++
++	INIT_HAL_MSG((*msg_body), WCN36XX_HAL_UPDATE_CHANNEL_LIST_REQ);
++
++	msg_body->num_channel = min_t(u8, req->n_channels, ARRAY_SIZE(msg_body->channels));
++	for (i = 0; i < msg_body->num_channel; i++) {
++		struct wcn36xx_hal_channel_param *param = &msg_body->channels[i];
++		u32 min_power = WCN36XX_HAL_DEFAULT_MIN_POWER;
++		u32 ant_gain = WCN36XX_HAL_DEFAULT_ANT_GAIN;
++
++		param->mhz = req->channels[i]->center_freq;
++		param->band_center_freq1 = req->channels[i]->center_freq;
++		param->band_center_freq2 = 0;
++
++		if (req->channels[i]->flags & IEEE80211_CHAN_NO_IR)
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_FLAG_PASSIVE;
++
++		if (req->channels[i]->flags & IEEE80211_CHAN_RADAR)
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_FLAG_DFS;
++
++		if (req->channels[i]->band == NL80211_BAND_5GHZ) {
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_FLAG_HT;
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_FLAG_VHT;
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_PHY_11A;
++		} else {
++			param->channel_info |= WCN36XX_HAL_CHAN_INFO_PHY_11BG;
++		}
++
++		if (min_power > req->channels[i]->max_power)
++			min_power = req->channels[i]->max_power;
++
++		if (req->channels[i]->max_antenna_gain)
++			ant_gain = req->channels[i]->max_antenna_gain;
++
++		u32p_replace_bits(&param->reg_info_1, min_power,
++				  WCN36XX_HAL_CHAN_REG1_MIN_PWR_MASK);
++		u32p_replace_bits(&param->reg_info_1, req->channels[i]->max_power,
++				  WCN36XX_HAL_CHAN_REG1_MAX_PWR_MASK);
++		u32p_replace_bits(&param->reg_info_1, req->channels[i]->max_reg_power,
++				  WCN36XX_HAL_CHAN_REG1_REG_PWR_MASK);
++		u32p_replace_bits(&param->reg_info_1, 0,
++				  WCN36XX_HAL_CHAN_REG1_CLASS_ID_MASK);
++		u32p_replace_bits(&param->reg_info_2, ant_gain,
++				  WCN36XX_HAL_CHAN_REG2_ANT_GAIN_MASK);
++
++		wcn36xx_dbg(WCN36XX_DBG_HAL,
++			    "%s: freq=%u, channel_info=%08x, reg_info1=%08x, reg_info2=%08x\n",
++			    __func__, param->mhz, param->channel_info, param->reg_info_1,
++			    param->reg_info_2);
++	}
++
++	mutex_lock(&wcn->hal_mutex);
++
++	PREPARE_HAL_BUF(wcn->hal_buf, (*msg_body));
++
++	ret = wcn36xx_smd_send_and_wait(wcn, msg_body->header.len);
++	if (ret) {
++		wcn36xx_err("Sending hal_update_channel_list failed\n");
++		goto out;
++	}
++
++	ret = wcn36xx_smd_rsp_status_check(wcn->hal_buf, wcn->hal_rsp_len);
++	if (ret) {
++		wcn36xx_err("hal_update_channel_list response failed err=%d\n", ret);
++		goto out;
++	}
++
++out:
++	kfree(msg_body);
++	mutex_unlock(&wcn->hal_mutex);
++	return ret;
++}
++
+ static int wcn36xx_smd_switch_channel_rsp(void *buf, size_t len)
+ {
+ 	struct wcn36xx_hal_switch_channel_rsp_msg *rsp;
+@@ -2623,30 +2704,52 @@ static int wcn36xx_smd_delete_sta_context_ind(struct wcn36xx *wcn,
+ 					      size_t len)
+ {
+ 	struct wcn36xx_hal_delete_sta_context_ind_msg *rsp = buf;
+-	struct wcn36xx_vif *tmp;
++	struct wcn36xx_vif *vif_priv;
++	struct ieee80211_vif *vif;
++	struct ieee80211_bss_conf *bss_conf;
+ 	struct ieee80211_sta *sta;
++	bool found = false;
+ 
+ 	if (len != sizeof(*rsp)) {
+ 		wcn36xx_warn("Corrupted delete sta indication\n");
+ 		return -EIO;
+ 	}
+ 
+-	wcn36xx_dbg(WCN36XX_DBG_HAL, "delete station indication %pM index %d\n",
+-		    rsp->addr2, rsp->sta_id);
++	wcn36xx_dbg(WCN36XX_DBG_HAL,
++		    "delete station indication %pM index %d reason %d\n",
++		    rsp->addr2, rsp->sta_id, rsp->reason_code);
+ 
+-	list_for_each_entry(tmp, &wcn->vif_list, list) {
++	list_for_each_entry(vif_priv, &wcn->vif_list, list) {
+ 		rcu_read_lock();
+-		sta = ieee80211_find_sta(wcn36xx_priv_to_vif(tmp), rsp->addr2);
+-		if (sta)
+-			ieee80211_report_low_ack(sta, 0);
++		vif = wcn36xx_priv_to_vif(vif_priv);
++
++		if (vif->type == NL80211_IFTYPE_STATION) {
++			/* We could call ieee80211_find_sta too, but checking
++			 * bss_conf is clearer.
++			 */
++			bss_conf = &vif->bss_conf;
++			if (vif_priv->sta_assoc &&
++			    !memcmp(bss_conf->bssid, rsp->addr2, ETH_ALEN)) {
++				found = true;
++				wcn36xx_dbg(WCN36XX_DBG_HAL,
++					    "connection loss bss_index %d\n",
++					    vif_priv->bss_index);
++				ieee80211_connection_loss(vif);
++			}
++		} else {
++			sta = ieee80211_find_sta(vif, rsp->addr2);
++			if (sta) {
++				found = true;
++				ieee80211_report_low_ack(sta, 0);
++			}
++		}
++
+ 		rcu_read_unlock();
+-		if (sta)
++		if (found)
+ 			return 0;
+ 	}
+ 
+-	wcn36xx_warn("STA with addr %pM and index %d not found\n",
+-		     rsp->addr2,
+-		     rsp->sta_id);
++	wcn36xx_warn("BSS or STA with addr %pM not found\n", rsp->addr2);
+ 	return -ENOENT;
+ }
+ 
+@@ -3060,6 +3163,7 @@ int wcn36xx_smd_rsp_process(struct rpmsg_device *rpdev,
+ 	case WCN36XX_HAL_GTK_OFFLOAD_RSP:
+ 	case WCN36XX_HAL_GTK_OFFLOAD_GETINFO_RSP:
+ 	case WCN36XX_HAL_HOST_RESUME_RSP:
++	case WCN36XX_HAL_UPDATE_CHANNEL_LIST_RSP:
+ 		memcpy(wcn->hal_buf, buf, len);
+ 		wcn->hal_rsp_len = len;
+ 		complete(&wcn->hal_rsp_compl);
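
The new wcn36xx_smd_update_channel_list() above packs the per-channel power values into the reg_info words with u32p_replace_bits() from <linux/bitfield.h> (hence the new include at the top of the smd.c diff): each call shifts a value into the field described by a constant mask and merges it without disturbing the other fields. A small demonstration using illustrative masks:

#include <linux/bitfield.h>
#include <linux/types.h>

#define EXAMPLE_MIN_PWR_MASK	0x000000ff
#define EXAMPLE_MAX_PWR_MASK	0x0000ff00

static u32 example_pack_reg1(u8 min_pwr, u8 max_pwr)
{
	u32 reg = 0;

	/* Each call places the value at the position given by its
	 * mask and leaves the remaining bits untouched.
	 */
	u32p_replace_bits(&reg, min_pwr, EXAMPLE_MIN_PWR_MASK);
	u32p_replace_bits(&reg, max_pwr, EXAMPLE_MAX_PWR_MASK);

	return reg;	/* e.g. min=6, max=30 -> 0x00001e06 */
}
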
+diff --git a/drivers/net/wireless/ath/wcn36xx/smd.h b/drivers/net/wireless/ath/wcn36xx/smd.h
+index d8bded03945d4..d3774568d885e 100644
+--- a/drivers/net/wireless/ath/wcn36xx/smd.h
++++ b/drivers/net/wireless/ath/wcn36xx/smd.h
+@@ -70,6 +70,7 @@ int wcn36xx_smd_update_scan_params(struct wcn36xx *wcn, u8 *channels, size_t cha
+ int wcn36xx_smd_start_hw_scan(struct wcn36xx *wcn, struct ieee80211_vif *vif,
+ 			      struct cfg80211_scan_request *req);
+ int wcn36xx_smd_stop_hw_scan(struct wcn36xx *wcn);
++int wcn36xx_smd_update_channel_list(struct wcn36xx *wcn, struct cfg80211_scan_request *req);
+ int wcn36xx_smd_add_sta_self(struct wcn36xx *wcn, struct ieee80211_vif *vif);
+ int wcn36xx_smd_delete_sta_self(struct wcn36xx *wcn, u8 *addr);
+ int wcn36xx_smd_delete_sta(struct wcn36xx *wcn, u8 sta_index);
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index cab196bb38cd4..bbd7194c82e27 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -31,6 +31,13 @@ struct wcn36xx_rate {
+ 	enum rate_info_bw bw;
+ };
+ 
++/* Buffer descriptor rx_ch field is limited to 5 bits (4+1), so a mapping
++ * is used for 11a channels.
++ */
++static const u8 ab_rx_ch_map[] = { 36, 40, 44, 48, 52, 56, 60, 64, 100, 104,
++				   108, 112, 116, 120, 124, 128, 132, 136, 140,
++				   149, 153, 157, 161, 165, 144 };
++
+ static const struct wcn36xx_rate wcn36xx_rate_table[] = {
+ 	/* 11b rates */
+ 	{  10, 0, RX_ENC_LEGACY, 0, RATE_INFO_BW_20 },
+@@ -291,6 +298,22 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 	    ieee80211_is_probe_resp(hdr->frame_control))
+ 		status.boottime_ns = ktime_get_boottime_ns();
+ 
++	if (bd->scan_learn) {
++		/* If packet originates from hardware scanning, extract the
++		 * band/channel from bd descriptor.
++		 */
++		u8 hwch = (bd->reserved0 << 4) + bd->rx_ch;
++
++		if (bd->rf_band != 1 && hwch <= sizeof(ab_rx_ch_map) && hwch >= 1) {
++			status.band = NL80211_BAND_5GHZ;
++			status.freq = ieee80211_channel_to_frequency(ab_rx_ch_map[hwch - 1],
++								     status.band);
++		} else {
++			status.band = NL80211_BAND_2GHZ;
++			status.freq = ieee80211_channel_to_frequency(hwch, status.band);
++		}
++	}
++
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+@@ -321,8 +344,6 @@ static void wcn36xx_set_tx_pdu(struct wcn36xx_tx_bd *bd,
+ 		bd->pdu.mpdu_header_off;
+ 	bd->pdu.mpdu_len = len;
+ 	bd->pdu.tid = tid;
+-	/* Use seq number generated by mac80211 */
+-	bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_HOST;
+ }
+ 
+ static inline struct wcn36xx_vif *get_vif_by_addr(struct wcn36xx *wcn,
+@@ -419,6 +440,9 @@ static void wcn36xx_set_tx_data(struct wcn36xx_tx_bd *bd,
+ 		tid = ieee80211_get_tid(hdr);
+ 		/* TID->QID is one-to-one mapping */
+ 		bd->queue_id = tid;
++		bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_QOS;
++	} else {
++		bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_NON_QOS;
+ 	}
+ 
+ 	if (info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT ||
+@@ -429,6 +453,9 @@ static void wcn36xx_set_tx_data(struct wcn36xx_tx_bd *bd,
+ 	if (ieee80211_is_any_nullfunc(hdr->frame_control)) {
+ 		/* Don't use a regular queue for null packet (no ampdu) */
+ 		bd->queue_id = WCN36XX_TX_U_WQ_ID;
++		bd->bd_rate = WCN36XX_BD_RATE_CTRL;
++		if (ieee80211_is_qos_nullfunc(hdr->frame_control))
++			bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_HOST;
+ 	}
+ 
+ 	if (bcast) {
+@@ -488,6 +515,8 @@ static void wcn36xx_set_tx_mgmt(struct wcn36xx_tx_bd *bd,
+ 		bd->queue_id = WCN36XX_TX_U_WQ_ID;
+ 	*vif_priv = __vif_priv;
+ 
++	bd->pdu.bd_ssn = WCN36XX_TXBD_SSN_FILL_DPU_NON_QOS;
++
+ 	wcn36xx_set_tx_pdu(bd,
+ 			   ieee80211_is_data_qos(hdr->frame_control) ?
+ 			   sizeof(struct ieee80211_qos_hdr) :
+@@ -502,10 +531,11 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct wcn36xx_vif *vif_priv = NULL;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+-	unsigned long flags;
+ 	bool is_low = ieee80211_is_data(hdr->frame_control);
+ 	bool bcast = is_broadcast_ether_addr(hdr->addr1) ||
+ 		is_multicast_ether_addr(hdr->addr1);
++	bool ack_ind = (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) &&
++					!(info->flags & IEEE80211_TX_CTL_NO_ACK);
+ 	struct wcn36xx_tx_bd bd;
+ 	int ret;
+ 
+@@ -521,30 +551,16 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 
+ 	bd.dpu_rf = WCN36XX_BMU_WQ_TX;
+ 
+-	if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++	if (unlikely(ack_ind)) {
+ 		wcn36xx_dbg(WCN36XX_DBG_DXE, "TX_ACK status requested\n");
+ 
+-		spin_lock_irqsave(&wcn->dxe_lock, flags);
+-		if (wcn->tx_ack_skb) {
+-			spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-			wcn36xx_warn("tx_ack_skb already set\n");
+-			return -EINVAL;
+-		}
+-
+-		wcn->tx_ack_skb = skb;
+-		spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-
+ 		/* Only one at a time is supported by fw. Stop the TX queues
+ 		 * until the ack status gets back.
+ 		 */
+ 		ieee80211_stop_queues(wcn->hw);
+ 
+-		/* TX watchdog if no TX irq or ack indication received  */
+-		mod_timer(&wcn->tx_ack_timer, jiffies + HZ / 10);
+-
+ 		/* Request ack indication from the firmware */
+-		if (!(info->flags & IEEE80211_TX_CTL_NO_ACK))
+-			bd.tx_comp = 1;
++		bd.tx_comp = 1;
+ 	}
+ 
+ 	/* Data frames served first*/
+@@ -558,14 +574,8 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
+ 	bd.tx_bd_sign = 0xbdbdbdbd;
+ 
+ 	ret = wcn36xx_dxe_tx_frame(wcn, vif_priv, &bd, skb, is_low);
+-	if (ret && (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)) {
+-		/* If the skb has not been transmitted,
+-		 * don't keep a reference to it.
+-		 */
+-		spin_lock_irqsave(&wcn->dxe_lock, flags);
+-		wcn->tx_ack_skb = NULL;
+-		spin_unlock_irqrestore(&wcn->dxe_lock, flags);
+-
++	if (unlikely(ret && ack_ind)) {
++		/* If the skb has not been transmitted, resume TX queue */
+ 		ieee80211_wake_queues(wcn->hw);
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.h b/drivers/net/wireless/ath/wcn36xx/txrx.h
+index 032216e82b2be..b54311ffde9c5 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.h
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.h
+@@ -110,7 +110,8 @@ struct wcn36xx_rx_bd {
+ 	/* 0x44 */
+ 	u32	exp_seq_num:12;
+ 	u32	cur_seq_num:12;
+-	u32	fr_type_subtype:8;
++	u32	rf_band:2;
++	u32	fr_type_subtype:6;
+ 
+ 	/* 0x48 */
+ 	u32	msdu_size:16;
+diff --git a/drivers/net/wireless/broadcom/b43/phy_g.c b/drivers/net/wireless/broadcom/b43/phy_g.c
+index d5a1a5c582366..ac72ca39e409b 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_g.c
++++ b/drivers/net/wireless/broadcom/b43/phy_g.c
+@@ -2297,7 +2297,7 @@ static u8 b43_gphy_aci_scan(struct b43_wldev *dev)
+ 	b43_phy_mask(dev, B43_PHY_G_CRS, 0x7FFF);
+ 	b43_set_all_gains(dev, 3, 8, 1);
+ 
+-	start = (channel - 5 > 0) ? channel - 5 : 1;
++	start = (channel > 5) ? channel - 5 : 1;
+ 	end = (channel + 5 < 14) ? channel + 5 : 13;
+ 
+ 	for (i = start; i <= end; i++) {
+diff --git a/drivers/net/wireless/broadcom/b43legacy/radio.c b/drivers/net/wireless/broadcom/b43legacy/radio.c
+index 06891b4f837b9..fdf78c10a05c2 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/radio.c
++++ b/drivers/net/wireless/broadcom/b43legacy/radio.c
+@@ -283,7 +283,7 @@ u8 b43legacy_radio_aci_scan(struct b43legacy_wldev *dev)
+ 			    & 0x7FFF);
+ 	b43legacy_set_all_gains(dev, 3, 8, 1);
+ 
+-	start = (channel - 5 > 0) ? channel - 5 : 1;
++	start = (channel > 5) ? channel - 5 : 1;
+ 	end = (channel + 5 < 14) ? channel + 5 : 13;
+ 
+ 	for (i = start; i <= end; i++) {
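
The b43 and b43legacy hunks make the same fix: the channel local is an unsigned int, so the old lower-bounds test "channel - 5 > 0" wrapped around for channels 1 through 5 and was always true. A self-contained sketch of the failure mode (plain userspace C, not driver code, assuming a 32-bit unsigned int):

#include <stdio.h>

static unsigned int clamp_start(unsigned int channel)
{
	/* safe form used by the patch: compare first, subtract after */
	return (channel > 5) ? channel - 5 : 1;
}

int main(void)
{
	unsigned int channel = 2;

	/* buggy form: channel - 5 wraps, so "channel - 5 > 0" was
	 * always true and start became a huge channel index
	 */
	printf("wrapped: %u\n", channel - 5);		/* 4294967293 */
	printf("clamped: %u\n", clamp_start(channel));	/* 1 */
	return 0;
}
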
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+index 6d5188b78f2de..0af452dca7664 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+@@ -75,6 +75,16 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		},
+ 		.driver_data = (void *)&acepc_t8_data,
+ 	},
++	{
++		/* Cyberbook T116 rugged tablet */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "20170531"),
++		},
++		/* The factory image nvram file is identical to the ACEPC T8 one */
++		.driver_data = (void *)&acepc_t8_data,
++	},
+ 	{
+ 		/* Match for the GPDwin which unfortunately uses somewhat
+ 		 * generic dmi strings, which is why we test for 4 strings.
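
The brcmfmac hunk appends one more entry to a DMI quirk table so the Cyberbook T116 reuses the ACEPC T8 nvram file. A hedged sketch of how such a table is consulted at probe time (the strings and alias below are illustrative, not the driver's):

#include <linux/dmi.h>

static const struct dmi_system_id quirk_table[] = {
	{
		.matches = {
			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "20170531"),
		},
		.driver_data = (void *)"acepc-t8",	/* nvram alias */
	},
	{}						/* terminator */
};

static const char *lookup_nvram_alias(void)
{
	const struct dmi_system_id *match = dmi_first_match(quirk_table);

	return match ? match->driver_data : NULL;
}
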
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 513f9e5387290..24de6e5eb6a4c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -284,16 +284,19 @@ int iwl_pnvm_load(struct iwl_trans *trans,
+ 	/* First attempt to get the PNVM from BIOS */
+ 	package = iwl_uefi_get_pnvm(trans, &len);
+ 	if (!IS_ERR_OR_NULL(package)) {
+-		data = kmemdup(package->data, len, GFP_KERNEL);
++		if (len >= sizeof(*package)) {
++			/* we need only the data */
++			len -= sizeof(*package);
++			data = kmemdup(package->data, len, GFP_KERNEL);
++		} else {
++			data = NULL;
++		}
+ 
+ 		/* free package regardless of whether kmemdup succeeded */
+ 		kfree(package);
+ 
+-		if (data) {
+-			/* we need only the data size */
+-			len -= sizeof(*package);
++		if (data)
+ 			goto parse;
+-		}
+ 	}
+ 
+ 	/* If it's not available, try from the filesystem */
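
The iwlwifi change above is a bounds fix: len must be validated against the header size before it is shrunk and handed to kmemdup(), otherwise a truncated UEFI blob underflows the subtraction (and the old order also duplicated the unshrunk length). The same pattern in isolation, with an illustrative struct rather than iwlwifi's real types:

#include <linux/slab.h>
#include <linux/types.h>

struct blob_hdr {			/* illustrative */
	__le32 magic;
	__le32 version;
	u8 data[];			/* payload follows the header */
};

static u8 *dup_payload(const struct blob_hdr *blob, size_t *len)
{
	/* reject blobs shorter than the header before computing the
	 * payload length; "*len - sizeof(*blob)" would underflow for
	 * a truncated blob and kmemdup() would be asked for a huge size
	 */
	if (*len < sizeof(*blob))
		return NULL;

	*len -= sizeof(*blob);
	return kmemdup(blob->data, *len, GFP_KERNEL);
}
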
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 6a259d867d90e..9ed56c68a506a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2093,7 +2093,6 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 		iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert,
+ 					false, 0);
+ 		ret = 1;
+-		mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
+ 		goto err;
+ 	}
+ 
+@@ -2142,6 +2141,7 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
+ 		}
+ 	}
+ 
++	/* after the successful handshake, we're out of D3 */
+ 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
+ 
+ 	/*
+@@ -2212,6 +2212,9 @@ out:
+ 	 */
+ 	set_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status);
+ 
++	/* regardless of what happened, we're now out of D3 */
++	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
++
+ 	return 1;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+index 4a3d2971a98b7..ec8a223f90e85 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+@@ -405,6 +405,9 @@ bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm,
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (iwlmvm_mod_params.power_scheme != IWL_POWER_SCHEME_CAM)
++		return false;
++
+ 	if (num_of_ant(iwl_mvm_get_valid_rx_ant(mvm)) == 1)
+ 		return false;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index be3ad42813532..1ffd7685deefa 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -931,9 +931,9 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+ 		      iwl_qu_b0_hr1_b0, iwl_ax101_name),
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+-		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
++		      IWL_CFG_MAC_TYPE_QU, SILICON_B_STEP,
+ 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
+-		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
++		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+ 		      iwl_qu_b0_hr_b0, iwl_ax203_name),
+ 
+ 	/* Qu C step */
+@@ -945,7 +945,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP,
+ 		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
+-		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
++		      IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+ 		      iwl_qu_c0_hr_b0, iwl_ax203_name),
+ 
+ 	/* QuZ */
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 20436a289d5cd..5d6dc1dd050d4 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -292,6 +292,7 @@ err_add_card:
+ 	if_usb_reset_device(cardp);
+ dealloc:
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ error:
+ 	return r;
+@@ -316,6 +317,7 @@ static void if_usb_disconnect(struct usb_interface *intf)
+ 
+ 	/* Unlink and free urb */
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ 	usb_set_intfdata(intf, NULL);
+ 	usb_put_dev(interface_to_usbdev(intf));
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index fe0a69e804d8c..75b5319d033f3 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -230,6 +230,7 @@ static int if_usb_probe(struct usb_interface *intf,
+ 
+ dealloc:
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ error:
+ lbtf_deb_leave(LBTF_DEB_MAIN);
+ 	return -ENOMEM;
+@@ -254,6 +255,7 @@ static void if_usb_disconnect(struct usb_interface *intf)
+ 
+ 	/* Unlink and free urb */
+ 	if_usb_free(cardp);
++	kfree(cardp);
+ 
+ 	usb_set_intfdata(intf, NULL);
+ 	usb_put_dev(interface_to_usbdev(intf));
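
Both libertas hunks plug the same leak: if_usb_free() releases what the card structure owns but not the structure itself, so every exit path must also kfree() it. A minimal sketch of that ownership split (all names are illustrative):

#include <linux/errno.h>
#include <linux/slab.h>

struct card {				/* illustrative */
	void *rx_urb;
	void *tx_buf;
};

static void card_free_members(struct card *c)
{
	kfree(c->rx_urb);		/* frees what the card owns... */
	kfree(c->tx_buf);
}

static int probe_example(void)
{
	struct card *c = kzalloc(sizeof(*c), GFP_KERNEL);

	if (!c)
		return -ENOMEM;

	/* ... setup fails ... */
	card_free_members(c);
	kfree(c);			/* ...but not the card itself */
	return -ENODEV;
}
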
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 6696bce561786..cf08a4af84d6d 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -657,14 +657,15 @@ int mwifiex_send_delba(struct mwifiex_private *priv, int tid, u8 *peer_mac,
+ 	uint16_t del_ba_param_set;
+ 
+ 	memset(&delba, 0, sizeof(delba));
+-	delba.del_ba_param_set = cpu_to_le16(tid << DELBA_TID_POS);
+ 
+-	del_ba_param_set = le16_to_cpu(delba.del_ba_param_set);
++	del_ba_param_set = tid << DELBA_TID_POS;
++
+ 	if (initiator)
+ 		del_ba_param_set |= IEEE80211_DELBA_PARAM_INITIATOR_MASK;
+ 	else
+ 		del_ba_param_set &= ~IEEE80211_DELBA_PARAM_INITIATOR_MASK;
+ 
++	delba.del_ba_param_set = cpu_to_le16(del_ba_param_set);
+ 	memcpy(&delba.peer_mac_addr, peer_mac, ETH_ALEN);
+ 
+ 	/* We don't wait for the response of this command */
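
The mwifiex delba fix is an endianness pattern worth noting: the old code converted to little-endian first and then modified a host-order copy that was never written back, dropping the initiator bit. The corrected shape, sketched with illustrative names:

#include <linux/ieee80211.h>

struct delba_cmd {			/* illustrative */
	__le16 param_set;		/* wire format: little-endian */
};

static void fill_delba(struct delba_cmd *cmd, int tid, bool initiator)
{
	u16 param = tid << 12;		/* stand-in for DELBA_TID_POS */

	if (initiator)
		param |= IEEE80211_DELBA_PARAM_INITIATOR_MASK;
	else
		param &= ~IEEE80211_DELBA_PARAM_INITIATOR_MASK;

	/* all bit manipulation in host order, one conversion at the end */
	cmd->param_set = cpu_to_le16(param);
}
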
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 0961f4a5e415c..97f0f39364d67 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -908,16 +908,20 @@ mwifiex_init_new_priv_params(struct mwifiex_private *priv,
+ 	switch (type) {
+ 	case NL80211_IFTYPE_STATION:
+ 	case NL80211_IFTYPE_ADHOC:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_STA;
++		priv->bss_role = MWIFIEX_BSS_ROLE_STA;
++		priv->bss_type = MWIFIEX_BSS_TYPE_STA;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_CLIENT:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_STA;
++		priv->bss_role = MWIFIEX_BSS_ROLE_STA;
++		priv->bss_type = MWIFIEX_BSS_TYPE_P2P;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_GO:
+-		priv->bss_role =  MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_role = MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_type = MWIFIEX_BSS_TYPE_P2P;
+ 		break;
+ 	case NL80211_IFTYPE_AP:
+ 		priv->bss_role = MWIFIEX_BSS_ROLE_UAP;
++		priv->bss_type = MWIFIEX_BSS_TYPE_UAP;
+ 		break;
+ 	default:
+ 		mwifiex_dbg(adapter, ERROR,
+@@ -1229,29 +1233,15 @@ mwifiex_cfg80211_change_virtual_intf(struct wiphy *wiphy,
+ 		break;
+ 	case NL80211_IFTYPE_P2P_CLIENT:
+ 	case NL80211_IFTYPE_P2P_GO:
++		if (mwifiex_cfg80211_deinit_p2p(priv))
++			return -EFAULT;
++
+ 		switch (type) {
+-		case NL80211_IFTYPE_STATION:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
+-			priv->adapter->curr_iface_comb.p2p_intf--;
+-			priv->adapter->curr_iface_comb.sta_intf++;
+-			dev->ieee80211_ptr->iftype = type;
+-			if (mwifiex_deinit_priv_params(priv))
+-				return -1;
+-			if (mwifiex_init_new_priv_params(priv, dev, type))
+-				return -1;
+-			if (mwifiex_sta_init_cmd(priv, false, false))
+-				return -1;
+-			break;
+ 		case NL80211_IFTYPE_ADHOC:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
++		case NL80211_IFTYPE_STATION:
+ 			return mwifiex_change_vif_to_sta_adhoc(dev, curr_iftype,
+ 							       type, params);
+-			break;
+ 		case NL80211_IFTYPE_AP:
+-			if (mwifiex_cfg80211_deinit_p2p(priv))
+-				return -EFAULT;
+ 			return mwifiex_change_vif_to_ap(dev, curr_iftype, type,
+ 							params);
+ 		case NL80211_IFTYPE_UNSPECIFIED:
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 46517515ba728..777c0bab65d57 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -17,6 +17,7 @@
+  * this warranty disclaimer.
+  */
+ 
++#include <linux/iopoll.h>
+ #include <linux/firmware.h>
+ 
+ #include "decl.h"
+@@ -636,11 +637,15 @@ static void mwifiex_delay_for_sleep_cookie(struct mwifiex_adapter *adapter,
+ 			    "max count reached while accessing sleep cookie\n");
+ }
+ 
++#define N_WAKEUP_TRIES_SHORT_INTERVAL 15
++#define N_WAKEUP_TRIES_LONG_INTERVAL 35
++
+ /* This function wakes up the card by reading fw_status register. */
+ static int mwifiex_pm_wakeup_card(struct mwifiex_adapter *adapter)
+ {
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
++	int retval;
+ 
+ 	mwifiex_dbg(adapter, EVENT,
+ 		    "event: Wakeup device...\n");
+@@ -648,11 +653,24 @@ static int mwifiex_pm_wakeup_card(struct mwifiex_adapter *adapter)
+ 	if (reg->sleep_cookie)
+ 		mwifiex_pcie_dev_wakeup_delay(adapter);
+ 
+-	/* Accessing fw_status register will wakeup device */
+-	if (mwifiex_write_reg(adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
+-		mwifiex_dbg(adapter, ERROR,
+-			    "Writing fw_status register failed\n");
+-		return -1;
++	/* The 88W8897 PCIe+USB firmware (latest version 15.68.19.p21) sometimes
++	 * appears to ignore or miss our wakeup request, so we continue trying
++	 * until we receive an interrupt from the card.
++	 */
++	if (read_poll_timeout(mwifiex_write_reg, retval,
++			      READ_ONCE(adapter->int_status) != 0,
++			      500, 500 * N_WAKEUP_TRIES_SHORT_INTERVAL,
++			      false,
++			      adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
++		if (read_poll_timeout(mwifiex_write_reg, retval,
++				      READ_ONCE(adapter->int_status) != 0,
++				      10000, 10000 * N_WAKEUP_TRIES_LONG_INTERVAL,
++				      false,
++				      adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
++			mwifiex_dbg(adapter, ERROR,
++				    "Firmware didn't wake up\n");
++			return -EIO;
++		}
+ 	}
+ 
+ 	if (reg->sleep_cookie) {
+@@ -1479,6 +1497,14 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
+ 			ret = -1;
+ 			goto done_unmap;
+ 		}
++
++		/* The firmware (latest version 15.68.19.p21) of the 88W8897 PCIe+USB card
++		 * seems to crash randomly after setting the TX ring write pointer when
++		 * ASPM powersaving is enabled. A workaround seems to be keeping the bus
++		 * busy by reading a random register afterwards.
++		 */
++		mwifiex_read_reg(adapter, PCI_VENDOR_ID, &rx_val);
++
+ 		if ((mwifiex_pcie_txbd_not_full(card)) &&
+ 		    tx_param->next_pkt_len) {
+ 			/* have more packets and TxBD still can hold more */
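
The wakeup rework above leans on read_poll_timeout() from <linux/iopoll.h>, which re-runs an operation until a condition holds or a time budget expires. A compact sketch of that idiom under assumed names (wake_ctx, poke_fw, and the register constants are ours, not mwifiex's):

#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/types.h>

struct wake_ctx {
	u32 int_status;			/* set non-zero by the IRQ handler */
	void __iomem *regs;
};

/* stand-in for the register write that pokes the firmware */
static int poke_fw(struct wake_ctx *ctx, u32 reg, u32 val)
{
	writel(val, ctx->regs + reg);
	return 0;
}

static int wake_card(struct wake_ctx *ctx)
{
	int retval;

	/* re-issue poke_fw() every 500 us, up to 15 times, until the
	 * IRQ handler flips ctx->int_status; read_poll_timeout()
	 * returns 0 on success and -ETIMEDOUT otherwise. The hunk
	 * chains a second, slower loop before giving up.
	 */
	if (read_poll_timeout(poke_fw, retval,
			      READ_ONCE(ctx->int_status) != 0,
			      500, 500 * 15, false,
			      ctx, 0x0c48, 0xfedcba00))
		return -EIO;

	return 0;
}
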
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 426e39d4ccf0f..9736aa0ab7fd4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -505,6 +505,22 @@ static int mwifiex_usb_probe(struct usb_interface *intf,
+ 		}
+ 	}
+ 
++	switch (card->usb_boot_state) {
++	case USB8XXX_FW_DNLD:
++		/* Reject broken descriptors. */
++		if (!card->rx_cmd_ep || !card->tx_cmd_ep)
++			return -ENODEV;
++		if (card->bulk_out_maxpktsize == 0)
++			return -ENODEV;
++		break;
++	case USB8XXX_FW_READY:
++		/* Assume the driver can handle missing endpoints for now. */
++		break;
++	default:
++		WARN_ON(1);
++		return -ENODEV;
++	}
++
+ 	usb_set_intfdata(intf, card);
+ 
+ 	ret = mwifiex_add_card(card, &card->fw_done, &usb_ops,
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 3bf6571f41490..529e325498cdb 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -5800,8 +5800,8 @@ static void mwl8k_fw_state_machine(const struct firmware *fw, void *context)
+ fail:
+ 	priv->fw_state = FW_STATE_ERROR;
+ 	complete(&priv->firmware_loading_complete);
+-	device_release_driver(&priv->pdev->dev);
+ 	mwl8k_release_firmware(priv);
++	device_release_driver(&priv->pdev->dev);
+ }
+ 
+ #define MAX_RESTART_ATTEMPTS 1
+diff --git a/drivers/net/wireless/mediatek/mt76/debugfs.c b/drivers/net/wireless/mediatek/mt76/debugfs.c
+index fa48cc3a7a8f7..ad97308c78534 100644
+--- a/drivers/net/wireless/mediatek/mt76/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/debugfs.c
+@@ -116,8 +116,11 @@ static int mt76_read_rate_txpower(struct seq_file *s, void *data)
+ 	return 0;
+ }
+ 
+-struct dentry *mt76_register_debugfs(struct mt76_dev *dev)
++struct dentry *
++mt76_register_debugfs_fops(struct mt76_dev *dev,
++			   const struct file_operations *ops)
+ {
++	const struct file_operations *fops = ops ? ops : &fops_regval;
+ 	struct dentry *dir;
+ 
+ 	dir = debugfs_create_dir("mt76", dev->hw->wiphy->debugfsdir);
+@@ -126,8 +129,7 @@ struct dentry *mt76_register_debugfs(struct mt76_dev *dev)
+ 
+ 	debugfs_create_u8("led_pin", 0600, dir, &dev->led_pin);
+ 	debugfs_create_u32("regidx", 0600, dir, &dev->debugfs_reg);
+-	debugfs_create_file_unsafe("regval", 0600, dir, dev,
+-				   &fops_regval);
++	debugfs_create_file_unsafe("regval", 0600, dir, dev, fops);
+ 	debugfs_create_file_unsafe("napi_threaded", 0600, dir, dev,
+ 				   &fops_napi_threaded);
+ 	debugfs_create_blob("eeprom", 0400, dir, &dev->eeprom);
+@@ -140,4 +142,4 @@ struct dentry *mt76_register_debugfs(struct mt76_dev *dev)
+ 
+ 	return dir;
+ }
+-EXPORT_SYMBOL_GPL(mt76_register_debugfs);
++EXPORT_SYMBOL_GPL(mt76_register_debugfs_fops);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 25c5ceef52577..4d01fd85283df 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -869,7 +869,13 @@ struct mt76_phy *mt76_alloc_phy(struct mt76_dev *dev, unsigned int size,
+ int mt76_register_phy(struct mt76_phy *phy, bool vht,
+ 		      struct ieee80211_rate *rates, int n_rates);
+ 
+-struct dentry *mt76_register_debugfs(struct mt76_dev *dev);
++struct dentry *mt76_register_debugfs_fops(struct mt76_dev *dev,
++					  const struct file_operations *ops);
++static inline struct dentry *mt76_register_debugfs(struct mt76_dev *dev)
++{
++	return mt76_register_debugfs_fops(dev, NULL);
++}
++
+ int mt76_queues_read(struct seq_file *s, void *data);
+ void mt76_seq_puts_array(struct seq_file *file, const char *str,
+ 			 s8 *val, int len);
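
The two mt76 hunks together show a common API-extension pattern: export a wider *_fops() entry point, default the new argument inside it, and keep the old name as a static inline so existing callers compile unchanged. In generic form (the widget names are illustrative):

#include <linux/debugfs.h>
#include <linux/fs.h>

struct widget;					/* opaque, illustrative */
extern const struct file_operations widget_default_fops;

int widget_register_ops(struct widget *w, const struct file_operations *ops)
{
	const struct file_operations *fops = ops ? ops : &widget_default_fops;

	debugfs_create_file("regval", 0600, NULL, w, fops);
	return 0;
}

static inline int widget_register(struct widget *w)
{
	/* old signature kept; existing callers compile unchanged */
	return widget_register_ops(w, NULL);
}
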
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
+index cb4659771fd97..bda22ca0bd714 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
+@@ -2,6 +2,33 @@
+ 
+ #include "mt7615.h"
+ 
++static int
++mt7615_reg_set(void *data, u64 val)
++{
++	struct mt7615_dev *dev = data;
++
++	mt7615_mutex_acquire(dev);
++	mt76_wr(dev, dev->mt76.debugfs_reg, val);
++	mt7615_mutex_release(dev);
++
++	return 0;
++}
++
++static int
++mt7615_reg_get(void *data, u64 *val)
++{
++	struct mt7615_dev *dev = data;
++
++	mt7615_mutex_acquire(dev);
++	*val = mt76_rr(dev, dev->mt76.debugfs_reg);
++	mt7615_mutex_release(dev);
++
++	return 0;
++}
++
++DEFINE_DEBUGFS_ATTRIBUTE(fops_regval, mt7615_reg_get, mt7615_reg_set,
++			 "0x%08llx\n");
++
+ static int
+ mt7615_radar_pattern_set(void *data, u64 val)
+ {
+@@ -506,7 +533,7 @@ int mt7615_init_debugfs(struct mt7615_dev *dev)
+ {
+ 	struct dentry *dir;
+ 
+-	dir = mt76_register_debugfs(&dev->mt76);
++	dir = mt76_register_debugfs_fops(&dev->mt76, &fops_regval);
+ 	if (!dir)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+index 2f1ac644e018e..47f23ac905a3c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+@@ -49,12 +49,14 @@ int mt7615_thermal_init(struct mt7615_dev *dev)
+ {
+ 	struct wiphy *wiphy = mt76_hw(dev)->wiphy;
+ 	struct device *hwmon;
++	const char *name;
+ 
+ 	if (!IS_REACHABLE(CONFIG_HWMON))
+ 		return 0;
+ 
+-	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev,
+-						       wiphy_name(wiphy), dev,
++	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7615_%s",
++			      wiphy_name(wiphy));
++	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, dev,
+ 						       mt7615_hwmon_groups);
+ 	if (IS_ERR(hwmon))
+ 		return PTR_ERR(hwmon);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index ff3f85e4087c9..5455231f51881 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -755,12 +755,15 @@ int mt7615_mac_write_txwi(struct mt7615_dev *dev, __le32 *txwi,
+ 	if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+ 		txwi[3] |= cpu_to_le32(MT_TXD3_NO_ACK);
+ 
+-	txwi[7] = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+-		  FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype) |
+-		  FIELD_PREP(MT_TXD7_SPE_IDX, 0x18);
+-	if (!is_mmio)
+-		txwi[8] = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
+-			  FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++	val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++	      FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype) |
++	      FIELD_PREP(MT_TXD7_SPE_IDX, 0x18);
++	txwi[7] = cpu_to_le32(val);
++	if (!is_mmio) {
++		val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
++		      FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++		txwi[8] = cpu_to_le32(val);
++	}
+ 
+ 	return 0;
+ }
+@@ -1494,32 +1497,41 @@ out:
+ }
+ 
+ static void
+-mt7615_mac_tx_free_token(struct mt7615_dev *dev, u16 token)
++mt7615_txwi_free(struct mt7615_dev *dev, struct mt76_txwi_cache *txwi)
+ {
+ 	struct mt76_dev *mdev = &dev->mt76;
+-	struct mt76_txwi_cache *txwi;
+ 	__le32 *txwi_data;
+ 	u32 val;
+ 	u8 wcid;
+ 
+-	trace_mac_tx_free(dev, token);
+-	txwi = mt76_token_put(mdev, token);
+-	if (!txwi)
+-		return;
++	mt7615_txp_skb_unmap(mdev, txwi);
++	if (!txwi->skb)
++		goto out;
+ 
+ 	txwi_data = (__le32 *)mt76_get_txwi_ptr(mdev, txwi);
+ 	val = le32_to_cpu(txwi_data[1]);
+ 	wcid = FIELD_GET(MT_TXD1_WLAN_IDX, val);
++	mt76_tx_complete_skb(mdev, wcid, txwi->skb);
+ 
+-	mt7615_txp_skb_unmap(mdev, txwi);
+-	if (txwi->skb) {
+-		mt76_tx_complete_skb(mdev, wcid, txwi->skb);
+-		txwi->skb = NULL;
+-	}
+-
++out:
++	txwi->skb = NULL;
+ 	mt76_put_txwi(mdev, txwi);
+ }
+ 
++static void
++mt7615_mac_tx_free_token(struct mt7615_dev *dev, u16 token)
++{
++	struct mt76_dev *mdev = &dev->mt76;
++	struct mt76_txwi_cache *txwi;
++
++	trace_mac_tx_free(dev, token);
++	txwi = mt76_token_put(mdev, token);
++	if (!txwi)
++		return;
++
++	mt7615_txwi_free(dev, txwi);
++}
++
+ static void mt7615_mac_tx_free(struct mt7615_dev *dev, struct sk_buff *skb)
+ {
+ 	struct mt7615_tx_free *free = (struct mt7615_tx_free *)skb->data;
+@@ -2026,16 +2038,8 @@ void mt7615_tx_token_put(struct mt7615_dev *dev)
+ 	int id;
+ 
+ 	spin_lock_bh(&dev->mt76.token_lock);
+-	idr_for_each_entry(&dev->mt76.token, txwi, id) {
+-		mt7615_txp_skb_unmap(&dev->mt76, txwi);
+-		if (txwi->skb) {
+-			struct ieee80211_hw *hw;
+-
+-			hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
+-			ieee80211_free_txskb(hw, txwi->skb);
+-		}
+-		mt76_put_txwi(&dev->mt76, txwi);
+-	}
++	idr_for_each_entry(&dev->mt76.token, txwi, id)
++		mt7615_txwi_free(dev, txwi);
+ 	spin_unlock_bh(&dev->mt76.token_lock);
+ 	idr_destroy(&dev->mt76.token);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index dada43d6d879e..51260a669d166 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -135,8 +135,6 @@ static int get_omac_idx(enum nl80211_iftype type, u64 mask)
+ 	int i;
+ 
+ 	switch (type) {
+-	case NL80211_IFTYPE_MESH_POINT:
+-	case NL80211_IFTYPE_ADHOC:
+ 	case NL80211_IFTYPE_STATION:
+ 		/* prefer hw bssid slot 1-3 */
+ 		i = get_free_idx(mask, HW_BSSID_1, HW_BSSID_3);
+@@ -160,6 +158,8 @@ static int get_omac_idx(enum nl80211_iftype type, u64 mask)
+ 			return HW_BSSID_0;
+ 
+ 		break;
++	case NL80211_IFTYPE_ADHOC:
++	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_MONITOR:
+ 	case NL80211_IFTYPE_AP:
+ 		/* ap uses hw bssid 0 and ext bssid */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index f8a09692d3e4c..4fed3afad67cc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -808,7 +808,8 @@ mt7615_mcu_ctrl_pm_state(struct mt7615_dev *dev, int band, int state)
+ 
+ static int
+ mt7615_mcu_bss_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+-			 struct ieee80211_sta *sta, bool enable)
++			 struct ieee80211_sta *sta, struct mt7615_phy *phy,
++			 bool enable)
+ {
+ 	struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ 	u32 type = vif->p2p ? NETWORK_P2P : NETWORK_INFRA;
+@@ -821,6 +822,7 @@ mt7615_mcu_bss_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_MONITOR:
+ 		break;
+ 	case NL80211_IFTYPE_STATION:
+ 		/* TODO: enable BSS_INFO_UAPSD & BSS_INFO_PM */
+@@ -840,14 +842,19 @@ mt7615_mcu_bss_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	bss = (struct bss_info_basic *)tlv;
+-	memcpy(bss->bssid, vif->bss_conf.bssid, ETH_ALEN);
+-	bss->bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int);
+ 	bss->network_type = cpu_to_le32(type);
+-	bss->dtim_period = vif->bss_conf.dtim_period;
+ 	bss->bmc_tx_wlan_idx = wlan_idx;
+ 	bss->wmm_idx = mvif->mt76.wmm_idx;
+ 	bss->active = enable;
+ 
++	if (vif->type != NL80211_IFTYPE_MONITOR) {
++		memcpy(bss->bssid, vif->bss_conf.bssid, ETH_ALEN);
++		bss->bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int);
++		bss->dtim_period = vif->bss_conf.dtim_period;
++	} else {
++		memcpy(bss->bssid, phy->mt76->macaddr, ETH_ALEN);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -863,6 +870,7 @@ mt7615_mcu_bss_omac_tlv(struct sk_buff *skb, struct ieee80211_vif *vif)
+ 	tlv = mt76_connac_mcu_add_tlv(skb, BSS_INFO_OMAC, sizeof(*omac));
+ 
+ 	switch (vif->type) {
++	case NL80211_IFTYPE_MONITOR:
+ 	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_AP:
+ 		if (vif->p2p)
+@@ -929,7 +937,7 @@ mt7615_mcu_add_bss(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 	if (enable)
+ 		mt7615_mcu_bss_omac_tlv(skb, vif);
+ 
+-	mt7615_mcu_bss_basic_tlv(skb, vif, sta, enable);
++	mt7615_mcu_bss_basic_tlv(skb, vif, sta, phy, enable);
+ 
+ 	if (enable && mvif->mt76.omac_idx >= EXT_BSSID_START &&
+ 	    mvif->mt76.omac_idx < REPEATER_BSSID_START)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 5c3a81e5f559d..d25b50e769328 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -689,7 +689,7 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ 		if (ht_cap->ht_supported)
+ 			mode |= PHY_TYPE_BIT_HT;
+ 
+-		if (he_cap->has_he)
++		if (he_cap && he_cap->has_he)
+ 			mode |= PHY_TYPE_BIT_HE;
+ 	} else if (band == NL80211_BAND_5GHZ) {
+ 		mode |= PHY_TYPE_BIT_OFDM;
+@@ -700,7 +700,7 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ 		if (vht_cap->vht_supported)
+ 			mode |= PHY_TYPE_BIT_VHT;
+ 
+-		if (he_cap->has_he)
++		if (he_cap && he_cap->has_he)
+ 			mode |= PHY_TYPE_BIT_HE;
+ 	}
+ 
+@@ -719,6 +719,7 @@ void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
+ 	struct sta_rec_state *state;
+ 	struct sta_rec_phy *phy;
+ 	struct tlv *tlv;
++	u16 supp_rates;
+ 
+ 	/* starec ht */
+ 	if (sta->ht_cap.ht_supported) {
+@@ -767,7 +768,15 @@ void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
+ 
+ 	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA, sizeof(*ra_info));
+ 	ra_info = (struct sta_rec_ra_info *)tlv;
+-	ra_info->legacy = cpu_to_le16((u16)sta->supp_rates[band]);
++
++	supp_rates = sta->supp_rates[band];
++	if (band == NL80211_BAND_2GHZ)
++		supp_rates = FIELD_PREP(RA_LEGACY_OFDM, supp_rates >> 4) |
++			     FIELD_PREP(RA_LEGACY_CCK, supp_rates & 0xf);
++	else
++		supp_rates = FIELD_PREP(RA_LEGACY_OFDM, supp_rates);
++
++	ra_info->legacy = cpu_to_le16(supp_rates);
+ 
+ 	if (sta->ht_cap.ht_supported)
+ 		memcpy(ra_info->rx_mcs_bitmask, sta->ht_cap.mcs.rx_mask,
+@@ -1929,19 +1938,22 @@ mt76_connac_mcu_key_iter(struct ieee80211_hw *hw,
+ 	    key->cipher != WLAN_CIPHER_SUITE_TKIP)
+ 		return;
+ 
+-	if (key->cipher == WLAN_CIPHER_SUITE_TKIP) {
+-		gtk_tlv->proto = cpu_to_le32(NL80211_WPA_VERSION_1);
++	if (key->cipher == WLAN_CIPHER_SUITE_TKIP)
+ 		cipher = BIT(3);
+-	} else {
+-		gtk_tlv->proto = cpu_to_le32(NL80211_WPA_VERSION_2);
++	else
+ 		cipher = BIT(4);
+-	}
+ 
+ 	/* we are assuming here to have a single pairwise key */
+ 	if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) {
++		if (key->cipher == WLAN_CIPHER_SUITE_TKIP)
++			gtk_tlv->proto = cpu_to_le32(NL80211_WPA_VERSION_1);
++		else
++			gtk_tlv->proto = cpu_to_le32(NL80211_WPA_VERSION_2);
++
+ 		gtk_tlv->pairwise_cipher = cpu_to_le32(cipher);
+-		gtk_tlv->group_cipher = cpu_to_le32(cipher);
+ 		gtk_tlv->keyid = key->keyidx;
++	} else {
++		gtk_tlv->group_cipher = cpu_to_le32(cipher);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 1c73beb226771..77d4435e4581e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -124,6 +124,8 @@ struct sta_rec_state {
+ 	u8 rsv[1];
+ } __packed;
+ 
++#define RA_LEGACY_OFDM GENMASK(13, 6)
++#define RA_LEGACY_CCK  GENMASK(3, 0)
+ #define HT_MCS_MASK_NUM 10
+ struct sta_rec_ra_info {
+ 	__le16 tag;
+@@ -844,14 +846,14 @@ struct mt76_connac_gtk_rekey_tlv {
+ 			* 2: rekey update
+ 			*/
+ 	u8 keyid;
+-	u8 pad[2];
++	u8 option; /* 1: rekey data update without enabling offload */
++	u8 pad[1];
+ 	__le32 proto; /* WPA-RSN-WAPI-OPSN */
+ 	__le32 pairwise_cipher;
+ 	__le32 group_cipher;
+ 	__le32 key_mgmt; /* NONE-PSK-IEEE802.1X */
+ 	__le32 mgmt_group_cipher;
+-	u8 option; /* 1: rekey data update without enabling offload */
+-	u8 reserverd[3];
++	u8 reserverd[4];
+ } __packed;
+ 
+ #define MT76_CONNAC_WOW_MASK_MAX_LEN			16
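
The mt76_connac_mcu.h hunk is a firmware ABI fix: the option byte belongs immediately after keyid, with the pad resized so the overall size is unchanged. Layout bugs like this can be pinned down at compile time; a hedged sketch with an illustrative struct:

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct rekey_tlv {				/* illustrative layout */
	u8 keyid;
	u8 option;				/* must follow keyid */
	u8 pad[1];
	__le32 proto;
} __packed;

/* fail the build if a field drifts from the offset the firmware expects */
static_assert(offsetof(struct rekey_tlv, option) == 1);
static_assert(sizeof(struct rekey_tlv) == 7);
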
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index c32e6dc687739..07b21b2085823 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -176,7 +176,7 @@ void mt76x02_mac_wcid_set_drop(struct mt76x02_dev *dev, u8 idx, bool drop)
+ 		mt76_wr(dev, MT_WCID_DROP(idx), (val & ~bit) | (bit * drop));
+ }
+ 
+-static __le16
++static u16
+ mt76x02_mac_tx_rate_val(struct mt76x02_dev *dev,
+ 			const struct ieee80211_tx_rate *rate, u8 *nss_val)
+ {
+@@ -222,14 +222,14 @@ mt76x02_mac_tx_rate_val(struct mt76x02_dev *dev,
+ 		rateval |= MT_RXWI_RATE_SGI;
+ 
+ 	*nss_val = nss;
+-	return cpu_to_le16(rateval);
++	return rateval;
+ }
+ 
+ void mt76x02_mac_wcid_set_rate(struct mt76x02_dev *dev, struct mt76_wcid *wcid,
+ 			       const struct ieee80211_tx_rate *rate)
+ {
+ 	s8 max_txpwr_adj = mt76x02_tx_get_max_txpwr_adj(dev, rate);
+-	__le16 rateval;
++	u16 rateval;
+ 	u32 tx_info;
+ 	s8 nss;
+ 
+@@ -342,7 +342,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 	struct ieee80211_key_conf *key = info->control.hw_key;
+ 	u32 wcid_tx_info;
+ 	u16 rate_ht_mask = FIELD_PREP(MT_RXWI_RATE_PHY, BIT(1) | BIT(2));
+-	u16 txwi_flags = 0;
++	u16 txwi_flags = 0, rateval;
+ 	u8 nss;
+ 	s8 txpwr_adj, max_txpwr_adj;
+ 	u8 ccmp_pn[8], nstreams = dev->mphy.chainmask & 0xf;
+@@ -380,14 +380,15 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 
+ 	if (wcid && (rate->idx < 0 || !rate->count)) {
+ 		wcid_tx_info = wcid->tx_info;
+-		txwi->rate = FIELD_GET(MT_WCID_TX_INFO_RATE, wcid_tx_info);
++		rateval = FIELD_GET(MT_WCID_TX_INFO_RATE, wcid_tx_info);
+ 		max_txpwr_adj = FIELD_GET(MT_WCID_TX_INFO_TXPWR_ADJ,
+ 					  wcid_tx_info);
+ 		nss = FIELD_GET(MT_WCID_TX_INFO_NSS, wcid_tx_info);
+ 	} else {
+-		txwi->rate = mt76x02_mac_tx_rate_val(dev, rate, &nss);
++		rateval = mt76x02_mac_tx_rate_val(dev, rate, &nss);
+ 		max_txpwr_adj = mt76x02_tx_get_max_txpwr_adj(dev, rate);
+ 	}
++	txwi->rate = cpu_to_le16(rateval);
+ 
+ 	txpwr_adj = mt76x02_tx_get_txpwr_adj(dev, dev->txpower_conf,
+ 					     max_txpwr_adj);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 4798d6344305d..b171027e0cfa8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -130,9 +130,12 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ 	struct wiphy *wiphy = phy->mt76->hw->wiphy;
+ 	struct thermal_cooling_device *cdev;
+ 	struct device *hwmon;
++	const char *name;
+ 
+-	cdev = thermal_cooling_device_register(wiphy_name(wiphy), phy,
+-					       &mt7915_thermal_ops);
++	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7915_%s",
++			      wiphy_name(wiphy));
++
++	cdev = thermal_cooling_device_register(name, phy, &mt7915_thermal_ops);
+ 	if (!IS_ERR(cdev)) {
+ 		if (sysfs_create_link(&wiphy->dev.kobj, &cdev->device.kobj,
+ 				      "cooling_device") < 0)
+@@ -144,8 +147,7 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ 	if (!IS_REACHABLE(CONFIG_HWMON))
+ 		return 0;
+ 
+-	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev,
+-						       wiphy_name(wiphy), phy,
++	hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
+ 						       mt7915_hwmon_groups);
+ 	if (IS_ERR(hwmon))
+ 		return PTR_ERR(hwmon);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 2462704094b0a..bbc996f86b5c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1232,7 +1232,7 @@ mt7915_mac_add_txs_skb(struct mt7915_dev *dev, struct mt76_wcid *wcid, int pid,
+ 		goto out;
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+-	if (!(txs_data[0] & le32_to_cpu(MT_TXS0_ACK_ERROR_MASK)))
++	if (!(txs_data[0] & cpu_to_le32(MT_TXS0_ACK_ERROR_MASK)))
+ 		info->flags |= IEEE80211_TX_STAT_ACK;
+ 
+ 	info->status.ampdu_len = 1;
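
The one-line mt7915 fix above corrects which side of a __le32 comparison gets converted: the constant mask is brought to the data's byte order with cpu_to_le32(), which is folded at compile time and is correct on big-endian hosts, instead of mis-applying le32_to_cpu() to it. Sketched with an illustrative mask:

#include <asm/byteorder.h>
#include <linux/bits.h>
#include <linux/types.h>

#define ACK_ERROR_MASK	GENMASK(2, 0)		/* illustrative */

static bool frame_acked(const __le32 *txs_data)
{
	/* convert the constant, not the data: endian-safe and free */
	return !(txs_data[0] & cpu_to_le32(ACK_ERROR_MASK));
}
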
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.h b/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
+index eb1885f4bd8eb..fee7741b5d421 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.h
+@@ -272,7 +272,8 @@ enum tx_mcu_port_q_idx {
+ #define MT_TX_RATE_MODE			GENMASK(9, 6)
+ #define MT_TX_RATE_SU_EXT_TONE		BIT(5)
+ #define MT_TX_RATE_DCM			BIT(4)
+-#define MT_TX_RATE_IDX			GENMASK(3, 0)
++/* VHT/HE only use bits 0-3 */
++#define MT_TX_RATE_IDX			GENMASK(5, 0)
+ 
+ #define MT_TXP_MAX_BUF_NUM		6
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 43960770a9af2..ba36d3caec8e1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -721,7 +721,7 @@ mt7915_mcu_alloc_sta_req(struct mt7915_dev *dev, struct mt7915_vif *mvif,
+ 		.bss_idx = mvif->idx,
+ 		.wlan_idx_lo = msta ? to_wcid_lo(msta->wcid.idx) : 0,
+ 		.wlan_idx_hi = msta ? to_wcid_hi(msta->wcid.idx) : 0,
+-		.muar_idx = msta ? mvif->omac_idx : 0,
++		.muar_idx = msta && msta->wcid.sta ? mvif->omac_idx : 0xe,
+ 		.is_tlv_append = 1,
+ 	};
+ 	struct sk_buff *skb;
+@@ -757,7 +757,7 @@ mt7915_mcu_alloc_wtbl_req(struct mt7915_dev *dev, struct mt7915_sta *msta,
+ 	}
+ 
+ 	if (sta_hdr)
+-		sta_hdr->len = cpu_to_le16(sizeof(hdr));
++		le16_add_cpu(&sta_hdr->len, sizeof(hdr));
+ 
+ 	return skb_put_data(nskb, &hdr, sizeof(hdr));
+ }
+@@ -925,7 +925,7 @@ static void mt7915_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ 
+ 	elem = ieee80211_bss_get_elem(bss, WLAN_EID_EXT_CAPABILITY);
+ 
+-	if (!elem || elem->datalen < 10 ||
++	if (!elem || elem->datalen <= 10 ||
+ 	    !(elem->data[10] &
+ 	      WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT))
+ 		data->tolerated = false;
+@@ -1201,7 +1201,7 @@ mt7915_mcu_sta_key_tlv(struct mt7915_sta *msta, struct sk_buff *skb,
+ 		u8 cipher;
+ 
+ 		cipher = mt7915_mcu_get_cipher(key->cipher);
+-		if (cipher == MT_CIPHER_NONE)
++		if (cipher == MCU_CIPHER_NONE)
+ 			return -EOPNOTSUPP;
+ 
+ 		sec_key = &sec->key[0];
+@@ -2790,7 +2790,7 @@ out:
+ 	default:
+ 		ret = -EAGAIN;
+ 		dev_err(dev->mt76.dev, "Failed to release patch semaphore\n");
+-		goto out;
++		break;
+ 	}
+ 	release_firmware(fw);
+ 
+@@ -3391,20 +3391,20 @@ int mt7915_mcu_set_chan_info(struct mt7915_phy *phy, int cmd)
+ 
+ static int mt7915_mcu_set_eeprom_flash(struct mt7915_dev *dev)
+ {
+-#define TOTAL_PAGE_MASK		GENMASK(7, 5)
++#define MAX_PAGE_IDX_MASK	GENMASK(7, 5)
+ #define PAGE_IDX_MASK		GENMASK(4, 2)
+ #define PER_PAGE_SIZE		0x400
+ 	struct mt7915_mcu_eeprom req = { .buffer_mode = EE_MODE_BUFFER };
+-	u8 total = MT7915_EEPROM_SIZE / PER_PAGE_SIZE;
++	u8 total = DIV_ROUND_UP(MT7915_EEPROM_SIZE, PER_PAGE_SIZE);
+ 	u8 *eep = (u8 *)dev->mt76.eeprom.data;
+ 	int eep_len;
+ 	int i;
+ 
+-	for (i = 0; i <= total; i++, eep += eep_len) {
++	for (i = 0; i < total; i++, eep += eep_len) {
+ 		struct sk_buff *skb;
+ 		int ret;
+ 
+-		if (i == total)
++		if (i == total - 1 && !!(MT7915_EEPROM_SIZE % PER_PAGE_SIZE))
+ 			eep_len = MT7915_EEPROM_SIZE % PER_PAGE_SIZE;
+ 		else
+ 			eep_len = PER_PAGE_SIZE;
+@@ -3414,7 +3414,7 @@ static int mt7915_mcu_set_eeprom_flash(struct mt7915_dev *dev)
+ 		if (!skb)
+ 			return -ENOMEM;
+ 
+-		req.format = FIELD_PREP(TOTAL_PAGE_MASK, total) |
++		req.format = FIELD_PREP(MAX_PAGE_IDX_MASK, total - 1) |
+ 			     FIELD_PREP(PAGE_IDX_MASK, i) | EE_FORMAT_WHOLE;
+ 		req.len = cpu_to_le16(eep_len);
+ 
+@@ -3481,7 +3481,7 @@ static int mt7915_mcu_set_pre_cal(struct mt7915_dev *dev, u8 idx,
+ 		u8 idx;
+ 		u8 rsv[4];
+ 		__le32 len;
+-	} req;
++	} req = {};
+ 	struct sk_buff *skb;
+ 
+ 	skb = mt76_mcu_msg_alloc(&dev->mt76, NULL, sizeof(req) + len);
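
The eeprom paging fix above replaces a truncating division with DIV_ROUND_UP() and an explicit remainder for the final page. The arithmetic in isolation (the sizes are illustrative):

#include <linux/kernel.h>

#define BLOB_SIZE	3584			/* illustrative */
#define PAGE_LEN	1024

static void send_pages(const u8 *blob)
{
	u8 total = DIV_ROUND_UP(BLOB_SIZE, PAGE_LEN);	/* 4, not 3 */
	int i, len;

	for (i = 0; i < total; i++, blob += len) {
		if (i == total - 1 && (BLOB_SIZE % PAGE_LEN))
			len = BLOB_SIZE % PAGE_LEN;	/* 512-byte tail */
		else
			len = PAGE_LEN;
		/* ... queue len bytes starting at blob ... */
	}
}
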
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 77468bdae460b..30f3b3085c786 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -4,6 +4,32 @@
+ #include "mt7921.h"
+ #include "eeprom.h"
+ 
++static int
++mt7921_reg_set(void *data, u64 val)
++{
++	struct mt7921_dev *dev = data;
++
++	mt7921_mutex_acquire(dev);
++	mt76_wr(dev, dev->mt76.debugfs_reg, val);
++	mt7921_mutex_release(dev);
++
++	return 0;
++}
++
++static int
++mt7921_reg_get(void *data, u64 *val)
++{
++	struct mt7921_dev *dev = data;
++
++	mt7921_mutex_acquire(dev);
++	*val = mt76_rr(dev, dev->mt76.debugfs_reg);
++	mt7921_mutex_release(dev);
++
++	return 0;
++}
++
++DEFINE_DEBUGFS_ATTRIBUTE(fops_regval, mt7921_reg_get, mt7921_reg_set,
++			 "0x%08llx\n");
+ static int
+ mt7921_fw_debug_set(void *data, u64 val)
+ {
+@@ -69,6 +95,8 @@ mt7921_tx_stats_show(struct seq_file *file, void *data)
+ 	struct mt7921_dev *dev = file->private;
+ 	int stat[8], i, n;
+ 
++	mt7921_mutex_acquire(dev);
++
+ 	mt7921_ampdu_stat_read_phy(&dev->phy, file);
+ 
+ 	/* Tx amsdu info */
+@@ -78,6 +106,8 @@ mt7921_tx_stats_show(struct seq_file *file, void *data)
+ 		n += stat[i];
+ 	}
+ 
++	mt7921_mutex_release(dev);
++
+ 	for (i = 0; i < ARRAY_SIZE(stat); i++) {
+ 		seq_printf(file, "AMSDU pack count of %d MSDU in TXD: 0x%x ",
+ 			   i + 1, stat[i]);
+@@ -98,6 +128,8 @@ mt7921_queues_acq(struct seq_file *s, void *data)
+ 	struct mt7921_dev *dev = dev_get_drvdata(s->private);
+ 	int i;
+ 
++	mt7921_mutex_acquire(dev);
++
+ 	for (i = 0; i < 16; i++) {
+ 		int j, acs = i / 4, index = i % 4;
+ 		u32 ctrl, val, qlen = 0;
+@@ -117,6 +149,8 @@ mt7921_queues_acq(struct seq_file *s, void *data)
+ 		seq_printf(s, "AC%d%d: queued=%d\n", acs, index, qlen);
+ 	}
+ 
++	mt7921_mutex_release(dev);
++
+ 	return 0;
+ }
+ 
+@@ -373,7 +407,7 @@ int mt7921_init_debugfs(struct mt7921_dev *dev)
+ {
+ 	struct dentry *dir;
+ 
+-	dir = mt76_register_debugfs(&dev->mt76);
++	dir = mt76_register_debugfs_fops(&dev->mt76, &fops_regval);
+ 	if (!dir)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index a9ce10b988273..78a00028137bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -106,6 +106,10 @@ mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
+ 	mt76_set(dev, MT_WF_RMAC_MIB_TIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
+ 	mt76_set(dev, MT_WF_RMAC_MIB_AIRTIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
+ 
++	/* enable MIB tx-rx time reporting */
++	mt76_set(dev, MT_MIB_SCR1(band), MT_MIB_TXDUR_EN);
++	mt76_set(dev, MT_MIB_SCR1(band), MT_MIB_RXDUR_EN);
++
+ 	mt76_rmw_field(dev, MT_DMA_DCR0(band), MT_DMA_DCR0_MAX_RX_LEN, 1536);
+ 	/* disable rx rate report by default due to hw issues */
+ 	mt76_clear(dev, MT_DMA_DCR0(band), MT_DMA_DCR0_RXD_G5_EN);
+@@ -247,8 +251,17 @@ int mt7921_register_device(struct mt7921_dev *dev)
+ 
+ void mt7921_unregister_device(struct mt7921_dev *dev)
+ {
++	int i;
++	struct mt76_connac_pm *pm = &dev->pm;
++
+ 	mt76_unregister_device(&dev->mt76);
++	mt76_for_each_q_rx(&dev->mt76, i)
++		napi_disable(&dev->mt76.napi[i]);
++	cancel_delayed_work_sync(&pm->ps_work);
++	cancel_work_sync(&pm->wake_work);
++
+ 	mt7921_tx_token_put(dev);
++	mt7921_mcu_drv_pmctrl(dev);
+ 	mt7921_dma_cleanup(dev);
+ 	mt7921_mcu_exit(dev);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 7fe2e3a50428f..8a16f3f4d5253 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -180,12 +180,56 @@ mt7921_mac_decode_he_radiotap_ru(struct mt76_rx_status *status,
+ 				      IEEE80211_RADIOTAP_HE_DATA2_RU_OFFSET);
+ }
+ 
++static void
++mt7921_mac_decode_he_mu_radiotap(struct sk_buff *skb,
++				 struct mt76_rx_status *status,
++				 __le32 *rxv)
++{
++	static const struct ieee80211_radiotap_he_mu mu_known = {
++		.flags1 = HE_BITS(MU_FLAGS1_SIG_B_MCS_KNOWN) |
++			  HE_BITS(MU_FLAGS1_SIG_B_DCM_KNOWN) |
++			  HE_BITS(MU_FLAGS1_CH1_RU_KNOWN) |
++			  HE_BITS(MU_FLAGS1_SIG_B_SYMS_USERS_KNOWN) |
++			  HE_BITS(MU_FLAGS1_SIG_B_COMP_KNOWN),
++		.flags2 = HE_BITS(MU_FLAGS2_BW_FROM_SIG_A_BW_KNOWN) |
++			  HE_BITS(MU_FLAGS2_PUNC_FROM_SIG_A_BW_KNOWN),
++	};
++	struct ieee80211_radiotap_he_mu *he_mu = NULL;
++
++	he_mu = skb_push(skb, sizeof(mu_known));
++	memcpy(he_mu, &mu_known, sizeof(mu_known));
++
++#define MU_PREP(f, v)	le16_encode_bits(v, IEEE80211_RADIOTAP_HE_MU_##f)
++
++	he_mu->flags1 |= MU_PREP(FLAGS1_SIG_B_MCS, status->rate_idx);
++	if (status->he_dcm)
++		he_mu->flags1 |= MU_PREP(FLAGS1_SIG_B_DCM, status->he_dcm);
++
++	he_mu->flags2 |= MU_PREP(FLAGS2_BW_FROM_SIG_A_BW, status->bw) |
++			 MU_PREP(FLAGS2_SIG_B_SYMS_USERS,
++				 le32_get_bits(rxv[2], MT_CRXV_HE_NUM_USER));
++
++	he_mu->ru_ch1[0] = FIELD_GET(MT_CRXV_HE_RU0, cpu_to_le32(rxv[3]));
++
++	if (status->bw >= RATE_INFO_BW_40) {
++		he_mu->flags1 |= HE_BITS(MU_FLAGS1_CH2_RU_KNOWN);
++		he_mu->ru_ch2[0] =
++			FIELD_GET(MT_CRXV_HE_RU1, cpu_to_le32(rxv[3]));
++	}
++
++	if (status->bw >= RATE_INFO_BW_80) {
++		he_mu->ru_ch1[1] =
++			FIELD_GET(MT_CRXV_HE_RU2, cpu_to_le32(rxv[3]));
++		he_mu->ru_ch2[1] =
++			FIELD_GET(MT_CRXV_HE_RU3, cpu_to_le32(rxv[3]));
++	}
++}
++
+ static void
+ mt7921_mac_decode_he_radiotap(struct sk_buff *skb,
+ 			      struct mt76_rx_status *status,
+ 			      __le32 *rxv, u32 phy)
+ {
+-	/* TODO: struct ieee80211_radiotap_he_mu */
+ 	static const struct ieee80211_radiotap_he known = {
+ 		.data1 = HE_BITS(DATA1_DATA_MCS_KNOWN) |
+ 			 HE_BITS(DATA1_DATA_DCM_KNOWN) |
+@@ -193,6 +237,7 @@ mt7921_mac_decode_he_radiotap(struct sk_buff *skb,
+ 			 HE_BITS(DATA1_CODING_KNOWN) |
+ 			 HE_BITS(DATA1_LDPC_XSYMSEG_KNOWN) |
+ 			 HE_BITS(DATA1_DOPPLER_KNOWN) |
++			 HE_BITS(DATA1_SPTL_REUSE_KNOWN) |
+ 			 HE_BITS(DATA1_BSS_COLOR_KNOWN),
+ 		.data2 = HE_BITS(DATA2_GI_KNOWN) |
+ 			 HE_BITS(DATA2_TXBF_KNOWN) |
+@@ -207,9 +252,12 @@ mt7921_mac_decode_he_radiotap(struct sk_buff *skb,
+ 
+ 	he->data3 = HE_PREP(DATA3_BSS_COLOR, BSS_COLOR, rxv[14]) |
+ 		    HE_PREP(DATA3_LDPC_XSYMSEG, LDPC_EXT_SYM, rxv[2]);
++	he->data4 = HE_PREP(DATA4_SU_MU_SPTL_REUSE, SR_MASK, rxv[11]);
+ 	he->data5 = HE_PREP(DATA5_PE_DISAMBIG, PE_DISAMBIG, rxv[2]) |
+ 		    le16_encode_bits(ltf_size,
+ 				     IEEE80211_RADIOTAP_HE_DATA5_LTF_SIZE);
++	if (cpu_to_le32(rxv[0]) & MT_PRXV_TXBF)
++		he->data5 |= HE_BITS(DATA5_TXBF);
+ 	he->data6 = HE_PREP(DATA6_TXOP, TXOP_DUR, rxv[14]) |
+ 		    HE_PREP(DATA6_DOPPLER, DOPPLER, rxv[14]);
+ 
+@@ -217,8 +265,7 @@ mt7921_mac_decode_he_radiotap(struct sk_buff *skb,
+ 	case MT_PHY_TYPE_HE_SU:
+ 		he->data1 |= HE_BITS(DATA1_FORMAT_SU) |
+ 			     HE_BITS(DATA1_UL_DL_KNOWN) |
+-			     HE_BITS(DATA1_BEAM_CHANGE_KNOWN) |
+-			     HE_BITS(DATA1_SPTL_REUSE_KNOWN);
++			     HE_BITS(DATA1_BEAM_CHANGE_KNOWN);
+ 
+ 		he->data3 |= HE_PREP(DATA3_BEAM_CHANGE, BEAM_CHNG, rxv[14]) |
+ 			     HE_PREP(DATA3_UL_DL, UPLINK, rxv[2]);
+@@ -232,17 +279,15 @@ mt7921_mac_decode_he_radiotap(struct sk_buff *skb,
+ 		break;
+ 	case MT_PHY_TYPE_HE_MU:
+ 		he->data1 |= HE_BITS(DATA1_FORMAT_MU) |
+-			     HE_BITS(DATA1_UL_DL_KNOWN) |
+-			     HE_BITS(DATA1_SPTL_REUSE_KNOWN);
++			     HE_BITS(DATA1_UL_DL_KNOWN);
+ 
+ 		he->data3 |= HE_PREP(DATA3_UL_DL, UPLINK, rxv[2]);
+-		he->data4 |= HE_PREP(DATA4_SU_MU_SPTL_REUSE, SR_MASK, rxv[11]);
++		he->data4 |= HE_PREP(DATA4_MU_STA_ID, MU_AID, rxv[7]);
+ 
+ 		mt7921_mac_decode_he_radiotap_ru(status, he, rxv);
+ 		break;
+ 	case MT_PHY_TYPE_HE_TB:
+ 		he->data1 |= HE_BITS(DATA1_FORMAT_TRIG) |
+-			     HE_BITS(DATA1_SPTL_REUSE_KNOWN) |
+ 			     HE_BITS(DATA1_SPTL_REUSE2_KNOWN) |
+ 			     HE_BITS(DATA1_SPTL_REUSE3_KNOWN) |
+ 			     HE_BITS(DATA1_SPTL_REUSE4_KNOWN);
+@@ -606,9 +651,13 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 
+ 	mt7921_mac_assoc_rssi(dev, skb);
+ 
+-	if (rxv && status->flag & RX_FLAG_RADIOTAP_HE)
++	if (rxv && status->flag & RX_FLAG_RADIOTAP_HE) {
+ 		mt7921_mac_decode_he_radiotap(skb, status, rxv, mode);
+ 
++		if (status->flag & RX_FLAG_RADIOTAP_HE_MU)
++			mt7921_mac_decode_he_mu_radiotap(skb, status, rxv);
++	}
++
+ 	if (!status->wcid || !ieee80211_is_data_qos(fc))
+ 		return 0;
+ 
+@@ -735,8 +784,9 @@ mt7921_mac_write_txwi_80211(struct mt7921_dev *dev, __le32 *txwi,
+ static void mt7921_update_txs(struct mt76_wcid *wcid, __le32 *txwi)
+ {
+ 	struct mt7921_sta *msta = container_of(wcid, struct mt7921_sta, wcid);
+-	u32 pid, frame_type = FIELD_GET(MT_TXD2_FRAME_TYPE, txwi[2]);
++	u32 pid, frame_type;
+ 
++	frame_type = FIELD_GET(MT_TXD2_FRAME_TYPE, le32_to_cpu(txwi[2]));
+ 	if (!(frame_type & (IEEE80211_FTYPE_DATA >> 2)))
+ 		return;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+index 3af67fac213df..f0194c8780372 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+@@ -116,6 +116,7 @@ enum rx_pkt_type {
+ #define MT_PRXV_TX_DCM			BIT(4)
+ #define MT_PRXV_TX_ER_SU_106T		BIT(5)
+ #define MT_PRXV_NSTS			GENMASK(9, 7)
++#define MT_PRXV_TXBF			BIT(10)
+ #define MT_PRXV_HT_AD_CODE		BIT(11)
+ #define MT_PRXV_FRAME_MODE		GENMASK(14, 12)
+ #define MT_PRXV_SGI			GENMASK(16, 15)
+@@ -138,8 +139,15 @@ enum rx_pkt_type {
+ #define MT_CRXV_HE_LTF_SIZE		GENMASK(18, 17)
+ #define MT_CRXV_HE_LDPC_EXT_SYM		BIT(20)
+ #define MT_CRXV_HE_PE_DISAMBIG		BIT(23)
++#define MT_CRXV_HE_NUM_USER		GENMASK(30, 24)
+ #define MT_CRXV_HE_UPLINK		BIT(31)
+ 
++#define MT_CRXV_HE_RU0			GENMASK(7, 0)
++#define MT_CRXV_HE_RU1			GENMASK(15, 8)
++#define MT_CRXV_HE_RU2			GENMASK(23, 16)
++#define MT_CRXV_HE_RU3			GENMASK(31, 24)
++#define MT_CRXV_HE_MU_AID		GENMASK(30, 20)
++
+ #define MT_CRXV_HE_SR_MASK		GENMASK(11, 8)
+ #define MT_CRXV_HE_SR1_MASK		GENMASK(16, 12)
+ #define MT_CRXV_HE_SR2_MASK             GENMASK(20, 17)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 9fbaacc67cfad..506a1909ce6d5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -157,6 +157,7 @@ mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 			  struct sk_buff *skb, int seq)
+ {
+ 	struct mt7921_mcu_rxd *rxd;
++	int mcu_cmd = cmd & MCU_CMD_MASK;
+ 	int ret = 0;
+ 
+ 	if (!skb) {
+@@ -194,6 +195,9 @@ mt7921_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 		skb_pull(skb, sizeof(*rxd));
+ 		event = (struct mt7921_mcu_uni_event *)skb->data;
+ 		ret = le32_to_cpu(event->status);
++		/* skip invalid event */
++		if (mcu_cmd != event->cid)
++			ret = -EAGAIN;
+ 		break;
+ 	}
+ 	case MCU_CMD_REG_READ: {
+@@ -316,11 +320,13 @@ mt7921_mcu_tx_rate_parse(struct mt76_phy *mphy,
+ 			 struct rate_info *rate, u16 r)
+ {
+ 	struct ieee80211_supported_band *sband;
+-	u16 flags = 0;
++	u16 flags = 0, rate_idx;
+ 	u8 txmode = FIELD_GET(MT_WTBL_RATE_TX_MODE, r);
+ 	u8 gi = 0;
+ 	u8 bw = 0;
++	bool cck = false;
+ 
++	memset(rate, 0, sizeof(*rate));
+ 	rate->mcs = FIELD_GET(MT_WTBL_RATE_MCS, r);
+ 	rate->nss = FIELD_GET(MT_WTBL_RATE_NSS, r) + 1;
+ 
+@@ -345,13 +351,18 @@ mt7921_mcu_tx_rate_parse(struct mt76_phy *mphy,
+ 
+ 	switch (txmode) {
+ 	case MT_PHY_TYPE_CCK:
++		cck = true;
++		fallthrough;
+ 	case MT_PHY_TYPE_OFDM:
+ 		if (mphy->chandef.chan->band == NL80211_BAND_5GHZ)
+ 			sband = &mphy->sband_5g.sband;
+ 		else
+ 			sband = &mphy->sband_2g.sband;
+ 
+-		rate->legacy = sband->bitrates[rate->mcs].bitrate;
++		rate_idx = FIELD_GET(MT_TX_RATE_IDX, r);
++		rate_idx = mt76_get_rate(mphy->dev, sband, rate_idx,
++					 cck);
++		rate->legacy = sband->bitrates[rate_idx].bitrate;
+ 		break;
+ 	case MT_PHY_TYPE_HT:
+ 	case MT_PHY_TYPE_HT_GF:
+@@ -532,7 +543,8 @@ mt7921_mcu_tx_done_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ 		peer.g8 = !!(sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
+ 		peer.g16 = !!(sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160);
+ 		mt7921_mcu_tx_rate_parse(mphy->mt76, &peer,
+-					 &msta->stats.tx_rate, event->tx_rate);
++					 &msta->stats.tx_rate,
++					 le16_to_cpu(event->tx_rate));
+ 
+ 		spin_lock_bh(&dev->sta_poll_lock);
+ 		break;
+@@ -619,7 +631,7 @@ mt7921_mcu_sta_key_tlv(struct mt7921_sta *msta, struct sk_buff *skb,
+ 		u8 cipher;
+ 
+ 		cipher = mt7921_mcu_get_cipher(key->cipher);
+-		if (cipher == MT_CIPHER_NONE)
++		if (cipher == MCU_CIPHER_NONE)
+ 			return -EOPNOTSUPP;
+ 
+ 		sec_key = &sec->key[0];
+@@ -815,7 +827,7 @@ out:
+ 	default:
+ 		ret = -EAGAIN;
+ 		dev_err(dev->mt76.dev, "Failed to release patch semaphore\n");
+-		goto out;
++		break;
+ 	}
+ 	release_firmware(fw);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+index de3c091f67368..42e7271848956 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+@@ -296,11 +296,11 @@ struct mt7921_txpwr_event {
+ struct mt7921_mcu_tx_done_event {
+ 	u8 pid;
+ 	u8 status;
+-	u16 seq;
++	__le16 seq;
+ 
+ 	u8 wlan_idx;
+ 	u8 tx_cnt;
+-	u16 tx_rate;
++	__le16 tx_rate;
+ 
+ 	u8 flag;
+ 	u8 tid;
+@@ -312,9 +312,9 @@ struct mt7921_mcu_tx_done_event {
+ 	u8 reason;
+ 	u8 rsv0[1];
+ 
+-	u32 delay;
+-	u32 timestamp;
+-	u32 applied_flag;
++	__le32 delay;
++	__le32 timestamp;
++	__le32 applied_flag;
+ 
+ 	u8 txs[28];
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+index b6944c867a573..26fb118237626 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+@@ -96,6 +96,10 @@
+ #define MT_WF_MIB_BASE(_band)		((_band) ? 0xa4800 : 0x24800)
+ #define MT_WF_MIB(_band, ofs)		(MT_WF_MIB_BASE(_band) + (ofs))
+ 
++#define MT_MIB_SCR1(_band)		MT_WF_MIB(_band, 0x004)
++#define MT_MIB_TXDUR_EN			BIT(8)
++#define MT_MIB_RXDUR_EN			BIT(9)
++
+ #define MT_MIB_SDR3(_band)		MT_WF_MIB(_band, 0x698)
+ #define MT_MIB_SDR3_FCS_ERR_MASK	GENMASK(31, 16)
+ 
+@@ -108,9 +112,9 @@
+ #define MT_MIB_SDR34(_band)		MT_WF_MIB(_band, 0x090)
+ #define MT_MIB_MU_BF_TX_CNT		GENMASK(15, 0)
+ 
+-#define MT_MIB_SDR36(_band)		MT_WF_MIB(_band, 0x098)
++#define MT_MIB_SDR36(_band)		MT_WF_MIB(_band, 0x054)
+ #define MT_MIB_SDR36_TXTIME_MASK	GENMASK(23, 0)
+-#define MT_MIB_SDR37(_band)		MT_WF_MIB(_band, 0x09c)
++#define MT_MIB_SDR37(_band)		MT_WF_MIB(_band, 0x058)
+ #define MT_MIB_SDR37_RXTIME_MASK	GENMASK(23, 0)
+ 
+ #define MT_MIB_DR8(_band)		MT_WF_MIB(_band, 0x0c0)
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index 96973ec7bd9ac..87c14969c75fa 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -129,8 +129,7 @@ static void cfg_scan_result(enum scan_event scan_event,
+ 						info->frame_len,
+ 						(s32)info->rssi * 100,
+ 						GFP_KERNEL);
+-		if (!bss)
+-			cfg80211_put_bss(wiphy, bss);
++		cfg80211_put_bss(wiphy, bss);
+ 	} else if (scan_event == SCAN_EVENT_DONE) {
+ 		mutex_lock(&priv->scan_req_lock);
+ 
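
The wilc1000 fix inverts a backwards test: the reference returned by cfg80211_inform_bss_frame() was only dropped when the pointer was NULL, i.e. never. Since cfg80211_put_bss() ignores a NULL argument, the unconditional put is correct. Minimal usage sketch:

#include <net/cfg80211.h>

static void report_scan_hit(struct wiphy *wiphy,
			    struct ieee80211_channel *chan,
			    struct ieee80211_mgmt *mgmt, size_t len,
			    s32 signal_mbm)
{
	struct cfg80211_bss *bss;

	bss = cfg80211_inform_bss_frame(wiphy, chan, mgmt, len,
					signal_mbm, GFP_KERNEL);

	/* drop the reference the inform call took for us; NULL-safe */
	cfg80211_put_bss(wiphy, bss);
}
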
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
+index 585784258c665..4efab907a3ac6 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c
+@@ -28,7 +28,7 @@ u8 rtl818x_ioread8_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits8, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits8, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits8;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -45,7 +45,7 @@ u16 rtl818x_ioread16_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits16, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits16, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits16;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -62,7 +62,7 @@ u32 rtl818x_ioread32_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_rcvctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_GET_REG, RTL8187_REQT_READ,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits32, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits32, sizeof(val), 500);
+ 
+ 	val = priv->io_dmabuf->bits32;
+ 	mutex_unlock(&priv->io_mutex);
+@@ -79,7 +79,7 @@ void rtl818x_iowrite8_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits8, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits8, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -93,7 +93,7 @@ void rtl818x_iowrite16_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits16, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits16, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -107,7 +107,7 @@ void rtl818x_iowrite32_idx(struct rtl8187_priv *priv,
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			(unsigned long)addr, idx & 0x03,
+-			&priv->io_dmabuf->bits32, sizeof(val), HZ / 2);
++			&priv->io_dmabuf->bits32, sizeof(val), 500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ }
+@@ -183,7 +183,7 @@ static void rtl8225_write_8051(struct ieee80211_hw *dev, u8 addr, __le16 data)
+ 	usb_control_msg(priv->udev, usb_sndctrlpipe(priv->udev, 0),
+ 			RTL8187_REQ_SET_REG, RTL8187_REQT_WRITE,
+ 			addr, 0x8225, &priv->io_dmabuf->bits16, sizeof(data),
+-			HZ / 2);
++			500);
+ 
+ 	mutex_unlock(&priv->io_mutex);
+ 
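
Every rtl8187 hunk above makes the same correction: the final argument of usb_control_msg() is a timeout in milliseconds, not jiffies, so HZ / 2 only meant 500 ms when CONFIG_HZ happened to be 1000. A minimal call with the fixed constant (the request constant is illustrative; real code passes a DMA-safe buffer as the driver does):

#include <linux/usb.h>

static int read_reg8(struct usb_device *udev, u16 addr, u8 *dmabuf)
{
	/* returns the byte count or a negative errno; the timeout is
	 * now a plain 500 ms regardless of the kernel's tick rate
	 */
	return usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
			       0x05 /* GET_REG, illustrative */,
			       USB_DIR_IN | USB_TYPE_VENDOR,
			       addr, 0, dmabuf, sizeof(*dmabuf), 500);
}
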
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index e6399519584bd..a384fc3a4f2b0 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -1556,12 +1556,10 @@ static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
+ 	u32 i;
+ 	u16 idx = 0;
+ 	u16 ctl;
+-	u8 rcr;
+ 
+-	rcr = rtw_read8(rtwdev, REG_RCR + 2);
+ 	ctl = rtw_read16(rtwdev, REG_PKTBUF_DBG_CTRL) & 0xf000;
+ 	/* disable rx clock gate */
+-	rtw_write8(rtwdev, REG_RCR, rcr | BIT(3));
++	rtw_write32_set(rtwdev, REG_RCR, BIT_DISGCLK);
+ 
+ 	do {
+ 		rtw_write16(rtwdev, REG_PKTBUF_DBG_CTRL, start_pg | ctl);
+@@ -1580,7 +1578,8 @@ static void rtw_fw_read_fifo_page(struct rtw_dev *rtwdev, u32 offset, u32 size,
+ 
+ out:
+ 	rtw_write16(rtwdev, REG_PKTBUF_DBG_CTRL, ctl);
+-	rtw_write8(rtwdev, REG_RCR + 2, rcr);
++	/* restore rx clock gate */
++	rtw_write32_clr(rtwdev, REG_RCR, BIT_DISGCLK);
+ }
+ 
+ static void rtw_fw_read_fifo(struct rtw_dev *rtwdev, enum rtw_fw_fifo_sel sel,
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index f5ce75095e904..c0fb1e446245f 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -406,6 +406,7 @@
+ #define BIT_MFBEN		BIT(22)
+ #define BIT_DISCHKPPDLLEN	BIT(21)
+ #define BIT_PKTCTL_DLEN		BIT(20)
++#define BIT_DISGCLK		BIT(19)
+ #define BIT_TIM_PARSER_EN	BIT(18)
+ #define BIT_BC_MD_EN		BIT(17)
+ #define BIT_UC_MD_EN		BIT(16)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_core.c b/drivers/net/wireless/rsi/rsi_91x_core.c
+index a48e616e0fb91..6bfaab48b507d 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_core.c
++++ b/drivers/net/wireless/rsi/rsi_91x_core.c
+@@ -399,6 +399,8 @@ void rsi_core_xmit(struct rsi_common *common, struct sk_buff *skb)
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+ 	tx_params = (struct skb_info *)info->driver_data;
++	/* info->driver_data and info->control part of union so make copy */
++	tx_params->have_key = !!info->control.hw_key;
+ 	wh = (struct ieee80211_hdr *)&skb->data[0];
+ 	tx_params->sta_id = 0;
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index f4a26f16f00f4..dca81a4bbdd7f 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -203,7 +203,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		wh->frame_control |= cpu_to_le16(RSI_SET_PS_ENABLE);
+ 
+ 	if ((!(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) &&
+-	    info->control.hw_key) {
++	    tx_params->have_key) {
+ 		if (rsi_is_cipher_wep(common))
+ 			ieee80211_size += 4;
+ 		else
+@@ -214,15 +214,17 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 			RSI_WIFI_DATA_Q);
+ 	data_desc->header_len = ieee80211_size;
+ 
+-	if (common->min_rate != RSI_RATE_AUTO) {
++	if (common->rate_config[common->band].fixed_enabled) {
+ 		/* Send fixed rate */
++		u16 fixed_rate = common->rate_config[common->band].fixed_hw_rate;
++
+ 		data_desc->frame_info = cpu_to_le16(RATE_INFO_ENABLE);
+-		data_desc->rate_info = cpu_to_le16(common->min_rate);
++		data_desc->rate_info = cpu_to_le16(fixed_rate);
+ 
+ 		if (conf_is_ht40(&common->priv->hw->conf))
+ 			data_desc->bbp_info = cpu_to_le16(FULL40M_ENABLE);
+ 
+-		if ((common->vif_info[0].sgi) && (common->min_rate & 0x100)) {
++		if (common->vif_info[0].sgi && (fixed_rate & 0x100)) {
+ 		       /* Only MCS rates */
+ 			data_desc->rate_info |=
+ 				cpu_to_le16(ENABLE_SHORTGI_RATE);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index b66975f545675..e70c1c7fdf595 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -510,7 +510,6 @@ static int rsi_mac80211_add_interface(struct ieee80211_hw *hw,
+ 	if ((vif->type == NL80211_IFTYPE_AP) ||
+ 	    (vif->type == NL80211_IFTYPE_P2P_GO)) {
+ 		rsi_send_rx_filter_frame(common, DISALLOW_BEACONS);
+-		common->min_rate = RSI_RATE_AUTO;
+ 		for (i = 0; i < common->max_stations; i++)
+ 			common->stations[i].sta = NULL;
+ 	}
+@@ -1228,20 +1227,32 @@ static int rsi_mac80211_set_rate_mask(struct ieee80211_hw *hw,
+ 				      struct ieee80211_vif *vif,
+ 				      const struct cfg80211_bitrate_mask *mask)
+ {
++	const unsigned int mcs_offset = ARRAY_SIZE(rsi_rates);
+ 	struct rsi_hw *adapter = hw->priv;
+ 	struct rsi_common *common = adapter->priv;
+-	enum nl80211_band band = hw->conf.chandef.chan->band;
++	int i;
+ 
+ 	mutex_lock(&common->mutex);
+-	common->fixedrate_mask[band] = 0;
+ 
+-	if (mask->control[band].legacy == 0xfff) {
+-		common->fixedrate_mask[band] =
+-			(mask->control[band].ht_mcs[0] << 12);
+-	} else {
+-		common->fixedrate_mask[band] =
+-			mask->control[band].legacy;
++	for (i = 0; i < ARRAY_SIZE(common->rate_config); i++) {
++		struct rsi_rate_config *cfg = &common->rate_config[i];
++		u32 bm;
++
++		bm = mask->control[i].legacy | (mask->control[i].ht_mcs[0] << mcs_offset);
++		if (hweight32(bm) == 1) { /* single rate */
++			int rate_index = ffs(bm) - 1;
++
++			if (rate_index < mcs_offset)
++				cfg->fixed_hw_rate = rsi_rates[rate_index].hw_value;
++			else
++				cfg->fixed_hw_rate = rsi_mcsrates[rate_index - mcs_offset];
++			cfg->fixed_enabled = true;
++		} else {
++			cfg->configured_mask = bm;
++			cfg->fixed_enabled = false;
++		}
+ 	}
++
+ 	mutex_unlock(&common->mutex);
+ 
+ 	return 0;
+@@ -1378,46 +1389,6 @@ void rsi_indicate_pkt_to_os(struct rsi_common *common,
+ 	ieee80211_rx_irqsafe(hw, skb);
+ }
+ 
+-static void rsi_set_min_rate(struct ieee80211_hw *hw,
+-			     struct ieee80211_sta *sta,
+-			     struct rsi_common *common)
+-{
+-	u8 band = hw->conf.chandef.chan->band;
+-	u8 ii;
+-	u32 rate_bitmap;
+-	bool matched = false;
+-
+-	common->bitrate_mask[band] = sta->supp_rates[band];
+-
+-	rate_bitmap = (common->fixedrate_mask[band] & sta->supp_rates[band]);
+-
+-	if (rate_bitmap & 0xfff) {
+-		/* Find out the min rate */
+-		for (ii = 0; ii < ARRAY_SIZE(rsi_rates); ii++) {
+-			if (rate_bitmap & BIT(ii)) {
+-				common->min_rate = rsi_rates[ii].hw_value;
+-				matched = true;
+-				break;
+-			}
+-		}
+-	}
+-
+-	common->vif_info[0].is_ht = sta->ht_cap.ht_supported;
+-
+-	if ((common->vif_info[0].is_ht) && (rate_bitmap >> 12)) {
+-		for (ii = 0; ii < ARRAY_SIZE(rsi_mcsrates); ii++) {
+-			if ((rate_bitmap >> 12) & BIT(ii)) {
+-				common->min_rate = rsi_mcsrates[ii];
+-				matched = true;
+-				break;
+-			}
+-		}
+-	}
+-
+-	if (!matched)
+-		common->min_rate = 0xffff;
+-}
+-
+ /**
+  * rsi_mac80211_sta_add() - This function notifies driver about a peer getting
+  *			    connected.
+@@ -1516,9 +1487,9 @@ static int rsi_mac80211_sta_add(struct ieee80211_hw *hw,
+ 
+ 	if ((vif->type == NL80211_IFTYPE_STATION) ||
+ 	    (vif->type == NL80211_IFTYPE_P2P_CLIENT)) {
+-		rsi_set_min_rate(hw, sta, common);
++		common->bitrate_mask[common->band] = sta->supp_rates[common->band];
++		common->vif_info[0].is_ht = sta->ht_cap.ht_supported;
+ 		if (sta->ht_cap.ht_supported) {
+-			common->vif_info[0].is_ht = true;
+ 			common->bitrate_mask[NL80211_BAND_2GHZ] =
+ 					sta->supp_rates[NL80211_BAND_2GHZ];
+ 			if ((sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ||
+@@ -1592,7 +1563,6 @@ static int rsi_mac80211_sta_remove(struct ieee80211_hw *hw,
+ 		bss->qos = sta->wme;
+ 		common->bitrate_mask[NL80211_BAND_2GHZ] = 0;
+ 		common->bitrate_mask[NL80211_BAND_5GHZ] = 0;
+-		common->min_rate = 0xffff;
+ 		common->vif_info[0].is_ht = false;
+ 		common->vif_info[0].sgi = false;
+ 		common->vif_info[0].seq_start = 0;
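
The reworked rsi_mac80211_set_rate_mask() above folds the legacy and MCS masks into one bitmap and treats exactly one set bit as a fixed-rate request. A standalone sketch of that bit arithmetic (N_LEGACY stands in for ARRAY_SIZE(rsi_rates); the rate tables themselves are omitted):

#include <stdio.h>
#include <stdint.h>

#define N_LEGACY 12	/* bits 0..11: legacy rates; bit 12+: MCS rates */

int main(void)
{
	uint32_t legacy = 0;
	uint32_t ht_mcs = 1u << 7;	/* user pinned MCS7 */
	uint32_t bm = legacy | (ht_mcs << N_LEGACY);

	if (__builtin_popcount(bm) == 1) {	/* single rate selected */
		int idx = __builtin_ffs(bm) - 1;
		if (idx < N_LEGACY)
			printf("fixed legacy rate, table index %d\n", idx);
		else
			printf("fixed HT rate, MCS%d\n", idx - N_LEGACY);
	} else {
		printf("mask 0x%08x kept for auto-rate filtering\n", bm);
	}
	return 0;
}
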
+diff --git a/drivers/net/wireless/rsi/rsi_91x_main.c b/drivers/net/wireless/rsi/rsi_91x_main.c
+index d98483298555c..f1bf71e6c6081 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_main.c
++++ b/drivers/net/wireless/rsi/rsi_91x_main.c
+@@ -211,9 +211,10 @@ int rsi_read_pkt(struct rsi_common *common, u8 *rx_pkt, s32 rcv_pkt_len)
+ 			bt_pkt_type = frame_desc[offset + BT_RX_PKT_TYPE_OFST];
+ 			if (bt_pkt_type == BT_CARD_READY_IND) {
+ 				rsi_dbg(INFO_ZONE, "BT Card ready recvd\n");
+-				if (rsi_bt_ops.attach(common, &g_proto_ops))
+-					rsi_dbg(ERR_ZONE,
+-						"Failed to attach BT module\n");
++				if (common->fsm_state == FSM_MAC_INIT_DONE)
++					rsi_attach_bt(common);
++				else
++					common->bt_defer_attach = true;
+ 			} else {
+ 				if (common->bt_adapter)
+ 					rsi_bt_ops.recv_pkt(common->bt_adapter,
+@@ -278,6 +279,15 @@ void rsi_set_bt_context(void *priv, void *bt_context)
+ }
+ #endif
+ 
++void rsi_attach_bt(struct rsi_common *common)
++{
++#ifdef CONFIG_RSI_COEX
++	if (rsi_bt_ops.attach(common, &g_proto_ops))
++		rsi_dbg(ERR_ZONE,
++			"Failed to attach BT module\n");
++#endif
++}
++
+ /**
+  * rsi_91x_init() - This function initializes os interface operations.
+  * @oper_mode: One of DEV_OPMODE_*.
+@@ -359,6 +369,7 @@ struct rsi_hw *rsi_91x_init(u16 oper_mode)
+ 	if (common->coex_mode > 1) {
+ 		if (rsi_coex_attach(common)) {
+ 			rsi_dbg(ERR_ZONE, "Failed to init coex module\n");
++			rsi_kill_thread(&common->tx_thread);
+ 			goto err;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+index 891fd5f0fa765..0848f7a7e76c6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+@@ -276,7 +276,7 @@ static void rsi_set_default_parameters(struct rsi_common *common)
+ 	common->channel_width = BW_20MHZ;
+ 	common->rts_threshold = IEEE80211_MAX_RTS_THRESHOLD;
+ 	common->channel = 1;
+-	common->min_rate = 0xffff;
++	memset(&common->rate_config, 0, sizeof(common->rate_config));
+ 	common->fsm_state = FSM_CARD_NOT_READY;
+ 	common->iface_down = true;
+ 	common->endpoint = EP_2GHZ_20MHZ;
+@@ -1314,7 +1314,7 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 	u8 band = hw->conf.chandef.chan->band;
+ 	u8 num_supported_rates = 0;
+ 	u8 rate_table_offset, rate_offset = 0;
+-	u32 rate_bitmap;
++	u32 rate_bitmap, configured_rates;
+ 	u16 *selected_rates, min_rate;
+ 	bool is_ht = false, is_sgi = false;
+ 	u16 frame_len = sizeof(struct rsi_auto_rate);
+@@ -1364,6 +1364,10 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 			is_sgi = true;
+ 	}
+ 
++	/* Limit to any rates administratively configured by cfg80211 */
++	configured_rates = common->rate_config[band].configured_mask ?: 0xffffffff;
++	rate_bitmap &= configured_rates;
++
+ 	if (band == NL80211_BAND_2GHZ) {
+ 		if ((rate_bitmap == 0) && (is_ht))
+ 			min_rate = RSI_RATE_MCS0;
+@@ -1389,10 +1393,13 @@ static int rsi_send_auto_rate_request(struct rsi_common *common,
+ 	num_supported_rates = jj;
+ 
+ 	if (is_ht) {
+-		for (ii = 0; ii < ARRAY_SIZE(mcs); ii++)
+-			selected_rates[jj++] = mcs[ii];
+-		num_supported_rates += ARRAY_SIZE(mcs);
+-		rate_offset += ARRAY_SIZE(mcs);
++		for (ii = 0; ii < ARRAY_SIZE(mcs); ii++) {
++			if (configured_rates & BIT(ii + ARRAY_SIZE(rsi_rates))) {
++				selected_rates[jj++] = mcs[ii];
++				num_supported_rates++;
++				rate_offset++;
++			}
++		}
+ 	}
+ 
+ 	sort(selected_rates, jj, sizeof(u16), &rsi_compare, NULL);
+@@ -1482,7 +1489,7 @@ void rsi_inform_bss_status(struct rsi_common *common,
+ 					      qos_enable,
+ 					      aid, sta_id,
+ 					      vif);
+-		if (common->min_rate == 0xffff)
++		if (!common->rate_config[common->band].fixed_enabled)
+ 			rsi_send_auto_rate_request(common, sta, sta_id, vif);
+ 		if (opmode == RSI_OPMODE_STA &&
+ 		    !(assoc_cap & WLAN_CAPABILITY_PRIVACY) &&
+@@ -2071,6 +2078,9 @@ static int rsi_handle_ta_confirm_type(struct rsi_common *common,
+ 				if (common->reinit_hw) {
+ 					complete(&common->wlan_init_completion);
+ 				} else {
++					if (common->bt_defer_attach)
++						rsi_attach_bt(common);
++
+ 					return rsi_mac80211_attach(common);
+ 				}
+ 			}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index e0c502bc42707..9f16128e4ffab 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -24,10 +24,7 @@
+ /* Default operating mode is wlan STA + BT */
+ static u16 dev_oper_mode = DEV_OPMODE_STA_BT_DUAL;
+ module_param(dev_oper_mode, ushort, 0444);
+-MODULE_PARM_DESC(dev_oper_mode,
+-		 "1[Wi-Fi], 4[BT], 8[BT LE], 5[Wi-Fi STA + BT classic]\n"
+-		 "9[Wi-Fi STA + BT LE], 13[Wi-Fi STA + BT classic + BT LE]\n"
+-		 "6[AP + BT classic], 14[AP + BT classic + BT LE]");
++MODULE_PARM_DESC(dev_oper_mode, DEV_OPMODE_PARAM_DESC);
+ 
+ /**
+  * rsi_sdio_set_cmd52_arg() - This function prepares cmd 52 read/write arg.
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index e97f92915ed98..6821ea9918956 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -25,10 +25,7 @@
+ /* Default operating mode is wlan STA + BT */
+ static u16 dev_oper_mode = DEV_OPMODE_STA_BT_DUAL;
+ module_param(dev_oper_mode, ushort, 0444);
+-MODULE_PARM_DESC(dev_oper_mode,
+-		 "1[Wi-Fi], 4[BT], 8[BT LE], 5[Wi-Fi STA + BT classic]\n"
+-		 "9[Wi-Fi STA + BT LE], 13[Wi-Fi STA + BT classic + BT LE]\n"
+-		 "6[AP + BT classic], 14[AP + BT classic + BT LE]");
++MODULE_PARM_DESC(dev_oper_mode, DEV_OPMODE_PARAM_DESC);
+ 
+ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num, gfp_t flags);
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_hal.h b/drivers/net/wireless/rsi/rsi_hal.h
+index d044a440fa080..5b07262a97408 100644
+--- a/drivers/net/wireless/rsi/rsi_hal.h
++++ b/drivers/net/wireless/rsi/rsi_hal.h
+@@ -28,6 +28,17 @@
+ #define DEV_OPMODE_AP_BT		6
+ #define DEV_OPMODE_AP_BT_DUAL		14
+ 
++#define DEV_OPMODE_PARAM_DESC		\
++	__stringify(DEV_OPMODE_WIFI_ALONE)	"[Wi-Fi alone], "	\
++	__stringify(DEV_OPMODE_BT_ALONE)	"[BT classic alone], "	\
++	__stringify(DEV_OPMODE_BT_LE_ALONE)	"[BT LE alone], "	\
++	__stringify(DEV_OPMODE_BT_DUAL)		"[BT classic + BT LE alone], " \
++	__stringify(DEV_OPMODE_STA_BT)		"[Wi-Fi STA + BT classic], " \
++	__stringify(DEV_OPMODE_STA_BT_LE)	"[Wi-Fi STA + BT LE], "	\
++	__stringify(DEV_OPMODE_STA_BT_DUAL)	"[Wi-Fi STA + BT classic + BT LE], " \
++	__stringify(DEV_OPMODE_AP_BT)		"[Wi-Fi AP + BT classic], "	\
++	__stringify(DEV_OPMODE_AP_BT_DUAL)	"[Wi-Fi AP + BT classic + BT LE]"
++
+ #define FLASH_WRITE_CHUNK_SIZE		(4 * 1024)
+ #define FLASH_SECTOR_SIZE		(4 * 1024)
+ 
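
DEV_OPMODE_PARAM_DESC above leans on two-level macro stringification plus adjacent string-literal concatenation, so the module parameter help text can never drift from the numeric defines. A minimal standalone sketch of the same trick (only two modes shown):

#include <stdio.h>

#define __stringify_1(x)	#x			/* stringify... */
#define __stringify(x)		__stringify_1(x)	/* ...after expansion */

#define DEV_OPMODE_WIFI_ALONE	1
#define DEV_OPMODE_STA_BT	5

#define DEV_OPMODE_PARAM_DESC \
	__stringify(DEV_OPMODE_WIFI_ALONE)	"[Wi-Fi alone], " \
	__stringify(DEV_OPMODE_STA_BT)		"[Wi-Fi STA + BT classic]"

int main(void)
{
	/* Adjacent literals concatenate: "1[Wi-Fi alone], 5[...]" */
	puts(DEV_OPMODE_PARAM_DESC);
	return 0;
}
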
+diff --git a/drivers/net/wireless/rsi/rsi_main.h b/drivers/net/wireless/rsi/rsi_main.h
+index 0f535850a3836..dcf8fb40698b7 100644
+--- a/drivers/net/wireless/rsi/rsi_main.h
++++ b/drivers/net/wireless/rsi/rsi_main.h
+@@ -61,6 +61,7 @@ enum RSI_FSM_STATES {
+ extern u32 rsi_zone_enabled;
+ extern __printf(2, 3) void rsi_dbg(u32 zone, const char *fmt, ...);
+ 
++#define RSI_MAX_BANDS			2
+ #define RSI_MAX_VIFS                    3
+ #define NUM_EDCA_QUEUES                 4
+ #define IEEE80211_ADDR_LEN              6
+@@ -139,6 +140,7 @@ struct skb_info {
+ 	u8 internal_hdr_size;
+ 	struct ieee80211_vif *vif;
+ 	u8 vap_id;
++	bool have_key;
+ };
+ 
+ enum edca_queue {
+@@ -229,6 +231,12 @@ struct rsi_9116_features {
+ 	u32 ps_options;
+ };
+ 
++struct rsi_rate_config {
++	u32 configured_mask;	/* set by mac80211: bits 0-11 legacy, bit 12+ MCS */
++	u16 fixed_hw_rate;
++	bool fixed_enabled;
++};
++
+ struct rsi_common {
+ 	struct rsi_hw *priv;
+ 	struct vif_priv vif_info[RSI_MAX_VIFS];
+@@ -254,8 +262,8 @@ struct rsi_common {
+ 	u8 channel_width;
+ 
+ 	u16 rts_threshold;
+-	u16 bitrate_mask[2];
+-	u32 fixedrate_mask[2];
++	u32 bitrate_mask[RSI_MAX_BANDS];
++	struct rsi_rate_config rate_config[RSI_MAX_BANDS];
+ 
+ 	u8 rf_reset;
+ 	struct transmit_q_stats tx_stats;
+@@ -276,7 +284,6 @@ struct rsi_common {
+ 	u8 mac_id;
+ 	u8 radio_id;
+ 	u16 rate_pwr[20];
+-	u16 min_rate;
+ 
+ 	/* WMM algo related */
+ 	u8 selected_qnum;
+@@ -320,6 +327,7 @@ struct rsi_common {
+ 	struct ieee80211_vif *roc_vif;
+ 
+ 	bool eapol4_confirm;
++	bool bt_defer_attach;
+ 	void *bt_adapter;
+ 
+ 	struct cfg80211_scan_request *hwscan;
+@@ -401,5 +409,6 @@ struct rsi_host_intf_ops {
+ 
+ enum rsi_host_intf rsi_get_host_intf(void *priv);
+ void rsi_set_bt_context(void *priv, void *bt_context);
++void rsi_attach_bt(struct rsi_common *common);
+ 
+ #endif
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 44275908d61a2..0169f7743b6d3 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1669,6 +1669,10 @@ static int netfront_resume(struct xenbus_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+ 
++	netif_tx_lock_bh(info->netdev);
++	netif_device_detach(info->netdev);
++	netif_tx_unlock_bh(info->netdev);
++
+ 	xennet_disconnect_backend(info);
+ 	return 0;
+ }
+@@ -2283,6 +2287,10 @@ static int xennet_connect(struct net_device *dev)
+ 	 * domain a kick because we've probably just requeued some
+ 	 * packets.
+ 	 */
++	netif_tx_lock_bh(np->netdev);
++	netif_device_attach(np->netdev);
++	netif_tx_unlock_bh(np->netdev);
++
+ 	netif_carrier_on(np->netdev);
+ 	for (j = 0; j < num_queues; ++j) {
+ 		queue = &np->queues[j];
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index cd64bfe204025..e8468e349e6fd 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -2218,7 +2218,7 @@ static int pn533_fill_fragment_skbs(struct pn533 *dev, struct sk_buff *skb)
+ 		frag = pn533_alloc_skb(dev, frag_size);
+ 		if (!frag) {
+ 			skb_queue_purge(&dev->fragment_skb);
+-			break;
++			return -ENOMEM;
+ 		}
+ 
+ 		if (!dev->tgt_mode) {
+@@ -2287,7 +2287,7 @@ static int pn533_transceive(struct nfc_dev *nfc_dev,
+ 		/* jumbo frame ? */
+ 		if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) {
+ 			rc = pn533_fill_fragment_skbs(dev, skb);
+-			if (rc <= 0)
++			if (rc < 0)
+ 				goto error;
+ 
+ 			skb = skb_dequeue(&dev->fragment_skb);
+@@ -2355,7 +2355,7 @@ static int pn533_tm_send(struct nfc_dev *nfc_dev, struct sk_buff *skb)
+ 	/* let's split in multiple chunks if size's too big */
+ 	if (skb->len > PN533_CMD_DATAEXCH_DATA_MAXLEN) {
+ 		rc = pn533_fill_fragment_skbs(dev, skb);
+-		if (rc <= 0)
++		if (rc < 0)
+ 			goto error;
+ 
+ 		/* get the first skb */
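
The pn533 hunks above tighten the caller checks from rc <= 0 to rc < 0, matching the usual kernel convention that a negative errno means failure and any non-negative value is success. A standalone sketch of why rc <= 0 misclassifies a legitimate zero return (function and values are illustrative):

#include <stdio.h>

#define ENOMEM 12

/* Returns a count (>= 0) on success, -ENOMEM on allocation failure. */
static int fill_fragments(int frames, int oom)
{
	return oom ? -ENOMEM : frames;
}

int main(void)
{
	int rc = fill_fragments(0, 0);	/* succeeded, zero frames */

	if (rc <= 0)
		printf("old check: rc=%d wrongly treated as an error\n", rc);
	if (rc < 0)
		printf("new check: error %d\n", rc);
	else
		printf("new check: rc=%d is success\n", rc);
	return 0;
}
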
+diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
+index 92dec49522972..3fd1bdb9fc05b 100644
+--- a/drivers/nvdimm/btt.c
++++ b/drivers/nvdimm/btt.c
+@@ -1538,7 +1538,6 @@ static int btt_blk_init(struct btt *btt)
+ 		int rc = nd_integrity_init(btt->btt_disk, btt_meta_size(btt));
+ 
+ 		if (rc) {
+-			del_gendisk(btt->btt_disk);
+ 			blk_cleanup_disk(btt->btt_disk);
+ 			return rc;
+ 		}
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index abc9bdfd48bde..a189155d2c60c 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -138,13 +138,12 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ {
+ 	struct nvme_ns *ns;
+ 
+-	mutex_lock(&ctrl->scan_lock);
+ 	down_read(&ctrl->namespaces_rwsem);
+-	list_for_each_entry(ns, &ctrl->namespaces, list)
+-		if (nvme_mpath_clear_current_path(ns))
+-			kblockd_schedule_work(&ns->head->requeue_work);
++	list_for_each_entry(ns, &ctrl->namespaces, list) {
++		nvme_mpath_clear_current_path(ns);
++		kblockd_schedule_work(&ns->head->requeue_work);
++	}
+ 	up_read(&ctrl->namespaces_rwsem);
+-	mutex_unlock(&ctrl->scan_lock);
+ }
+ 
+ static bool nvme_path_is_disabled(struct nvme_ns *ns)
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 042c594bc57e2..0498801542eb6 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1095,11 +1095,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ 		return ret;
+ 
+ 	if (ctrl->ctrl.icdoff) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->ctrl.device, "icdoff is not supported!\n");
+ 		goto destroy_admin;
+ 	}
+ 
+ 	if (!(ctrl->ctrl.sgls & (1 << 2))) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->ctrl.device,
+ 			"Mandatory keyed sgls are not supported!\n");
+ 		goto destroy_admin;
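
The two nvme_rdma_setup_ctrl() hunks above show a recurring error-path rule: set ret before jumping to a cleanup label, or the function tears down correctly but still reports stale success. A standalone sketch (names are illustrative):

#include <stdio.h>

#define EOPNOTSUPP 95

static int setup_ctrl(int icdoff)
{
	int ret = 0;	/* still 0 from the earlier successful steps */

	if (icdoff) {
		ret = -EOPNOTSUPP;	/* the fix: assign before the goto */
		fprintf(stderr, "icdoff is not supported!\n");
		goto destroy_admin;
	}
	return 0;

destroy_admin:
	/* ...teardown...; without the assignment we would return 0 here */
	return ret;
}

int main(void)
{
	printf("setup_ctrl(1) = %d\n", setup_ctrl(1));	/* -95, not 0 */
	return 0;
}
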
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 3e5053c5ec836..a05e99cc89275 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1553,6 +1553,8 @@ static void nvmet_port_release(struct config_item *item)
+ {
+ 	struct nvmet_port *port = to_nvmet_port(item);
+ 
++	/* Let in-flight controller teardown complete */
++	flush_scheduled_work();
+ 	list_del(&port->global_entry);
+ 
+ 	kfree(port->ana_state);
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 891174ccd44bb..f1eedbf493d5b 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -1818,12 +1818,36 @@ restart:
+ 	mutex_unlock(&nvmet_rdma_queue_mutex);
+ }
+ 
++static void nvmet_rdma_destroy_port_queues(struct nvmet_rdma_port *port)
++{
++	struct nvmet_rdma_queue *queue, *tmp;
++	struct nvmet_port *nport = port->nport;
++
++	mutex_lock(&nvmet_rdma_queue_mutex);
++	list_for_each_entry_safe(queue, tmp, &nvmet_rdma_queue_list,
++				 queue_list) {
++		if (queue->port != nport)
++			continue;
++
++		list_del_init(&queue->queue_list);
++		__nvmet_rdma_queue_disconnect(queue);
++	}
++	mutex_unlock(&nvmet_rdma_queue_mutex);
++}
++
+ static void nvmet_rdma_disable_port(struct nvmet_rdma_port *port)
+ {
+ 	struct rdma_cm_id *cm_id = xchg(&port->cm_id, NULL);
+ 
+ 	if (cm_id)
+ 		rdma_destroy_id(cm_id);
++
++	/*
++	 * Destroy the remaining queues, which do not belong to any
++	 * controller yet. Doing it here, after the RDMA-CM ID has been
++	 * destroyed, guarantees that no new queue will be created.
++	 */
++	nvmet_rdma_destroy_port_queues(port);
+ }
+ 
+ static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port)
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d641bfa07a801..84c387e4bf431 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1096,7 +1096,7 @@ recv:
+ 	}
+ 
+ 	if (queue->hdr_digest &&
+-	    nvmet_tcp_verify_hdgst(queue, &queue->pdu, queue->offset)) {
++	    nvmet_tcp_verify_hdgst(queue, &queue->pdu, hdr->hlen)) {
+ 		nvmet_tcp_fatal_error(queue); /* fatal */
+ 		return -EPROTO;
+ 	}
+@@ -1428,6 +1428,7 @@ static void nvmet_tcp_uninit_data_in_cmds(struct nvmet_tcp_queue *queue)
+ 
+ static void nvmet_tcp_release_queue_work(struct work_struct *w)
+ {
++	struct page *page;
+ 	struct nvmet_tcp_queue *queue =
+ 		container_of(w, struct nvmet_tcp_queue, release_work);
+ 
+@@ -1447,6 +1448,8 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
+ 		nvmet_tcp_free_crypto(queue);
+ 	ida_simple_remove(&nvmet_tcp_queue_ida, queue->idx);
+ 
++	page = virt_to_head_page(queue->pf_cache.va);
++	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+ 	kfree(queue);
+ }
+ 
+@@ -1737,6 +1740,17 @@ err_port:
+ 	return ret;
+ }
+ 
++static void nvmet_tcp_destroy_port_queues(struct nvmet_tcp_port *port)
++{
++	struct nvmet_tcp_queue *queue;
++
++	mutex_lock(&nvmet_tcp_queue_mutex);
++	list_for_each_entry(queue, &nvmet_tcp_queue_list, queue_list)
++		if (queue->port == port)
++			kernel_sock_shutdown(queue->sock, SHUT_RDWR);
++	mutex_unlock(&nvmet_tcp_queue_mutex);
++}
++
+ static void nvmet_tcp_remove_port(struct nvmet_port *nport)
+ {
+ 	struct nvmet_tcp_port *port = nport->priv;
+@@ -1746,6 +1760,11 @@ static void nvmet_tcp_remove_port(struct nvmet_port *nport)
+ 	port->sock->sk->sk_user_data = NULL;
+ 	write_unlock_bh(&port->sock->sk->sk_callback_lock);
+ 	cancel_work_sync(&port->accept_work);
++	/*
++	 * Destroy the remaining queues, which do not belong to any
++	 * controller yet.
++	 */
++	nvmet_tcp_destroy_port_queues(port);
+ 
+ 	sock_release(port->sock);
+ 	kfree(port);
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 8c056972a6ddc..5b85a2a3792ae 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1688,19 +1688,19 @@ static void __init of_unittest_overlay_gpio(void)
+ 	 */
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-B-input) hogged as input\n");
++		     "gpio-<<int>> (line-B-input): hogged as input\n");
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-A-input) hogged as input\n");
++		     "gpio-<<int>> (line-A-input): hogged as input\n");
+ 
+ 	ret = platform_driver_register(&unittest_gpio_driver);
+ 	if (unittest(ret == 0, "could not register unittest gpio driver\n"))
+ 		return;
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-A-input) hogged as input\n");
++		   "gpio-<<int>> (line-A-input): hogged as input\n");
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-B-input) hogged as input\n");
++		   "gpio-<<int>> (line-B-input): hogged as input\n");
+ 
+ 	unittest(probe_pass_count + 2 == unittest_gpio_probe_pass_count,
+ 		 "unittest_gpio_probe() failed or not called\n");
+@@ -1727,7 +1727,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 	chip_request_count = unittest_gpio_chip_request_count;
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-D-input) hogged as input\n");
++		     "gpio-<<int>> (line-D-input): hogged as input\n");
+ 
+ 	/* overlay_gpio_03 contains gpio node and child gpio hog node */
+ 
+@@ -1735,7 +1735,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 		 "Adding overlay 'overlay_gpio_03' failed\n");
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-D-input) hogged as input\n");
++		   "gpio-<<int>> (line-D-input): hogged as input\n");
+ 
+ 	unittest(probe_pass_count + 1 == unittest_gpio_probe_pass_count,
+ 		 "unittest_gpio_probe() failed or not called\n");
+@@ -1774,7 +1774,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 	 */
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+-		     "GPIO line <<int>> (line-C-input) hogged as input\n");
++		     "gpio-<<int>> (line-C-input): hogged as input\n");
+ 
+ 	/* overlay_gpio_04b contains child gpio hog node */
+ 
+@@ -1782,7 +1782,7 @@ static void __init of_unittest_overlay_gpio(void)
+ 		 "Adding overlay 'overlay_gpio_04b' failed\n");
+ 
+ 	EXPECT_END(KERN_INFO,
+-		   "GPIO line <<int>> (line-C-input) hogged as input\n");
++		   "gpio-<<int>> (line-C-input): hogged as input\n");
+ 
+ 	unittest(chip_request_count + 1 == unittest_gpio_chip_request_count,
+ 		 "unittest_gpio_chip_request() called %d times (expected 1 time)\n",
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 2a97c6535c4c6..c32ae7497392b 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -921,7 +921,7 @@ free_required_opps:
+ free_opp:
+ 	_opp_free(new_opp);
+ 
+-	return ERR_PTR(ret);
++	return ret ? ERR_PTR(ret) : NULL;
+ }
+ 
+ /* Initializes OPP tables based on new bindings */
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index ffb176d288cd9..918e11082e6a7 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -474,7 +474,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 		ret = clk_prepare_enable(clk);
+ 		if (ret) {
+ 			dev_err(dev, "failed to enable pcie_refclk\n");
+-			goto err_get_sync;
++			goto err_pcie_setup;
+ 		}
+ 		pcie->refclk = clk;
+ 
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
+index 5fee0f89ab594..a224afadbcc00 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
+@@ -127,6 +127,8 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
+ 			goto err_init;
+ 	}
+ 
++	return 0;
++
+  err_init:
+  err_get_sync:
+ 	pm_runtime_put_sync(dev);
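
The cdns_plat_pcie_probe() fix above adds the missing return 0 ahead of the error labels; without it a successful probe fell straight through into the cleanup code. A standalone sketch of the fall-through hazard:

#include <stdio.h>

static void pm_runtime_put(void)
{
	puts("cleanup: pm_runtime_put");	/* must only run on failure */
}

static int probe(int fail)
{
	if (fail)
		goto err_init;

	return 0;	/* the fix: stop here on success */

err_init:
	pm_runtime_put();
	return -1;
}

int main(void)
{
	printf("probe(0) = %d (no cleanup expected above)\n", probe(0));
	printf("probe(1) = %d\n", probe(1));
	return 0;
}
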
+diff --git a/drivers/pci/controller/dwc/pcie-uniphier.c b/drivers/pci/controller/dwc/pcie-uniphier.c
+index 7e8bad3267701..5cf699dbee7ca 100644
+--- a/drivers/pci/controller/dwc/pcie-uniphier.c
++++ b/drivers/pci/controller/dwc/pcie-uniphier.c
+@@ -168,30 +168,21 @@ static void uniphier_pcie_irq_enable(struct uniphier_pcie_priv *priv)
+ 	writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX);
+ }
+ 
+-static void uniphier_pcie_irq_ack(struct irq_data *d)
+-{
+-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+-	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+-	u32 val;
+-
+-	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_STATUS;
+-	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_STATUS_SHIFT);
+-	writel(val, priv->base + PCL_RCV_INTX);
+-}
+-
+ static void uniphier_pcie_irq_mask(struct irq_data *d)
+ {
+ 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+ 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
++	unsigned long flags;
+ 	u32 val;
+ 
++	raw_spin_lock_irqsave(&pp->lock, flags);
++
+ 	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_MASK;
+ 	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
+ 	writel(val, priv->base + PCL_RCV_INTX);
++
++	raw_spin_unlock_irqrestore(&pp->lock, flags);
+ }
+ 
+ static void uniphier_pcie_irq_unmask(struct irq_data *d)
+@@ -199,17 +190,20 @@ static void uniphier_pcie_irq_unmask(struct irq_data *d)
+ 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+ 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
++	unsigned long flags;
+ 	u32 val;
+ 
++	raw_spin_lock_irqsave(&pp->lock, flags);
++
+ 	val = readl(priv->base + PCL_RCV_INTX);
+-	val &= ~PCL_RCV_INTX_ALL_MASK;
+ 	val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
+ 	writel(val, priv->base + PCL_RCV_INTX);
++
++	raw_spin_unlock_irqrestore(&pp->lock, flags);
+ }
+ 
+ static struct irq_chip uniphier_pcie_irq_chip = {
+ 	.name = "PCI",
+-	.irq_ack = uniphier_pcie_irq_ack,
+ 	.irq_mask = uniphier_pcie_irq_mask,
+ 	.irq_unmask = uniphier_pcie_irq_unmask,
+ };
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0e4a46af82288..b153d21dd579d 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -31,10 +31,8 @@
+ /* PCIe core registers */
+ #define PCIE_CORE_DEV_ID_REG					0x0
+ #define PCIE_CORE_CMD_STATUS_REG				0x4
+-#define     PCIE_CORE_CMD_IO_ACCESS_EN				BIT(0)
+-#define     PCIE_CORE_CMD_MEM_ACCESS_EN				BIT(1)
+-#define     PCIE_CORE_CMD_MEM_IO_REQ_EN				BIT(2)
+ #define PCIE_CORE_DEV_REV_REG					0x8
++#define PCIE_CORE_EXP_ROM_BAR_REG				0x30
+ #define PCIE_CORE_PCIEXP_CAP					0xc0
+ #define PCIE_CORE_ERR_CAPCTL_REG				0x118
+ #define     PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX			BIT(5)
+@@ -99,6 +97,7 @@
+ #define     PCIE_CORE_CTRL2_MSI_ENABLE		BIT(10)
+ #define PCIE_CORE_REF_CLK_REG			(CONTROL_BASE_ADDR + 0x14)
+ #define     PCIE_CORE_REF_CLK_TX_ENABLE		BIT(1)
++#define     PCIE_CORE_REF_CLK_RX_ENABLE		BIT(2)
+ #define PCIE_MSG_LOG_REG			(CONTROL_BASE_ADDR + 0x30)
+ #define PCIE_ISR0_REG				(CONTROL_BASE_ADDR + 0x40)
+ #define PCIE_MSG_PM_PME_MASK			BIT(7)
+@@ -106,18 +105,19 @@
+ #define     PCIE_ISR0_MSI_INT_PENDING		BIT(24)
+ #define     PCIE_ISR0_INTX_ASSERT(val)		BIT(16 + (val))
+ #define     PCIE_ISR0_INTX_DEASSERT(val)	BIT(20 + (val))
+-#define	    PCIE_ISR0_ALL_MASK			GENMASK(26, 0)
++#define     PCIE_ISR0_ALL_MASK			GENMASK(31, 0)
+ #define PCIE_ISR1_REG				(CONTROL_BASE_ADDR + 0x48)
+ #define PCIE_ISR1_MASK_REG			(CONTROL_BASE_ADDR + 0x4C)
+ #define     PCIE_ISR1_POWER_STATE_CHANGE	BIT(4)
+ #define     PCIE_ISR1_FLUSH			BIT(5)
+ #define     PCIE_ISR1_INTX_ASSERT(val)		BIT(8 + (val))
+-#define     PCIE_ISR1_ALL_MASK			GENMASK(11, 4)
++#define     PCIE_ISR1_ALL_MASK			GENMASK(31, 0)
+ #define PCIE_MSI_ADDR_LOW_REG			(CONTROL_BASE_ADDR + 0x50)
+ #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
+ #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
++#define     PCIE_MSI_DATA_MASK			GENMASK(15, 0)
+ 
+ /* PCIe window configuration */
+ #define OB_WIN_BASE_ADDR			0x4c00
+@@ -164,8 +164,50 @@
+ #define CFG_REG					(LMI_BASE_ADDR + 0x0)
+ #define     LTSSM_SHIFT				24
+ #define     LTSSM_MASK				0x3f
+-#define     LTSSM_L0				0x10
+ #define     RC_BAR_CONFIG			0x300
++
++/* LTSSM values in CFG_REG */
++enum {
++	LTSSM_DETECT_QUIET			= 0x0,
++	LTSSM_DETECT_ACTIVE			= 0x1,
++	LTSSM_POLLING_ACTIVE			= 0x2,
++	LTSSM_POLLING_COMPLIANCE		= 0x3,
++	LTSSM_POLLING_CONFIGURATION		= 0x4,
++	LTSSM_CONFIG_LINKWIDTH_START		= 0x5,
++	LTSSM_CONFIG_LINKWIDTH_ACCEPT		= 0x6,
++	LTSSM_CONFIG_LANENUM_ACCEPT		= 0x7,
++	LTSSM_CONFIG_LANENUM_WAIT		= 0x8,
++	LTSSM_CONFIG_COMPLETE			= 0x9,
++	LTSSM_CONFIG_IDLE			= 0xa,
++	LTSSM_RECOVERY_RCVR_LOCK		= 0xb,
++	LTSSM_RECOVERY_SPEED			= 0xc,
++	LTSSM_RECOVERY_RCVR_CFG			= 0xd,
++	LTSSM_RECOVERY_IDLE			= 0xe,
++	LTSSM_L0				= 0x10,
++	LTSSM_RX_L0S_ENTRY			= 0x11,
++	LTSSM_RX_L0S_IDLE			= 0x12,
++	LTSSM_RX_L0S_FTS			= 0x13,
++	LTSSM_TX_L0S_ENTRY			= 0x14,
++	LTSSM_TX_L0S_IDLE			= 0x15,
++	LTSSM_TX_L0S_FTS			= 0x16,
++	LTSSM_L1_ENTRY				= 0x17,
++	LTSSM_L1_IDLE				= 0x18,
++	LTSSM_L2_IDLE				= 0x19,
++	LTSSM_L2_TRANSMIT_WAKE			= 0x1a,
++	LTSSM_DISABLED				= 0x20,
++	LTSSM_LOOPBACK_ENTRY_MASTER		= 0x21,
++	LTSSM_LOOPBACK_ACTIVE_MASTER		= 0x22,
++	LTSSM_LOOPBACK_EXIT_MASTER		= 0x23,
++	LTSSM_LOOPBACK_ENTRY_SLAVE		= 0x24,
++	LTSSM_LOOPBACK_ACTIVE_SLAVE		= 0x25,
++	LTSSM_LOOPBACK_EXIT_SLAVE		= 0x26,
++	LTSSM_HOT_RESET				= 0x27,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE0	= 0x28,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE1	= 0x29,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE2	= 0x2a,
++	LTSSM_RECOVERY_EQUALIZATION_PHASE3	= 0x2b,
++};
++
+ #define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)
+ 
+ /* PCIe core controller registers */
+@@ -198,7 +240,7 @@
+ #define     PCIE_IRQ_MSI_INT2_DET		BIT(21)
+ #define     PCIE_IRQ_RC_DBELL_DET		BIT(22)
+ #define     PCIE_IRQ_EP_STATUS			BIT(23)
+-#define     PCIE_IRQ_ALL_MASK			0xfff0fb
++#define     PCIE_IRQ_ALL_MASK			GENMASK(31, 0)
+ #define     PCIE_IRQ_ENABLE_INTS_MASK		PCIE_IRQ_CORE_INT
+ 
+ /* Transaction types */
+@@ -262,13 +304,49 @@ static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
+ 	return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
+ }
+ 
+-static int advk_pcie_link_up(struct advk_pcie *pcie)
++static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
+ {
+-	u32 val, ltssm_state;
++	u32 val;
++	u8 ltssm_state;
+ 
+ 	val = advk_readl(pcie, CFG_REG);
+ 	ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK;
+-	return ltssm_state >= LTSSM_L0;
++	return ltssm_state;
++}
++
++static inline bool advk_pcie_link_up(struct advk_pcie *pcie)
++{
++	/* check if LTSSM is in normal operation - some L* state */
++	/* Check whether the LTSSM is in normal operation, i.e. some L* state */
++	return ltssm_state >= LTSSM_L0 && ltssm_state < LTSSM_DISABLED;
++}
++
++static inline bool advk_pcie_link_active(struct advk_pcie *pcie)
++{
++	/*
++	 * According to PCIe Base Specification 3.0, Table 4-14 (Link
++	 * Status Mapped to the LTSSM) and sec. 4.2.6.3.6 (Configuration.Idle),
++	 * Link Up maps to the LTSSM Configuration.Idle, Recovery, L0,
++	 * L0s, L1 and L2 states. And according to sec. 3.2.1 (Data Link
++	 * Control and Management State Machine Rules), DL Up status is
++	 * reported in the DL_Active state.
++	 */
++	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
++	return ltssm_state >= LTSSM_CONFIG_IDLE && ltssm_state < LTSSM_DISABLED;
++}
++
++static inline bool advk_pcie_link_training(struct advk_pcie *pcie)
++{
++	/*
++	 * According to PCIe Base Specification 3.0, Table 4-14 (Link
++	 * Status Mapped to the LTSSM), Link Training maps to the LTSSM
++	 * Configuration and Recovery states.
++	 */
++	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
++	return ((ltssm_state >= LTSSM_CONFIG_LINKWIDTH_START &&
++		 ltssm_state < LTSSM_L0) ||
++		(ltssm_state >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 &&
++		 ltssm_state <= LTSSM_RECOVERY_EQUALIZATION_PHASE3));
+ }
+ 
+ static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
+@@ -291,7 +369,7 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
+ 	size_t retries;
+ 
+ 	for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) {
+-		if (!advk_pcie_link_up(pcie))
++		if (advk_pcie_link_training(pcie))
+ 			break;
+ 		udelay(RETRAIN_WAIT_USLEEP_US);
+ 	}
+@@ -451,9 +529,15 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	u32 reg;
+ 	int i;
+ 
+-	/* Enable TX */
++	/*
++	 * Configure PCIe Reference clock. Direction is from the PCIe
++	 * controller to the endpoint card, so enable transmitting of
++	 * Reference clock differential signal off-chip and disable
++	 * receiving off-chip differential signal.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
+ 	reg |= PCIE_CORE_REF_CLK_TX_ENABLE;
++	reg &= ~PCIE_CORE_REF_CLK_RX_ENABLE;
+ 	advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG);
+ 
+ 	/* Set to Direct mode */
+@@ -477,6 +561,31 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
+ 	advk_writel(pcie, reg, VENDOR_ID_REG);
+ 
++	/*
++	 * Change Class Code of PCI Bridge device to PCI Bridge (0x600400),
++	 * because the default value is Mass storage controller (0x010400).
++	 *
++	 * Note that this Aardvark PCI Bridge does not have a compliant Type 1
++	 * Configuration Space, and it cannot even be accessed via Aardvark's
++	 * PCI config space access method. Something resembling a config space
++	 * is available in internal Aardvark registers starting at offset 0x0
++	 * and is reported as Type 0. In the range 0x10 - 0x34 it has entirely
++	 * different registers.
++	 *
++	 * Therefore the driver emulates a PCI Bridge, serving configuration
++	 * space accesses from internal Aardvark registers or from an emulated
++	 * configuration buffer.
++	 */
++	reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG);
++	reg &= ~0xffffff00;
++	reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
++	advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG);
++
++	/* Disable Root Bridge I/O space, memory space and bus mastering */
++	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
++	reg &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
++	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
++
+ 	/* Set Advanced Error Capabilities and Control PF0 register */
+ 	reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
+ 		PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
+@@ -488,8 +597,9 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
+ 	reg &= ~PCI_EXP_DEVCTL_RELAX_EN;
+ 	reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN;
++	reg &= ~PCI_EXP_DEVCTL_PAYLOAD;
+ 	reg &= ~PCI_EXP_DEVCTL_READRQ;
+-	reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */
++	reg |= PCI_EXP_DEVCTL_PAYLOAD_512B;
+ 	reg |= PCI_EXP_DEVCTL_READRQ_512B;
+ 	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
+ 
+@@ -574,19 +684,6 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 		advk_pcie_disable_ob_win(pcie, i);
+ 
+ 	advk_pcie_train_link(pcie);
+-
+-	/*
+-	 * FIXME: The following register update is suspicious. This register is
+-	 * applicable only when the PCI controller is configured for Endpoint
+-	 * mode, not as a Root Complex. But apparently when this code is
+-	 * removed, some cards stop working. This should be investigated and
+-	 * a comment explaining this should be put here.
+-	 */
+-	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+-	reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
+-		PCIE_CORE_CMD_IO_ACCESS_EN |
+-		PCIE_CORE_CMD_MEM_IO_REQ_EN;
+-	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
+@@ -682,7 +779,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u3
+ 	else
+ 		str_posted = "Posted";
+ 
+-	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
++	dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
+ 
+ 	return -EFAULT;
+@@ -707,6 +804,72 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+ 	return -ETIMEDOUT;
+ }
+ 
++static pci_bridge_emul_read_status_t
++advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
++				    int reg, u32 *value)
++{
++	struct advk_pcie *pcie = bridge->data;
++
++	switch (reg) {
++	case PCI_COMMAND:
++		*value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
++		return PCI_BRIDGE_EMUL_HANDLED;
++
++	case PCI_ROM_ADDRESS1:
++		*value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG);
++		return PCI_BRIDGE_EMUL_HANDLED;
++
++	case PCI_INTERRUPT_LINE: {
++		/*
++		 * Of the whole 32-bit register, only one bit is read back from
++		 * HW: PCI_BRIDGE_CTL_BUS_RESET.
++		 * All other bits are retrieved from the emulated config buffer.
++		 */
++		__le32 *cfgspace = (__le32 *)&bridge->conf;
++		u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
++		if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
++			val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
++		else
++			val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16);
++		*value = val;
++		return PCI_BRIDGE_EMUL_HANDLED;
++	}
++
++	default:
++		return PCI_BRIDGE_EMUL_NOT_HANDLED;
++	}
++}
++
++static void
++advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
++				     int reg, u32 old, u32 new, u32 mask)
++{
++	struct advk_pcie *pcie = bridge->data;
++
++	switch (reg) {
++	case PCI_COMMAND:
++		advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG);
++		break;
++
++	case PCI_ROM_ADDRESS1:
++		advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG);
++		break;
++
++	case PCI_INTERRUPT_LINE:
++		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
++			u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
++			if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
++				val |= HOT_RESET_GEN;
++			else
++				val &= ~HOT_RESET_GEN;
++			advk_writel(pcie, val, PCIE_CORE_CTRL1_REG);
++		}
++		break;
++
++	default:
++		break;
++	}
++}
+ 
+ static pci_bridge_emul_read_status_t
+ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+@@ -723,6 +886,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_EXP_RTCTL: {
+ 		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ 		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
++		*value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE;
+ 		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+@@ -734,12 +898,26 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+ 
++	case PCI_EXP_LNKCAP: {
++		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
++		/*
++		 * The PCI_EXP_LNKCAP_DLLLARC bit is hardwired to 0 in Aardvark HW.
++		 * But support for PCI_EXP_LNKSTA_DLLLA is emulated via the LTSSM
++		 * state, so explicitly enable the PCI_EXP_LNKCAP_DLLLARC flag.
++		 */
++		val |= PCI_EXP_LNKCAP_DLLLARC;
++		*value = val;
++		return PCI_BRIDGE_EMUL_HANDLED;
++	}
++
+ 	case PCI_EXP_LNKCTL: {
+ 		/* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */
+ 		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) &
+ 			~(PCI_EXP_LNKSTA_LT << 16);
+-		if (!advk_pcie_link_up(pcie))
++		if (advk_pcie_link_training(pcie))
+ 			val |= (PCI_EXP_LNKSTA_LT << 16);
++		if (advk_pcie_link_active(pcie))
++			val |= (PCI_EXP_LNKSTA_DLLLA << 16);
+ 		*value = val;
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	}
+@@ -747,7 +925,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ 	case PCI_CAP_LIST_ID:
+ 	case PCI_EXP_DEVCAP:
+ 	case PCI_EXP_DEVCTL:
+-	case PCI_EXP_LNKCAP:
+ 		*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+ 		return PCI_BRIDGE_EMUL_HANDLED;
+ 	default:
+@@ -794,6 +971,8 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ }
+ 
+ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
++	.read_base = advk_pci_bridge_emul_base_conf_read,
++	.write_base = advk_pci_bridge_emul_base_conf_write,
+ 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
+ 	.write_pcie = advk_pci_bridge_emul_pcie_conf_write,
+ };
+@@ -1082,7 +1261,7 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ 				    domain->host_data, handle_simple_irq,
+ 				    NULL, NULL);
+ 
+-	return hwirq;
++	return 0;
+ }
+ 
+ static void advk_msi_irq_domain_free(struct irq_domain *domain,
+@@ -1263,8 +1442,12 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ 		if (!(BIT(msi_idx) & msi_status))
+ 			continue;
+ 
++		/*
++		 * msi_idx contains bits [4:0] of msi_data, and msi_data
++		 * contains the 16-bit MSI interrupt number.
++		 */
+ 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
+-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF;
++		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
+ 		generic_handle_irq(msi_data);
+ 	}
+ 
+@@ -1286,12 +1469,6 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
+ 	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);
+ 
+-	if (!isr0_status && !isr1_status) {
+-		advk_writel(pcie, isr0_val, PCIE_ISR0_REG);
+-		advk_writel(pcie, isr1_val, PCIE_ISR1_REG);
+-		return;
+-	}
+-
+ 	/* Process MSI interrupts */
+ 	if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
+ 		advk_pcie_handle_msi(pcie);
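
The three aardvark predicates added above (link up, link active, link training) are pure range checks over the LTSSM value read from CFG_REG. A standalone sketch of the classification, with the state values copied from the enum in the hunk:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

enum {	/* subset of the LTSSM values defined above */
	LTSSM_CONFIG_LINKWIDTH_START		= 0x5,
	LTSSM_CONFIG_IDLE			= 0xa,
	LTSSM_L0				= 0x10,
	LTSSM_L1_IDLE				= 0x18,
	LTSSM_DISABLED				= 0x20,
	LTSSM_RECOVERY_EQUALIZATION_PHASE0	= 0x28,
	LTSSM_RECOVERY_EQUALIZATION_PHASE3	= 0x2b,
};

static bool link_up(uint8_t s)
{
	return s >= LTSSM_L0 && s < LTSSM_DISABLED;
}

static bool link_active(uint8_t s)
{
	return s >= LTSSM_CONFIG_IDLE && s < LTSSM_DISABLED;
}

static bool link_training(uint8_t s)
{
	return (s >= LTSSM_CONFIG_LINKWIDTH_START && s < LTSSM_L0) ||
	       (s >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 &&
		s <= LTSSM_RECOVERY_EQUALIZATION_PHASE3);
}

int main(void)
{
	const uint8_t states[] = { 0x05, 0x0a, 0x10, 0x18, 0x20 };

	for (unsigned i = 0; i < sizeof(states); i++)
		printf("ltssm 0x%02x: up=%d active=%d training=%d\n",
		       states[i], link_up(states[i]),
		       link_active(states[i]), link_training(states[i]));
	return 0;
}
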
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index fdaf86a888b73..db97cddfc85e1 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -431,8 +431,21 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ 	/* Clear the W1C bits */
+ 	new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
+ 
++	/* Save the new value with the cleared W1C bits into the cfgspace */
+ 	cfgspace[reg / 4] = cpu_to_le32(new);
+ 
++	/*
++	 * Clear the W1C bits not specified by the write mask, so that the
++	 * write_op() does not clear them.
++	 */
++	new &= ~(behavior[reg / 4].w1c & ~mask);
++
++	/*
++	 * Set the W1C bits specified by the write mask, so that write_op()
++	 * knows that they are to be cleared.
++	 */
++	new |= (value << shift) & (behavior[reg / 4].w1c & mask);
++
+ 	if (write_op)
+ 		write_op(bridge, reg, old, new, mask);
+ 
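
The pci-bridge-emul change above is all about write-1-to-clear (W1C) semantics: a W1C bit is cleared by writing 1 and left alone by writing 0, so the value handed to write_op() must carry a 1 for exactly the bits the caller asked to clear. A standalone sketch of the update arithmetic (register layout is illustrative; read-only/shift handling is omitted):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t w1c  = 0x0000ff00;	/* bits 8..15 are write-1-to-clear */
	uint32_t reg  = 0x00002301;	/* current contents: bits 0, 8, 9, 13 */
	uint32_t wval = 0x00000200;	/* guest write: clear bit 9 only */

	/* Stored value: W1C bits written as 1 are cleared, others persist. */
	uint32_t stored = (reg & ~(wval & w1c)) | (wval & ~w1c);

	/* Value for write_op(): W1C bits carry a 1 iff they should clear. */
	uint32_t for_hw = (stored & ~w1c) | (wval & w1c);

	printf("stored = 0x%08x (bit 9 gone, bits 8 and 13 kept)\n", stored);
	printf("for_hw = 0x%08x (only the requested W1C bit set)\n", for_hw);
	return 0;
}
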
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index a4eb0c042ca3e..e1de19fad9875 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3709,6 +3709,14 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask)
+ 	struct pci_dev *bridge;
+ 	u32 cap, ctl2;
+ 
++	/*
++	 * Per PCIe r5.0, sec 9.3.5.10, the AtomicOp Requester Enable bit
++	 * in Device Control 2 is reserved in VFs and the PF value applies
++	 * to all associated VFs.
++	 */
++	if (dev->is_virtfn)
++		return -EINVAL;
++
+ 	if (!pci_is_pcie(dev))
+ 		return -EINVAL;
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 8c3c1ef92171f..cef69b71a6f12 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3573,6 +3573,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003e, quirk_no_bus_reset);
+ 
+ /*
+  * Root port on some Cavium CN8xxx chips do not successfully complete a bus
+diff --git a/drivers/phy/microchip/sparx5_serdes.c b/drivers/phy/microchip/sparx5_serdes.c
+index 4076580fc2cd9..ab1b0986aa671 100644
+--- a/drivers/phy/microchip/sparx5_serdes.c
++++ b/drivers/phy/microchip/sparx5_serdes.c
+@@ -2475,10 +2475,10 @@ static int sparx5_serdes_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 	iomem = devm_ioremap(priv->dev, iores->start, resource_size(iores));
+-	if (IS_ERR(iomem)) {
++	if (!iomem) {
+ 		dev_err(priv->dev, "Unable to get serdes registers: %s\n",
+ 			iores->name);
+-		return PTR_ERR(iomem);
++		return -ENOMEM;
+ 	}
+ 	for (idx = 0; idx < ARRAY_SIZE(sparx5_serdes_iomap); idx++) {
+ 		struct sparx5_serdes_io_resource *iomap = &sparx5_serdes_iomap[idx];
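
The sparx5 fix above, and the mirrored at91-reset fix further down, hinge on the kernel's two error conventions: devm_ioremap() returns NULL on failure while devm_of_iomap() returns ERR_PTR(-errno), and testing the wrong one silently misses the failure since IS_ERR(NULL) is false. A standalone sketch (ERR_PTR machinery re-implemented here for illustration):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_ERRNO	4095
#define ENOMEM		12

static void *ERR_PTR(long err) { return (void *)err; }
static long PTR_ERR(const void *p) { return (long)p; }
static bool IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* NULL-on-failure convention, like devm_ioremap() */
static void *ioremap_like(bool fail) { return fail ? NULL : (void *)0x1000; }

/* ERR_PTR-on-failure convention, like devm_of_iomap() */
static void *of_iomap_like(bool fail)
{
	return fail ? ERR_PTR(-ENOMEM) : (void *)0x2000;
}

int main(void)
{
	void *a = ioremap_like(true);
	void *b = of_iomap_like(true);

	printf("IS_ERR(a)=%d -> IS_ERR misses a NULL failure\n", IS_ERR(a));
	printf("(b == NULL)=%d -> a NULL test misses ERR_PTR(%ld)\n",
	       b == NULL, PTR_ERR(b));
	return 0;
}
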
+diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+index 3c1d3b71c825b..f1d97fbd13318 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+@@ -561,7 +561,7 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ {
+ 	struct device *dev = &qphy->phy->dev;
+ 	const struct qusb2_phy_cfg *cfg = qphy->cfg;
+-	u8 *val;
++	u8 *val, hstx_trim;
+ 
+ 	/* efuse register is optional */
+ 	if (!qphy->cell)
+@@ -575,7 +575,13 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ 	 * set while configuring the phy.
+ 	 */
+ 	val = nvmem_cell_read(qphy->cell, NULL);
+-	if (IS_ERR(val) || !val[0]) {
++	if (IS_ERR(val)) {
++		dev_dbg(dev, "failed to read a valid hs-tx trim value\n");
++		return;
++	}
++	hstx_trim = val[0];
++	kfree(val);
++	if (!hstx_trim) {
+ 		dev_dbg(dev, "failed to read a valid hs-tx trim value\n");
+ 		return;
+ 	}
+@@ -583,12 +589,10 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
+ 	/* Fused TUNE1/2 value is the higher nibble only */
+ 	if (cfg->update_tune1_with_efuse)
+ 		qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],
+-				 val[0] << HSTX_TRIM_SHIFT,
+-				 HSTX_TRIM_MASK);
++				 hstx_trim << HSTX_TRIM_SHIFT, HSTX_TRIM_MASK);
+ 	else
+ 		qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],
+-				 val[0] << HSTX_TRIM_SHIFT,
+-				 HSTX_TRIM_MASK);
++				 hstx_trim << HSTX_TRIM_SHIFT, HSTX_TRIM_MASK);
+ }
+ 
+ static int qusb2_phy_set_mode(struct phy *phy,
+diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+index ae4bac024c7b1..7e61202aa234e 100644
+--- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
++++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
+@@ -33,7 +33,7 @@
+ 
+ #define USB2_PHY_USB_PHY_HS_PHY_CTRL_COMMON0	(0x54)
+ #define RETENABLEN				BIT(3)
+-#define FSEL_MASK				GENMASK(7, 5)
++#define FSEL_MASK				GENMASK(6, 4)
+ #define FSEL_DEFAULT				(0x3 << 4)
+ 
+ #define USB2_PHY_USB_PHY_HS_PHY_CTRL_COMMON1	(0x58)
+diff --git a/drivers/phy/ti/phy-gmii-sel.c b/drivers/phy/ti/phy-gmii-sel.c
+index 5fd2e8a08bfcf..d0ab69750c6b4 100644
+--- a/drivers/phy/ti/phy-gmii-sel.c
++++ b/drivers/phy/ti/phy-gmii-sel.c
+@@ -320,6 +320,8 @@ static int phy_gmii_sel_init_ports(struct phy_gmii_sel_priv *priv)
+ 		u64 size;
+ 
+ 		offset = of_get_address(dev->of_node, 0, &size, NULL);
++		if (!offset)
++			return -EINVAL;
+ 		priv->num_ports = size / sizeof(u32);
+ 		if (!priv->num_ports)
+ 			return -EINVAL;
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index a4ac87c8b4f8d..81dbb1723ff2e 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -2100,6 +2100,8 @@ int pinctrl_enable(struct pinctrl_dev *pctldev)
+ 	if (error) {
+ 		dev_err(pctldev->dev, "could not claim hogs: %i\n",
+ 			error);
++		pinctrl_free_pindescs(pctldev, pctldev->desc->pins,
++				      pctldev->desc->npins);
+ 		mutex_destroy(&pctldev->mutex);
+ 		kfree(pctldev);
+ 
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index c5fd75bbf5d97..80b67cd7c0086 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -445,6 +445,7 @@ static int amd_gpio_irq_set_wake(struct irq_data *d, unsigned int on)
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+ 	u32 wake_mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3);
++	int err;
+ 
+ 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 	pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
+@@ -457,6 +458,15 @@ static int amd_gpio_irq_set_wake(struct irq_data *d, unsigned int on)
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 
++	if (on)
++		err = enable_irq_wake(gpio_dev->irq);
++	else
++		err = disable_irq_wake(gpio_dev->irq);
++
++	if (err)
++		dev_err(&gpio_dev->pdev->dev, "failed to %s wake-up interrupt\n",
++			on ? "enable" : "disable");
++
+ 	return 0;
+ }
+ 
+@@ -932,7 +942,6 @@ static struct pinctrl_desc amd_pinctrl_desc = {
+ static int amd_gpio_probe(struct platform_device *pdev)
+ {
+ 	int ret = 0;
+-	int irq_base;
+ 	struct resource *res;
+ 	struct amd_gpio *gpio_dev;
+ 	struct gpio_irq_chip *girq;
+@@ -955,9 +964,9 @@ static int amd_gpio_probe(struct platform_device *pdev)
+ 	if (!gpio_dev->base)
+ 		return -ENOMEM;
+ 
+-	irq_base = platform_get_irq(pdev, 0);
+-	if (irq_base < 0)
+-		return irq_base;
++	gpio_dev->irq = platform_get_irq(pdev, 0);
++	if (gpio_dev->irq < 0)
++		return gpio_dev->irq;
+ 
+ #ifdef CONFIG_PM_SLEEP
+ 	gpio_dev->saved_regs = devm_kcalloc(&pdev->dev, amd_pinctrl_desc.npins,
+@@ -1020,7 +1029,7 @@ static int amd_gpio_probe(struct platform_device *pdev)
+ 		goto out2;
+ 	}
+ 
+-	ret = devm_request_irq(&pdev->dev, irq_base, amd_gpio_irq_handler,
++	ret = devm_request_irq(&pdev->dev, gpio_dev->irq, amd_gpio_irq_handler,
+ 			       IRQF_SHARED, KBUILD_MODNAME, gpio_dev);
+ 	if (ret)
+ 		goto out2;
+diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
+index 95e7634240422..1d43170736545 100644
+--- a/drivers/pinctrl/pinctrl-amd.h
++++ b/drivers/pinctrl/pinctrl-amd.h
+@@ -98,6 +98,7 @@ struct amd_gpio {
+ 	struct resource         *res;
+ 	struct platform_device  *pdev;
+ 	u32			*saved_regs;
++	int			irq;
+ };
+ 
+ /*  KERNCZ configuration*/
+diff --git a/drivers/pinctrl/pinctrl-equilibrium.c b/drivers/pinctrl/pinctrl-equilibrium.c
+index 38cc20fa9d5af..44e6973b2ea93 100644
+--- a/drivers/pinctrl/pinctrl-equilibrium.c
++++ b/drivers/pinctrl/pinctrl-equilibrium.c
+@@ -675,6 +675,11 @@ static int eqbr_build_functions(struct eqbr_pinctrl_drv_data *drvdata)
+ 		return ret;
+ 
+ 	for (i = 0; i < nr_funcs; i++) {
++
++		/* Ignore the same function with multiple groups */
++		/* Skip duplicate entries of a function that spans multiple groups */
++			continue;
++
+ 		ret = pinmux_generic_add_function(drvdata->pctl_dev,
+ 						  funcs[i].name,
+ 						  funcs[i].groups,
+@@ -815,7 +820,7 @@ static int pinctrl_reg(struct eqbr_pinctrl_drv_data *drvdata)
+ 
+ 	ret = eqbr_build_functions(drvdata);
+ 	if (ret) {
+-		dev_err(dev, "Failed to build groups\n");
++		dev_err(dev, "Failed to build functions\n");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 5ccc49b387f17..77d1dc0f5b9ba 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -886,7 +886,7 @@ static void __init sh_pfc_check_drive_reg(const struct sh_pfc_soc_info *info,
+ 		if (!field->pin && !field->offset && !field->size)
+ 			continue;
+ 
+-		mask = GENMASK(field->offset + field->size, field->offset);
++		mask = GENMASK(field->offset + field->size - 1, field->offset);
+ 		if (mask & seen)
+ 			sh_pfc_err("drive_reg 0x%x: field %u overlap\n",
+ 				   drive->reg, i);
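
The sh_pfc fix above, like the earlier FSEL_MASK change in phy-qcom-snps-femto-v2.c, comes down to GENMASK(h, l) being inclusive of both bounds: a field of size bits starting at offset is GENMASK(offset + size - 1, offset), and dropping the - 1 grabs one bit too many. A standalone check of the arithmetic (32-bit variant of the kernel macro):

#include <stdio.h>

/* Inclusive bit range [l, h], shaped like the kernel's GENMASK() */
#define GENMASK(h, l)	((~0u >> (31 - (h))) & (~0u << (l)))

int main(void)
{
	unsigned offset = 4, size = 3;	/* a 3-bit field at bits 4..6 */

	printf("GENMASK(offset + size - 1, offset) = 0x%08x (bits 4-6)\n",
	       GENMASK(offset + size - 1, offset));
	printf("GENMASK(offset + size,     offset) = 0x%08x (one bit too wide)\n",
	       GENMASK(offset + size, offset));
	return 0;
}
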
+diff --git a/drivers/platform/surface/surface_aggregator_registry.c b/drivers/platform/surface/surface_aggregator_registry.c
+index 4428c4330229a..1679811eff502 100644
+--- a/drivers/platform/surface/surface_aggregator_registry.c
++++ b/drivers/platform/surface/surface_aggregator_registry.c
+@@ -77,6 +77,42 @@ static const struct software_node ssam_node_bas_dtx = {
+ 	.parent = &ssam_node_root,
+ };
+ 
++/* HID keyboard (TID1). */
++static const struct software_node ssam_node_hid_tid1_keyboard = {
++	.name = "ssam:01:15:01:01:00",
++	.parent = &ssam_node_root,
++};
++
++/* HID pen stash (TID1; pen taken / stashed away events). */
++static const struct software_node ssam_node_hid_tid1_penstash = {
++	.name = "ssam:01:15:01:02:00",
++	.parent = &ssam_node_root,
++};
++
++/* HID touchpad (TID1). */
++static const struct software_node ssam_node_hid_tid1_touchpad = {
++	.name = "ssam:01:15:01:03:00",
++	.parent = &ssam_node_root,
++};
++
++/* HID device instance 6 (TID1, unknown HID device). */
++static const struct software_node ssam_node_hid_tid1_iid6 = {
++	.name = "ssam:01:15:01:06:00",
++	.parent = &ssam_node_root,
++};
++
++/* HID device instance 7 (TID1, unknown HID device). */
++static const struct software_node ssam_node_hid_tid1_iid7 = {
++	.name = "ssam:01:15:01:07:00",
++	.parent = &ssam_node_root,
++};
++
++/* HID system controls (TID1). */
++static const struct software_node ssam_node_hid_tid1_sysctrl = {
++	.name = "ssam:01:15:01:08:00",
++	.parent = &ssam_node_root,
++};
++
+ /* HID keyboard. */
+ static const struct software_node ssam_node_hid_main_keyboard = {
+ 	.name = "ssam:01:15:02:01:00",
+@@ -159,6 +195,21 @@ static const struct software_node *ssam_node_group_sl3[] = {
+ 	NULL,
+ };
+ 
++/* Devices for Surface Laptop Studio. */
++static const struct software_node *ssam_node_group_sls[] = {
++	&ssam_node_root,
++	&ssam_node_bat_ac,
++	&ssam_node_bat_main,
++	&ssam_node_tmp_pprof,
++	&ssam_node_hid_tid1_keyboard,
++	&ssam_node_hid_tid1_penstash,
++	&ssam_node_hid_tid1_touchpad,
++	&ssam_node_hid_tid1_iid6,
++	&ssam_node_hid_tid1_iid7,
++	&ssam_node_hid_tid1_sysctrl,
++	NULL,
++};
++
+ /* Devices for Surface Laptop Go. */
+ static const struct software_node *ssam_node_group_slg1[] = {
+ 	&ssam_node_root,
+@@ -507,6 +558,9 @@ static const struct acpi_device_id ssam_platform_hub_match[] = {
+ 	/* Surface Laptop Go 1 */
+ 	{ "MSHW0118", (unsigned long)ssam_node_group_slg1 },
+ 
++	/* Surface Laptop Studio */
++	{ "MSHW0123", (unsigned long)ssam_node_group_sls },
++
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, ssam_platform_hub_match);
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 50ff04c84650c..27595aba214d9 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -9145,7 +9145,7 @@ static int fan_write_cmd_level(const char *cmd, int *rc)
+ 
+ 	if (strlencmp(cmd, "level auto") == 0)
+ 		level = TP_EC_FAN_AUTO;
+-	else if ((strlencmp(cmd, "level disengaged") == 0) |
++	else if ((strlencmp(cmd, "level disengaged") == 0) ||
+ 			(strlencmp(cmd, "level full-speed") == 0))
+ 		level = TP_EC_FAN_FULLSPEED;
+ 	else if (sscanf(cmd, "level %d", &level) != 1)
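
The thinkpad_acpi one-character fix above swaps bitwise | for logical ||. With two == 0 comparisons the truth table happens to match, but | always evaluates both operands and never short-circuits, which matters as soon as an operand has side effects or cost. A standalone demonstration:

#include <stdio.h>

static int probe(const char *tag, int ret)
{
	printf("  evaluated: %s\n", tag);
	return ret;
}

int main(void)
{
	puts("bitwise |:");
	if (probe("lhs", 1) | probe("rhs", 0))	/* rhs evaluated anyway */
		puts("  taken");

	puts("logical ||:");
	if (probe("lhs", 1) || probe("rhs", 0))	/* rhs short-circuited */
		puts("  taken");
	return 0;
}
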
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 62e0d56a3332b..1d983de615fcd 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -353,7 +353,14 @@ static acpi_status __query_block(struct wmi_block *wblock, u8 instance,
+ 	 * the WQxx method failed - we should disable collection anyway.
+ 	 */
+ 	if ((block->flags & ACPI_WMI_EXPENSIVE) && ACPI_SUCCESS(wc_status)) {
+-		status = acpi_execute_simple_method(handle, wc_method, 0);
++		/*
++		 * Ignore whether this WCxx call succeeds or not since
++		 * the previously executed WQxx method call might have
++		 * succeeded, and returning the failing status code
++		 * of this call would throw away the result of the WQxx
++		 * call, potentially leaking memory.
++		 */
++		acpi_execute_simple_method(handle, wc_method, 0);
+ 	}
+ 
+ 	return status;
+diff --git a/drivers/power/reset/at91-reset.c b/drivers/power/reset/at91-reset.c
+index 026649409135c..64def79d557a8 100644
+--- a/drivers/power/reset/at91-reset.c
++++ b/drivers/power/reset/at91-reset.c
+@@ -193,7 +193,7 @@ static int __init at91_reset_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	reset->rstc_base = devm_of_iomap(&pdev->dev, pdev->dev.of_node, 0, NULL);
+-	if (!reset->rstc_base) {
++	if (IS_ERR(reset->rstc_base)) {
+ 		dev_err(&pdev->dev, "Could not map reset controller address\n");
+ 		return -ENODEV;
+ 	}
+@@ -203,7 +203,7 @@ static int __init at91_reset_probe(struct platform_device *pdev)
+ 		for_each_matching_node_and_match(np, at91_ramc_of_match, &match) {
+ 			reset->ramc_lpr = (u32)match->data;
+ 			reset->ramc_base[idx] = devm_of_iomap(&pdev->dev, np, 0, NULL);
+-			if (!reset->ramc_base[idx]) {
++			if (IS_ERR(reset->ramc_base[idx])) {
+ 				dev_err(&pdev->dev, "Could not map ram controller address\n");
+ 				of_node_put(np);
+ 				return -ENODEV;
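
The at91-reset hunks matter because devm_of_iomap() reports failure through an encoded error pointer, never NULL, so the old !ptr tests could not fire. A userspace sketch of the ERR_PTR convention, with the kernel helpers re-implemented as minimal mocks purely for illustration:

    #include <stdio.h>
    #include <stdint.h>
    #include <errno.h>

    /* Minimal mocks of the kernel's ERR_PTR/IS_ERR/PTR_ERR helpers. */
    #define MAX_ERRNO 4095
    static void *ERR_PTR(long err)      { return (void *)err; }
    static long  PTR_ERR(const void *p) { return (long)p; }
    static int   IS_ERR(const void *p)
    {
        return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
    }

    /* Hypothetical mapper that fails the way devm_of_iomap() does. */
    static void *map_resource(void) { return ERR_PTR(-ENOMEM); }

    int main(void)
    {
        void *base = map_resource();

        if (!base)
            printf("NULL check: never taken -- the old bug\n");
        if (IS_ERR(base))
            printf("IS_ERR check: caught error %ld\n", PTR_ERR(base));
        return 0;
    }
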
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index 46f078350fd3f..cf38cbfe13e9d 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -187,7 +187,8 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client,
+ 			dev_err(&client->dev,
+ 				"Unable to register IRQ %d error %d\n",
+ 				client->irq, ret);
+-			return ret;
++			bq27xxx_battery_teardown(di);
++			goto err_failed;
+ 		}
+ 	}
+ 
+diff --git a/drivers/power/supply/max17040_battery.c b/drivers/power/supply/max17040_battery.c
+index 3cea92e28dc3e..a9aef1e8b186e 100644
+--- a/drivers/power/supply/max17040_battery.c
++++ b/drivers/power/supply/max17040_battery.c
+@@ -449,6 +449,8 @@ static int max17040_probe(struct i2c_client *client,
+ 
+ 	chip->client = client;
+ 	chip->regmap = devm_regmap_init_i2c(client, &max17040_regmap);
++	if (IS_ERR(chip->regmap))
++		return PTR_ERR(chip->regmap);
+ 	chip_id = (enum chip_id) id->driver_data;
+ 	if (client->dev.of_node) {
+ 		ret = max17040_get_of_data(chip);
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 622bdae6182c0..58b1c2bd8a1cf 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -317,7 +317,10 @@ static int max17042_get_property(struct power_supply *psy,
+ 		val->intval = data * 625 / 8;
+ 		break;
+ 	case POWER_SUPPLY_PROP_CAPACITY:
+-		ret = regmap_read(map, MAX17042_RepSOC, &data);
++		if (chip->pdata->enable_current_sense)
++			ret = regmap_read(map, MAX17042_RepSOC, &data);
++		else
++			ret = regmap_read(map, MAX17042_VFSOC, &data);
+ 		if (ret < 0)
+ 			return ret;
+ 
+@@ -861,7 +864,8 @@ static void max17042_set_soc_threshold(struct max17042_chip *chip, u16 off)
+ 	regmap_read(map, MAX17042_RepSOC, &soc);
+ 	soc >>= 8;
+ 	soc_tr = (soc + off) << 8;
+-	soc_tr |= (soc - off);
++	if (off < soc)
++		soc_tr |= soc - off;
+ 	regmap_write(map, MAX17042_SALRT_Th, soc_tr);
+ }
+ 
+@@ -881,6 +885,10 @@ static irqreturn_t max17042_thread_handler(int id, void *dev)
+ 		max17042_set_soc_threshold(chip, 1);
+ 	}
+ 
++	/* we implicitly handle all alerts via power_supply_changed */
++	regmap_clear_bits(chip->regmap, MAX17042_STATUS,
++			  0xFFFF & ~(STATUS_POR_BIT | STATUS_BST_BIT));
++
+ 	power_supply_changed(chip->battery);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/power/supply/rt5033_battery.c b/drivers/power/supply/rt5033_battery.c
+index 9ad0afe83d1b7..7a23c70f48791 100644
+--- a/drivers/power/supply/rt5033_battery.c
++++ b/drivers/power/supply/rt5033_battery.c
+@@ -60,7 +60,7 @@ static int rt5033_battery_get_watt_prop(struct i2c_client *client,
+ 	regmap_read(battery->regmap, regh, &msb);
+ 	regmap_read(battery->regmap, regl, &lsb);
+ 
+-	ret = ((msb << 4) + (lsb >> 4)) * 1250 / 1000;
++	ret = ((msb << 4) + (lsb >> 4)) * 1250;
+ 
+ 	return ret;
+ }
+diff --git a/drivers/ptp/ptp_kvm_x86.c b/drivers/ptp/ptp_kvm_x86.c
+index d0096cd7096a8..4991054a21350 100644
+--- a/drivers/ptp/ptp_kvm_x86.c
++++ b/drivers/ptp/ptp_kvm_x86.c
+@@ -31,10 +31,10 @@ int kvm_arch_ptp_init(void)
+ 
+ 	ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
+ 			     KVM_CLOCK_PAIRING_WALLCLOCK);
+-	if (ret == -KVM_ENOSYS || ret == -KVM_EOPNOTSUPP)
++	if (ret == -KVM_ENOSYS)
+ 		return -ENODEV;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int kvm_arch_ptp_get_clock(struct timespec64 *ts)
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index 7c111bbdc2afa..35269f9982105 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -850,18 +850,15 @@ static int s5m8767_pmic_probe(struct platform_device *pdev)
+ 	/* DS4 GPIO */
+ 	gpio_direction_output(pdata->buck_ds[2], 0x0);
+ 
+-	if (pdata->buck2_gpiodvs || pdata->buck3_gpiodvs ||
+-	   pdata->buck4_gpiodvs) {
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK2CTRL, 1 << 1,
+-				(pdata->buck2_gpiodvs) ? (1 << 1) : (0 << 1));
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK3CTRL, 1 << 1,
+-				(pdata->buck3_gpiodvs) ? (1 << 1) : (0 << 1));
+-		regmap_update_bits(s5m8767->iodev->regmap_pmic,
+-				S5M8767_REG_BUCK4CTRL, 1 << 1,
+-				(pdata->buck4_gpiodvs) ? (1 << 1) : (0 << 1));
+-	}
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK2CTRL, 1 << 1,
++			   (pdata->buck2_gpiodvs) ? (1 << 1) : (0 << 1));
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK3CTRL, 1 << 1,
++			   (pdata->buck3_gpiodvs) ? (1 << 1) : (0 << 1));
++	regmap_update_bits(s5m8767->iodev->regmap_pmic,
++			   S5M8767_REG_BUCK4CTRL, 1 << 1,
++			   (pdata->buck4_gpiodvs) ? (1 << 1) : (0 << 1));
+ 
+ 	/* Initialize GPIO DVS registers */
+ 	for (i = 0; i < 8; i++) {
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index d88f76f5305eb..ff620688fad94 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -71,6 +71,7 @@ struct imx_rproc_mem {
+ /* att flags */
+ /* M4 own area. Can be mapped at probe */
+ #define ATT_OWN		BIT(1)
++#define ATT_IOMEM	BIT(2)
+ 
+ /* address translation table */
+ struct imx_rproc_att {
+@@ -117,7 +118,7 @@ struct imx_rproc {
+ static const struct imx_rproc_att imx_rproc_att_imx8mn[] = {
+ 	/* dev addr , sys addr  , size	    , flags */
+ 	/* ITCM   */
+-	{ 0x00000000, 0x007E0000, 0x00020000, ATT_OWN },
++	{ 0x00000000, 0x007E0000, 0x00020000, ATT_OWN | ATT_IOMEM },
+ 	/* OCRAM_S */
+ 	{ 0x00180000, 0x00180000, 0x00009000, 0 },
+ 	/* OCRAM */
+@@ -131,7 +132,7 @@ static const struct imx_rproc_att imx_rproc_att_imx8mn[] = {
+ 	/* DDR (Code) - alias */
+ 	{ 0x10000000, 0x40000000, 0x0FFE0000, 0 },
+ 	/* DTCM */
+-	{ 0x20000000, 0x00800000, 0x00020000, ATT_OWN },
++	{ 0x20000000, 0x00800000, 0x00020000, ATT_OWN | ATT_IOMEM },
+ 	/* OCRAM_S - alias */
+ 	{ 0x20180000, 0x00180000, 0x00008000, ATT_OWN },
+ 	/* OCRAM */
+@@ -147,7 +148,7 @@ static const struct imx_rproc_att imx_rproc_att_imx8mn[] = {
+ static const struct imx_rproc_att imx_rproc_att_imx8mq[] = {
+ 	/* dev addr , sys addr  , size	    , flags */
+ 	/* TCML - alias */
+-	{ 0x00000000, 0x007e0000, 0x00020000, 0 },
++	{ 0x00000000, 0x007e0000, 0x00020000, ATT_IOMEM},
+ 	/* OCRAM_S */
+ 	{ 0x00180000, 0x00180000, 0x00008000, 0 },
+ 	/* OCRAM */
+@@ -159,9 +160,9 @@ static const struct imx_rproc_att imx_rproc_att_imx8mq[] = {
+ 	/* DDR (Code) - alias */
+ 	{ 0x10000000, 0x80000000, 0x0FFE0000, 0 },
+ 	/* TCML */
+-	{ 0x1FFE0000, 0x007E0000, 0x00020000, ATT_OWN },
++	{ 0x1FFE0000, 0x007E0000, 0x00020000, ATT_OWN  | ATT_IOMEM},
+ 	/* TCMU */
+-	{ 0x20000000, 0x00800000, 0x00020000, ATT_OWN },
++	{ 0x20000000, 0x00800000, 0x00020000, ATT_OWN  | ATT_IOMEM},
+ 	/* OCRAM_S */
+ 	{ 0x20180000, 0x00180000, 0x00008000, ATT_OWN },
+ 	/* OCRAM */
+@@ -199,12 +200,12 @@ static const struct imx_rproc_att imx_rproc_att_imx7d[] = {
+ 	/* OCRAM_PXP (Code) - alias */
+ 	{ 0x00940000, 0x00940000, 0x00008000, 0 },
+ 	/* TCML (Code) */
+-	{ 0x1FFF8000, 0x007F8000, 0x00008000, ATT_OWN },
++	{ 0x1FFF8000, 0x007F8000, 0x00008000, ATT_OWN | ATT_IOMEM },
+ 	/* DDR (Code) - alias, first part of DDR (Data) */
+ 	{ 0x10000000, 0x80000000, 0x0FFF0000, 0 },
+ 
+ 	/* TCMU (Data) */
+-	{ 0x20000000, 0x00800000, 0x00008000, ATT_OWN },
++	{ 0x20000000, 0x00800000, 0x00008000, ATT_OWN | ATT_IOMEM },
+ 	/* OCRAM (Data) */
+ 	{ 0x20200000, 0x00900000, 0x00020000, 0 },
+ 	/* OCRAM_EPDC (Data) */
+@@ -218,18 +219,18 @@ static const struct imx_rproc_att imx_rproc_att_imx7d[] = {
+ static const struct imx_rproc_att imx_rproc_att_imx6sx[] = {
+ 	/* dev addr , sys addr  , size	    , flags */
+ 	/* TCML (M4 Boot Code) - alias */
+-	{ 0x00000000, 0x007F8000, 0x00008000, 0 },
++	{ 0x00000000, 0x007F8000, 0x00008000, ATT_IOMEM },
+ 	/* OCRAM_S (Code) */
+ 	{ 0x00180000, 0x008F8000, 0x00004000, 0 },
+ 	/* OCRAM_S (Code) - alias */
+ 	{ 0x00180000, 0x008FC000, 0x00004000, 0 },
+ 	/* TCML (Code) */
+-	{ 0x1FFF8000, 0x007F8000, 0x00008000, ATT_OWN },
++	{ 0x1FFF8000, 0x007F8000, 0x00008000, ATT_OWN | ATT_IOMEM },
+ 	/* DDR (Code) - alias, first part of DDR (Data) */
+ 	{ 0x10000000, 0x80000000, 0x0FFF8000, 0 },
+ 
+ 	/* TCMU (Data) */
+-	{ 0x20000000, 0x00800000, 0x00008000, ATT_OWN },
++	{ 0x20000000, 0x00800000, 0x00008000, ATT_OWN | ATT_IOMEM },
+ 	/* OCRAM_S (Data) - alias? */
+ 	{ 0x208F8000, 0x008F8000, 0x00004000, 0 },
+ 	/* DDR (Data) */
+@@ -341,7 +342,7 @@ static int imx_rproc_stop(struct rproc *rproc)
+ }
+ 
+ static int imx_rproc_da_to_sys(struct imx_rproc *priv, u64 da,
+-			       size_t len, u64 *sys)
++			       size_t len, u64 *sys, bool *is_iomem)
+ {
+ 	const struct imx_rproc_dcfg *dcfg = priv->dcfg;
+ 	int i;
+@@ -354,6 +355,8 @@ static int imx_rproc_da_to_sys(struct imx_rproc *priv, u64 da,
+ 			unsigned int offset = da - att->da;
+ 
+ 			*sys = att->sa + offset;
++			if (is_iomem)
++				*is_iomem = att->flags & ATT_IOMEM;
+ 			return 0;
+ 		}
+ 	}
+@@ -377,7 +380,7 @@ static void *imx_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *i
+ 	 * On device side we have many aliases, so we need to convert device
+ 	 * address (M4) to system bus address first.
+ 	 */
+-	if (imx_rproc_da_to_sys(priv, da, len, &sys))
++	if (imx_rproc_da_to_sys(priv, da, len, &sys, is_iomem))
+ 		return NULL;
+ 
+ 	for (i = 0; i < IMX_RPROC_MEM_MAX; i++) {
+@@ -553,8 +556,12 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+ 		if (b >= IMX_RPROC_MEM_MAX)
+ 			break;
+ 
+-		priv->mem[b].cpu_addr = devm_ioremap(&pdev->dev,
+-						     att->sa, att->size);
++		if (att->flags & ATT_IOMEM)
++			priv->mem[b].cpu_addr = devm_ioremap(&pdev->dev,
++							     att->sa, att->size);
++		else
++			priv->mem[b].cpu_addr = devm_ioremap_wc(&pdev->dev,
++								att->sa, att->size);
+ 		if (!priv->mem[b].cpu_addr) {
+ 			dev_err(dev, "failed to remap %#x bytes from %#x\n", att->size, att->sa);
+ 			return -ENOMEM;
+@@ -575,8 +582,8 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+ 		struct resource res;
+ 
+ 		node = of_parse_phandle(np, "memory-region", a);
+-		/* Not map vdev region */
+-		if (!strcmp(node->name, "vdev"))
++		/* Not map vdevbuffer, vdevring region */
++		if (!strncmp(node->name, "vdev", strlen("vdev")))
+ 			continue;
+ 		err = of_address_to_resource(node, 0, &res);
+ 		if (err) {
+@@ -597,7 +604,7 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
+ 		}
+ 		priv->mem[b].sys_addr = res.start;
+ 		priv->mem[b].size = resource_size(&res);
+-		if (!strcmp(node->name, "rsc_table"))
++		if (!strcmp(node->name, "rsc-table"))
+ 			priv->rsc_table = priv->mem[b].cpu_addr;
+ 		b++;
+ 	}
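
One imx_rproc hunk above switches an exact strcmp() match to a strncmp() prefix match so that both "vdevbuffer" and "vdevring" children are skipped. A small runnable sketch of the prefix idiom (the node names are invented for the example):

    #include <stdio.h>
    #include <string.h>

    /* Nonzero when name starts with prefix -- the idiom from the
     * patch: !strncmp(name, "vdev", strlen("vdev")). */
    static int has_prefix(const char *name, const char *prefix)
    {
        return strncmp(name, prefix, strlen(prefix)) == 0;
    }

    int main(void)
    {
        const char *nodes[] = { "vdevbuffer", "vdevring", "rsc-table" };

        for (size_t i = 0; i < sizeof(nodes) / sizeof(nodes[0]); i++)
            printf("%-10s -> %s\n", nodes[i],
                   has_prefix(nodes[i], "vdev") ? "skip" : "map");
        return 0;
    }
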
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 7de5905d276ac..f77b0ff55385e 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -556,9 +556,6 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
+ 	/* Initialise vdev subdevice */
+ 	snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
+ 	rvdev->dev.parent = &rproc->dev;
+-	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
+-	if (ret)
+-		return ret;
+ 	rvdev->dev.release = rproc_rvdev_release;
+ 	dev_set_name(&rvdev->dev, "%s#%s", dev_name(rvdev->dev.parent), name);
+ 	dev_set_drvdata(&rvdev->dev, rvdev);
+@@ -568,6 +565,11 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
+ 		put_device(&rvdev->dev);
+ 		return ret;
+ 	}
++
++	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
++	if (ret)
++		goto free_rvdev;
++
+ 	/* Make device dma capable by inheriting from parent's capabilities */
+ 	set_dma_ops(&rvdev->dev, get_dma_ops(rproc->dev.parent));
+ 
+diff --git a/drivers/remoteproc/remoteproc_coredump.c b/drivers/remoteproc/remoteproc_coredump.c
+index aee657cc08c6a..c892f433a323e 100644
+--- a/drivers/remoteproc/remoteproc_coredump.c
++++ b/drivers/remoteproc/remoteproc_coredump.c
+@@ -152,8 +152,8 @@ static void rproc_copy_segment(struct rproc *rproc, void *dest,
+ 			       struct rproc_dump_segment *segment,
+ 			       size_t offset, size_t size)
+ {
++	bool is_iomem = false;
+ 	void *ptr;
+-	bool is_iomem;
+ 
+ 	if (segment->dump) {
+ 		segment->dump(rproc, segment, dest, offset, size);
+diff --git a/drivers/remoteproc/remoteproc_elf_loader.c b/drivers/remoteproc/remoteproc_elf_loader.c
+index 469c52e62faff..d635d19a5aa8a 100644
+--- a/drivers/remoteproc/remoteproc_elf_loader.c
++++ b/drivers/remoteproc/remoteproc_elf_loader.c
+@@ -178,8 +178,8 @@ int rproc_elf_load_segments(struct rproc *rproc, const struct firmware *fw)
+ 		u64 filesz = elf_phdr_get_p_filesz(class, phdr);
+ 		u64 offset = elf_phdr_get_p_offset(class, phdr);
+ 		u32 type = elf_phdr_get_p_type(class, phdr);
++		bool is_iomem = false;
+ 		void *ptr;
+-		bool is_iomem;
+ 
+ 		if (type != PT_LOAD)
+ 			continue;
+@@ -220,7 +220,7 @@ int rproc_elf_load_segments(struct rproc *rproc, const struct firmware *fw)
+ 		/* put the segment where the remote processor expects it */
+ 		if (filesz) {
+ 			if (is_iomem)
+-				memcpy_fromio(ptr, (void __iomem *)(elf_data + offset), filesz);
++				memcpy_toio((void __iomem *)ptr, elf_data + offset, filesz);
+ 			else
+ 				memcpy(ptr, elf_data + offset, filesz);
+ 		}
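
The ELF-loader fix reverses a copy direction: the segment data (elf_data) sits in ordinary RAM and ptr is the ioremap()ed destination, so the correct primitive is memcpy_toio(dst_io, src, n), not memcpy_fromio(). A sketch with both helpers mocked as plain memcpy() wrappers, purely to show which argument is the I/O side:

    #include <stdio.h>
    #include <string.h>

    /* Mocks: in the kernel these use I/O-safe accessors; the argument
     * order matches the real helpers (destination first). */
    static void memcpy_toio(void *dst_io, const void *src, size_t n)
    {
        memcpy(dst_io, src, n);
    }
    static void memcpy_fromio(void *dst, const void *src_io, size_t n)
    {
        memcpy(dst, src_io, n);
    }

    int main(void)
    {
        char device_mem[16] = "";             /* stands in for mapped TCM */
        const char firmware[16] = "segment";  /* stands in for elf_data */
        char readback[16] = "";

        memcpy_toio(device_mem, firmware, sizeof(firmware));      /* load */
        memcpy_fromio(readback, device_mem, sizeof(device_mem));  /* verify */
        printf("device now holds: %s\n", readback);
        return 0;
    }
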
+diff --git a/drivers/reset/reset-socfpga.c b/drivers/reset/reset-socfpga.c
+index 2a72f861f7983..8c6492e5693c7 100644
+--- a/drivers/reset/reset-socfpga.c
++++ b/drivers/reset/reset-socfpga.c
+@@ -92,3 +92,29 @@ void __init socfpga_reset_init(void)
+ 	for_each_matching_node(np, socfpga_early_reset_dt_ids)
+ 		a10_reset_init(np);
+ }
++
++/*
++ * The early driver is problematic, because it doesn't register
++ * itself as a driver. This causes certain device links to prevent
++ * consumer devices from probing. The hacky solution is to register
++ * an empty driver, whose only job is to attach itself to the reset
++ * manager and call probe.
++ */
++static const struct of_device_id socfpga_reset_dt_ids[] = {
++	{ .compatible = "altr,rst-mgr", },
++	{ /* sentinel */ },
++};
++
++static int reset_simple_probe(struct platform_device *pdev)
++{
++	return 0;
++}
++
++static struct platform_driver reset_socfpga_driver = {
++	.probe	= reset_simple_probe,
++	.driver = {
++		.name		= "socfpga-reset",
++		.of_match_table	= socfpga_reset_dt_ids,
++	},
++};
++builtin_platform_driver(reset_socfpga_driver);
+diff --git a/drivers/rtc/rtc-ds1302.c b/drivers/rtc/rtc-ds1302.c
+index b3de6d2e680a4..2f83adef966eb 100644
+--- a/drivers/rtc/rtc-ds1302.c
++++ b/drivers/rtc/rtc-ds1302.c
+@@ -199,11 +199,18 @@ static const struct of_device_id ds1302_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, ds1302_dt_ids);
+ #endif
+ 
++static const struct spi_device_id ds1302_spi_ids[] = {
++	{ .name = "ds1302", },
++	{ /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(spi, ds1302_spi_ids);
++
+ static struct spi_driver ds1302_driver = {
+ 	.driver.name	= "rtc-ds1302",
+ 	.driver.of_match_table = of_match_ptr(ds1302_dt_ids),
+ 	.probe		= ds1302_probe,
+ 	.remove		= ds1302_remove,
++	.id_table	= ds1302_spi_ids,
+ };
+ 
+ module_spi_driver(ds1302_driver);
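
This hunk and the three RTC hunks that follow (ds1390, mcp795, pcf2123) apply one pattern: an SPI driver that only declares an of_match_table never emits an "spi:<name>" module alias, so the module is not auto-loaded for devices registered by name; adding a struct spi_device_id table with MODULE_DEVICE_TABLE(spi, ...) restores autoloading. A minimal sketch of the pattern as an out-of-tree module (driver and chip names are invented):

    #include <linux/module.h>
    #include <linux/spi/spi.h>

    static int demo_probe(struct spi_device *spi)
    {
        dev_info(&spi->dev, "bound\n");
        return 0;
    }

    /* The id table is what generates the "spi:demochip" alias. */
    static const struct spi_device_id demo_spi_ids[] = {
        { .name = "demochip" },
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(spi, demo_spi_ids);

    static struct spi_driver demo_driver = {
        .driver   = { .name = "demo" },
        .probe    = demo_probe,
        .id_table = demo_spi_ids,
    };
    module_spi_driver(demo_driver);

    MODULE_LICENSE("GPL");
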
+diff --git a/drivers/rtc/rtc-ds1390.c b/drivers/rtc/rtc-ds1390.c
+index 66fc8617d07ee..93ce72b9ae59e 100644
+--- a/drivers/rtc/rtc-ds1390.c
++++ b/drivers/rtc/rtc-ds1390.c
+@@ -219,12 +219,19 @@ static const struct of_device_id ds1390_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ds1390_of_match);
+ 
++static const struct spi_device_id ds1390_spi_ids[] = {
++	{ .name = "ds1390" },
++	{}
++};
++MODULE_DEVICE_TABLE(spi, ds1390_spi_ids);
++
+ static struct spi_driver ds1390_driver = {
+ 	.driver = {
+ 		.name	= "rtc-ds1390",
+ 		.of_match_table = of_match_ptr(ds1390_of_match),
+ 	},
+ 	.probe	= ds1390_probe,
++	.id_table = ds1390_spi_ids,
+ };
+ 
+ module_spi_driver(ds1390_driver);
+diff --git a/drivers/rtc/rtc-mcp795.c b/drivers/rtc/rtc-mcp795.c
+index bad7792b6ca58..0d515b3df5710 100644
+--- a/drivers/rtc/rtc-mcp795.c
++++ b/drivers/rtc/rtc-mcp795.c
+@@ -430,12 +430,19 @@ static const struct of_device_id mcp795_of_match[] = {
+ MODULE_DEVICE_TABLE(of, mcp795_of_match);
+ #endif
+ 
++static const struct spi_device_id mcp795_spi_ids[] = {
++	{ .name = "mcp795" },
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, mcp795_spi_ids);
++
+ static struct spi_driver mcp795_driver = {
+ 		.driver = {
+ 				.name = "rtc-mcp795",
+ 				.of_match_table = of_match_ptr(mcp795_of_match),
+ 		},
+ 		.probe = mcp795_probe,
++		.id_table = mcp795_spi_ids,
+ };
+ 
+ module_spi_driver(mcp795_driver);
+diff --git a/drivers/rtc/rtc-pcf2123.c b/drivers/rtc/rtc-pcf2123.c
+index 0f58cac81d8c0..7473e6c8a183b 100644
+--- a/drivers/rtc/rtc-pcf2123.c
++++ b/drivers/rtc/rtc-pcf2123.c
+@@ -451,12 +451,21 @@ static const struct of_device_id pcf2123_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, pcf2123_dt_ids);
+ #endif
+ 
++static const struct spi_device_id pcf2123_spi_ids[] = {
++	{ .name = "pcf2123", },
++	{ .name = "rv2123", },
++	{ .name = "rtc-pcf2123", },
++	{ /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(spi, pcf2123_spi_ids);
++
+ static struct spi_driver pcf2123_driver = {
+ 	.driver	= {
+ 			.name	= "rtc-pcf2123",
+ 			.of_match_table = of_match_ptr(pcf2123_dt_ids),
+ 	},
+ 	.probe	= pcf2123_probe,
++	.id_table = pcf2123_spi_ids,
+ };
+ 
+ module_spi_driver(pcf2123_driver);
+diff --git a/drivers/rtc/rtc-rv3032.c b/drivers/rtc/rtc-rv3032.c
+index d63102d5cb1e4..1b62ed2f14594 100644
+--- a/drivers/rtc/rtc-rv3032.c
++++ b/drivers/rtc/rtc-rv3032.c
+@@ -617,11 +617,11 @@ static int rv3032_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ret = rv3032_enter_eerd(rv3032, &eerd);
+ 	if (ret)
+-		goto exit_eerd;
++		return ret;
+ 
+ 	ret = regmap_write(rv3032->regmap, RV3032_CLKOUT1, hfd & 0xff);
+ 	if (ret)
+-		return ret;
++		goto exit_eerd;
+ 
+ 	ret = regmap_write(rv3032->regmap, RV3032_CLKOUT2, RV3032_CLKOUT2_OS |
+ 			    FIELD_PREP(RV3032_CLKOUT2_HFD_MSK, hfd >> 8));
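
The rv3032 hunk untangles inverted error paths: before rv3032_enter_eerd() succeeds there is nothing to unwind, so the function should return directly, while after it succeeds every failure must take the exit_eerd label so the mode is left again. A generic, runnable sketch of this goto-unwind ordering, with acquire/use/release as hypothetical stand-ins:

    #include <stdio.h>

    static int  acquire(void) { return 0; }   /* enter_eerd() stand-in */
    static int  use(void)     { return -1; }  /* regmap_write() stand-in, fails */
    static void release(void) { puts("mode left again"); }

    static int do_work(void)
    {
        int ret = acquire();
        if (ret)
            return ret;        /* nothing held yet: plain return */

        ret = use();
        if (ret)
            goto out_release;  /* resource held: must unwind */

        puts("work done");

    out_release:
        release();
        return ret;
    }

    int main(void)
    {
        printf("do_work() = %d\n", do_work());
        return 0;
    }
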
+diff --git a/drivers/s390/char/tape_std.c b/drivers/s390/char/tape_std.c
+index 1f5fab617b679..f7e75d9fedf61 100644
+--- a/drivers/s390/char/tape_std.c
++++ b/drivers/s390/char/tape_std.c
+@@ -53,7 +53,6 @@ int
+ tape_std_assign(struct tape_device *device)
+ {
+ 	int                  rc;
+-	struct timer_list    timeout;
+ 	struct tape_request *request;
+ 
+ 	request = tape_alloc_request(2, 11);
+@@ -70,7 +69,7 @@ tape_std_assign(struct tape_device *device)
+ 	 * So we set up a timeout for this call.
+ 	 */
+ 	timer_setup(&request->timer, tape_std_assign_timeout, 0);
+-	mod_timer(&timeout, jiffies + 2 * HZ);
++	mod_timer(&request->timer, jiffies + msecs_to_jiffies(2000));
+ 
+ 	rc = tape_do_io_interruptible(device, request);
+ 
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index 9fcdb8d81eee6..3b912ef43170a 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -437,8 +437,8 @@ static ssize_t dev_busid_show(struct device *dev,
+ 	struct subchannel *sch = to_subchannel(dev);
+ 	struct pmcw *pmcw = &sch->schib.pmcw;
+ 
+-	if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
+-	     pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++	if ((pmcw->st == SUBCHANNEL_TYPE_IO && pmcw->dnv) ||
++	    (pmcw->st == SUBCHANNEL_TYPE_MSG && pmcw->w))
+ 		return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
+ 				  pmcw->dev);
+ 	else
+diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
+index 0fe7b2f2e7f52..c533d1dadc6bb 100644
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -825,13 +825,23 @@ EXPORT_SYMBOL_GPL(ccw_device_get_chid);
+  */
+ void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size)
+ {
+-	return cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++	void *addr;
++
++	if (!get_device(&cdev->dev))
++		return NULL;
++	addr = cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++	if (IS_ERR_OR_NULL(addr))
++		put_device(&cdev->dev);
++	return addr;
+ }
+ EXPORT_SYMBOL(ccw_device_dma_zalloc);
+ 
+ void ccw_device_dma_free(struct ccw_device *cdev, void *cpu_addr, size_t size)
+ {
++	if (!cpu_addr)
++		return;
+ 	cio_gp_dma_free(cdev->private->dma_pool, cpu_addr, size);
++	put_device(&cdev->dev);
+ }
+ EXPORT_SYMBOL(ccw_device_dma_free);
+ 
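
The cio change pins the ccw device with get_device() for as long as DMA memory from its pool is outstanding, dropping the reference on free (or immediately when the allocation fails). A userspace sketch of the pin-while-allocated pattern, with a plain counter as a mock refcount:

    #include <stdio.h>
    #include <stdlib.h>

    static int refcount;                       /* mock device refcount */
    static void get_device_(void) { refcount++; }
    static void put_device_(void) { refcount--; }

    static void *dma_zalloc(size_t size)
    {
        get_device_();             /* pin before handing memory out */
        void *p = calloc(1, size);
        if (!p)
            put_device_();         /* allocation failed: drop the pin */
        return p;
    }

    static void dma_free(void *p)
    {
        if (!p)
            return;                /* NULL-safe, like the patch */
        free(p);
        put_device_();             /* last user gone: unpin */
    }

    int main(void)
    {
        void *buf = dma_zalloc(64);
        printf("after alloc: refcount=%d\n", refcount);  /* 1 */
        dma_free(buf);
        printf("after free:  refcount=%d\n", refcount);  /* 0 */
        return 0;
    }
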
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index d70c4d3d0907f..26203b88ee178 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -157,6 +157,8 @@ static struct ap_queue_status ap_sm_recv(struct ap_queue *aq)
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+ 		aq->queue_count = max_t(int, 0, aq->queue_count - 1);
++		if (!status.queue_empty && !aq->queue_count)
++			aq->queue_count++;
+ 		if (aq->queue_count > 0)
+ 			mod_timer(&aq->timeout,
+ 				  jiffies + aq->request_timeout);
+diff --git a/drivers/scsi/csiostor/csio_lnode.c b/drivers/scsi/csiostor/csio_lnode.c
+index dc98f51f466fb..d5ac938970232 100644
+--- a/drivers/scsi/csiostor/csio_lnode.c
++++ b/drivers/scsi/csiostor/csio_lnode.c
+@@ -619,7 +619,7 @@ csio_ln_vnp_read_cbfn(struct csio_hw *hw, struct csio_mb *mbp)
+ 	struct fc_els_csp *csp;
+ 	struct fc_els_cssp *clsp;
+ 	enum fw_retval retval;
+-	__be32 nport_id;
++	__be32 nport_id = 0;
+ 
+ 	retval = FW_CMD_RETVAL_G(ntohl(rsp->alloc_to_len16));
+ 	if (retval != FW_SUCCESS) {
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index 24c7cefb0b78a..1c79e6c271630 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -4618,6 +4618,7 @@ static int dc395x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+ 	/* initialise the adapter and everything we need */
+  	if (adapter_init(acb, io_port_base, io_port_len, irq)) {
+ 		dprintkl(KERN_INFO, "adapter init failed\n");
++		acb = NULL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 24b72ee4246fb..0165dad803001 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -388,6 +388,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 	shost->shost_state = SHOST_CREATED;
+ 	INIT_LIST_HEAD(&shost->__devices);
+ 	INIT_LIST_HEAD(&shost->__targets);
++	INIT_LIST_HEAD(&shost->eh_abort_list);
+ 	INIT_LIST_HEAD(&shost->eh_cmd_q);
+ 	INIT_LIST_HEAD(&shost->starved_list);
+ 	init_waitqueue_head(&shost->host_wait);
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index e481f5fe29d7f..7709bd117d634 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1056,9 +1056,10 @@ stop_rr_fcf_flogi:
+ 
+ 		lpfc_printf_vlog(vport, KERN_WARNING, LOG_TRACE_EVENT,
+ 				 "0150 FLOGI failure Status:x%x/x%x "
+-				 "xri x%x TMO:x%x\n",
++				 "xri x%x TMO:x%x refcnt %d\n",
+ 				 irsp->ulpStatus, irsp->un.ulpWord[4],
+-				 cmdiocb->sli4_xritag, irsp->ulpTimeout);
++				 cmdiocb->sli4_xritag, irsp->ulpTimeout,
++				 kref_read(&ndlp->kref));
+ 
+ 		/* If this is not a loop open failure, bail out */
+ 		if (!(irsp->ulpStatus == IOSTAT_LOCAL_REJECT &&
+@@ -1119,12 +1120,12 @@ stop_rr_fcf_flogi:
+ 	/* FLOGI completes successfully */
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 			 "0101 FLOGI completes successfully, I/O tag:x%x, "
+-			 "xri x%x Data: x%x x%x x%x x%x x%x x%x x%x\n",
++			 "xri x%x Data: x%x x%x x%x x%x x%x x%x x%x %d\n",
+ 			 cmdiocb->iotag, cmdiocb->sli4_xritag,
+ 			 irsp->un.ulpWord[4], sp->cmn.e_d_tov,
+ 			 sp->cmn.w2.r_a_tov, sp->cmn.edtovResolution,
+ 			 vport->port_state, vport->fc_flag,
+-			 sp->cmn.priority_tagging);
++			 sp->cmn.priority_tagging, kref_read(&ndlp->kref));
+ 
+ 	if (sp->cmn.priority_tagging)
+ 		vport->vmid_flag |= LPFC_VMID_ISSUE_QFPA;
+@@ -1202,8 +1203,6 @@ flogifail:
+ 	phba->fcf.fcf_flag &= ~FCF_DISCOVERY;
+ 	spin_unlock_irq(&phba->hbalock);
+ 
+-	if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)))
+-		lpfc_nlp_put(ndlp);
+ 	if (!lpfc_error_lost_link(irsp)) {
+ 		/* FLOGI failed, so just use loop map to make discovery list */
+ 		lpfc_disc_list_loopmap(vport);
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 7cc5920979f8a..4bbb9645d42a1 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -4429,8 +4429,9 @@ lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		fc_remote_port_rolechg(rport, rport_ids.roles);
+ 
+ 	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
+-			 "3183 %s rport x%px DID x%x, role x%x\n",
+-			 __func__, rport, rport->port_id, rport->roles);
++			 "3183 %s rport x%px DID x%x, role x%x refcnt %d\n",
++			 __func__, rport, rport->port_id, rport->roles,
++			 kref_read(&ndlp->kref));
+ 
+ 	if ((rport->scsi_target_id != -1) &&
+ 	    (rport->scsi_target_id < LPFC_MAX_TARGET)) {
+@@ -4455,8 +4456,9 @@ lpfc_unregister_remote_port(struct lpfc_nodelist *ndlp)
+ 
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+ 			 "3184 rport unregister x%06x, rport x%px "
+-			 "xptflg x%x\n",
+-			 ndlp->nlp_DID, rport, ndlp->fc4_xpt_flags);
++			 "xptflg x%x refcnt %d\n",
++			 ndlp->nlp_DID, rport, ndlp->fc4_xpt_flags,
++			 kref_read(&ndlp->kref));
+ 
+ 	fc_remote_port_delete(rport);
+ 	lpfc_nlp_put(ndlp);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index bcc804cefd30b..340d2894f82db 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -209,8 +209,9 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
+ 	 * calling state machine to remove the node.
+ 	 */
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+-			"6146 remoteport delete of remoteport x%px\n",
+-			remoteport);
++			 "6146 remoteport delete of remoteport x%px, ndlp x%px "
++			 "DID x%x xflags x%x\n",
++			 remoteport, ndlp, ndlp->nlp_DID, ndlp->fc4_xpt_flags);
+ 	spin_lock_irq(&ndlp->lock);
+ 
+ 	/* The register rebind might have occurred before the delete
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 1b248c237be1c..e80c3802d587a 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -6487,6 +6487,13 @@ lpfc_host_reset_handler(struct scsi_cmnd *cmnd)
+ 	if (rc)
+ 		goto error;
+ 
++	/* Wait for successful restart of adapter */
++	if (phba->sli_rev < LPFC_SLI_REV4) {
++		rc = lpfc_sli_chipset_init(phba);
++		if (rc)
++			goto error;
++	}
++
+ 	rc = lpfc_online(phba);
+ 	if (rc)
+ 		goto error;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index f530d8fe7a8ce..b7989514019ff 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -11806,15 +11806,54 @@ lpfc_sli_hba_iocb_abort(struct lpfc_hba *phba)
+ }
+ 
+ /**
+- * lpfc_sli_validate_fcp_iocb - find commands associated with a vport or LUN
++ * lpfc_sli_validate_fcp_iocb_for_abort - filter iocbs appropriate for FCP aborts
++ * @iocbq: Pointer to iocb object.
++ * @vport: Pointer to driver virtual port object.
++ *
++ * This function acts as an iocb filter for functions which abort FCP iocbs.
++ *
++ * Return values
++ * -ENODEV, if a null iocb or vport ptr is encountered
++ * -EINVAL, if the iocb is not an FCP I/O, not on the TX cmpl queue, premarked as
++ *          driver already started the abort process, or is an abort iocb itself
++ * 0, passes criteria for aborting the FCP I/O iocb
++ **/
++static int
++lpfc_sli_validate_fcp_iocb_for_abort(struct lpfc_iocbq *iocbq,
++				     struct lpfc_vport *vport)
++{
++	IOCB_t *icmd = NULL;
++
++	/* No null ptr vports */
++	if (!iocbq || iocbq->vport != vport)
++		return -ENODEV;
++
++	/* iocb must be for FCP IO, already exists on the TX cmpl queue,
++	 * can't be premarked as driver aborted, nor be an ABORT iocb itself
++	 */
++	icmd = &iocbq->iocb;
++	if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
++	    !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ) ||
++	    (iocbq->iocb_flag & LPFC_DRIVER_ABORTED) ||
++	    (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
++	     icmd->ulpCommand == CMD_CLOSE_XRI_CN))
++		return -EINVAL;
++
++	return 0;
++}
++
++/**
++ * lpfc_sli_validate_fcp_iocb - validate commands associated with a SCSI target
+  * @iocbq: Pointer to driver iocb object.
+  * @vport: Pointer to driver virtual port object.
+  * @tgt_id: SCSI ID of the target.
+  * @lun_id: LUN ID of the scsi device.
+  * @ctx_cmd: LPFC_CTX_LUN/LPFC_CTX_TGT/LPFC_CTX_HOST
+  *
+- * This function acts as an iocb filter for functions which abort or count
+- * all FCP iocbs pending on a lun/SCSI target/SCSI host. It will return
++ * This function acts as an iocb filter for validating a lun/SCSI target/SCSI
++ * host.
++ *
++ * It will return
+  * 0 if the filtering criteria is met for the given iocb and will return
+  * 1 if the filtering criteria is not met.
+  * If ctx_cmd == LPFC_CTX_LUN, the function returns 0 only if the
+@@ -11833,22 +11872,8 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, struct lpfc_vport *vport,
+ 			   lpfc_ctx_cmd ctx_cmd)
+ {
+ 	struct lpfc_io_buf *lpfc_cmd;
+-	IOCB_t *icmd = NULL;
+ 	int rc = 1;
+ 
+-	if (!iocbq || iocbq->vport != vport)
+-		return rc;
+-
+-	if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
+-	    !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ) ||
+-	      iocbq->iocb_flag & LPFC_DRIVER_ABORTED)
+-		return rc;
+-
+-	icmd = &iocbq->iocb;
+-	if (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
+-	    icmd->ulpCommand == CMD_CLOSE_XRI_CN)
+-		return rc;
+-
+ 	lpfc_cmd = container_of(iocbq, struct lpfc_io_buf, cur_iocbq);
+ 
+ 	if (lpfc_cmd->pCmd == NULL)
+@@ -11903,17 +11928,33 @@ lpfc_sli_sum_iocb(struct lpfc_vport *vport, uint16_t tgt_id, uint64_t lun_id,
+ {
+ 	struct lpfc_hba *phba = vport->phba;
+ 	struct lpfc_iocbq *iocbq;
++	IOCB_t *icmd = NULL;
+ 	int sum, i;
++	unsigned long iflags;
+ 
+-	spin_lock_irq(&phba->hbalock);
++	spin_lock_irqsave(&phba->hbalock, iflags);
+ 	for (i = 1, sum = 0; i <= phba->sli.last_iotag; i++) {
+ 		iocbq = phba->sli.iocbq_lookup[i];
+ 
+-		if (lpfc_sli_validate_fcp_iocb (iocbq, vport, tgt_id, lun_id,
+-						ctx_cmd) == 0)
++		if (!iocbq || iocbq->vport != vport)
++			continue;
++		if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
++		    !(iocbq->iocb_flag & LPFC_IO_ON_TXCMPLQ))
++			continue;
++
++		/* Include counting outstanding aborts */
++		icmd = &iocbq->iocb;
++		if (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
++		    icmd->ulpCommand == CMD_CLOSE_XRI_CN) {
++			sum++;
++			continue;
++		}
++
++		if (lpfc_sli_validate_fcp_iocb(iocbq, vport, tgt_id, lun_id,
++					       ctx_cmd) == 0)
+ 			sum++;
+ 	}
+-	spin_unlock_irq(&phba->hbalock);
++	spin_unlock_irqrestore(&phba->hbalock, iflags);
+ 
+ 	return sum;
+ }
+@@ -11980,7 +12021,11 @@ lpfc_sli_abort_fcp_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+  *
+  * This function sends an abort command for every SCSI command
+  * associated with the given virtual port pending on the ring
+- * filtered by lpfc_sli_validate_fcp_iocb function.
++ * filtered by lpfc_sli_validate_fcp_iocb_for_abort and then
++ * lpfc_sli_validate_fcp_iocb function.  The ordering for validation before
++ * submitting abort iocbs must be lpfc_sli_validate_fcp_iocb_for_abort
++ * followed by lpfc_sli_validate_fcp_iocb.
++ *
+  * When abort_cmd == LPFC_CTX_LUN, the function sends abort only to the
+  * FCP iocbs associated with lun specified by tgt_id and lun_id
+  * parameters
+@@ -12012,6 +12057,9 @@ lpfc_sli_abort_iocb(struct lpfc_vport *vport, u16 tgt_id, u64 lun_id,
+ 	for (i = 1; i <= phba->sli.last_iotag; i++) {
+ 		iocbq = phba->sli.iocbq_lookup[i];
+ 
++		if (lpfc_sli_validate_fcp_iocb_for_abort(iocbq, vport))
++			continue;
++
+ 		if (lpfc_sli_validate_fcp_iocb(iocbq, vport, tgt_id, lun_id,
+ 					       abort_cmd) != 0)
+ 			continue;
+@@ -12044,7 +12092,11 @@ lpfc_sli_abort_iocb(struct lpfc_vport *vport, u16 tgt_id, u64 lun_id,
+  *
+  * This function sends an abort command for every SCSI command
+  * associated with the given virtual port pending on the ring
+- * filtered by lpfc_sli_validate_fcp_iocb function.
++ * filtered by lpfc_sli_validate_fcp_iocb_for_abort and then
++ * lpfc_sli_validate_fcp_iocb function.  The ordering for validation before
++ * submitting abort iocbs must be lpfc_sli_validate_fcp_iocb_for_abort
++ * followed by lpfc_sli_validate_fcp_iocb.
++ *
+  * When taskmgmt_cmd == LPFC_CTX_LUN, the function sends abort only to the
+  * FCP iocbs associated with lun specified by tgt_id and lun_id
+  * parameters
+@@ -12082,6 +12134,9 @@ lpfc_sli_abort_taskmgmt(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
+ 	for (i = 1; i <= phba->sli.last_iotag; i++) {
+ 		iocbq = phba->sli.iocbq_lookup[i];
+ 
++		if (lpfc_sli_validate_fcp_iocb_for_abort(iocbq, vport))
++			continue;
++
+ 		if (lpfc_sli_validate_fcp_iocb(iocbq, vport, tgt_id, lun_id,
+ 					       cmd) != 0)
+ 			continue;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 06399c026a8d5..1ff2198583a71 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3530,6 +3530,9 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex,
+ 	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)
+ 		return IRQ_HANDLED;
+ 
++	if (irq_context && !atomic_add_unless(&irq_context->in_used, 1, 1))
++		return 0;
++
+ 	desc = fusion->reply_frames_desc[MSIxIndex] +
+ 				fusion->last_reply_idx[MSIxIndex];
+ 
+@@ -3540,11 +3543,11 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex,
+ 	reply_descript_type = reply_desc->ReplyFlags &
+ 		MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
+ 
+-	if (reply_descript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
++	if (reply_descript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED) {
++		if (irq_context)
++			atomic_dec(&irq_context->in_used);
+ 		return IRQ_NONE;
+-
+-	if (irq_context && !atomic_add_unless(&irq_context->in_used, 1, 1))
+-		return 0;
++	}
+ 
+ 	num_completed = 0;
+ 
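
The megaraid_sas hunks move the in_used busy guard so it is taken before the reply descriptor is read and released on the early IRQ_NONE return. The kernel's atomic_add_unless(&v, 1, 1) amounts to a 0-to-1 transition; a runnable sketch models it with a compare-and-swap (names are illustrative):

    #include <stdatomic.h>
    #include <stdio.h>

    /* Mock of irq_context->in_used: only one context may walk the
     * reply queue at a time. */
    static atomic_int in_used;

    static int try_enter(void)
    {
        int zero = 0;
        return atomic_compare_exchange_strong(&in_used, &zero, 1);
    }

    static void leave(void)
    {
        atomic_fetch_sub(&in_used, 1);
    }

    static int handle_irq(int has_work)
    {
        if (!try_enter())
            return 0;      /* queue already being drained elsewhere */
        if (!has_work) {
            leave();       /* the early IRQ_NONE path must release too */
            return -1;
        }
        /* ... drain completions ... */
        leave();
        return 1;
    }

    int main(void)
    {
        printf("%d %d\n", handle_irq(0), handle_irq(1));
        return 0;
    }
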
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 17c0f26e683a9..bdbbd7b9988dc 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -3169,7 +3169,7 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 	 * fw_control_context->usrAddr
+ 	 */
+ 	complete(pm8001_ha->nvmd_completion);
+-	pm8001_dbg(pm8001_ha, MSG, "Set nvm data complete!\n");
++	pm8001_dbg(pm8001_ha, MSG, "Get nvmd data complete!\n");
+ 	ccb->task = NULL;
+ 	ccb->ccb_tag = 0xFFFFFFFF;
+ 	pm8001_tag_free(pm8001_ha, tag);
+diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
+index 62d08b535a4b6..e18f2b60371db 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.h
++++ b/drivers/scsi/pm8001/pm8001_sas.h
+@@ -457,6 +457,7 @@ struct outbound_queue_table {
+ 	__le32			producer_index;
+ 	u32			consumer_idx;
+ 	spinlock_t		oq_lock;
++	unsigned long		lock_flags;
+ };
+ struct pm8001_hba_memspace {
+ 	void __iomem  		*memvirtaddr;
+@@ -738,9 +739,7 @@ pm8001_ccb_task_free_done(struct pm8001_hba_info *pm8001_ha,
+ {
+ 	pm8001_ccb_task_free(pm8001_ha, task, ccb, ccb_idx);
+ 	smp_mb(); /*in order to force CPU ordering*/
+-	spin_unlock(&pm8001_ha->lock);
+ 	task->task_done(task);
+-	spin_lock(&pm8001_ha->lock);
+ }
+ 
+ #endif
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 6ffe17b849ae8..ed02e1aaf868c 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -2379,7 +2379,8 @@ static void mpi_ssp_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 
+ /*See the comments for mpi_ssp_completion */
+ static void
+-mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
++mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
++		struct outbound_queue_table *circularQ, void *piomb)
+ {
+ 	struct sas_task *t;
+ 	struct pm8001_ccb_info *ccb;
+@@ -2616,7 +2617,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 			ts->resp = SAS_TASK_UNDELIVERED;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -2632,7 +2637,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 			ts->resp = SAS_TASK_UNDELIVERED;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -2656,7 +2665,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_STP_RESOURCES_BUSY);
+ 			ts->resp = SAS_TASK_UNDELIVERED;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -2727,7 +2740,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 					IO_DS_NON_OPERATIONAL);
+ 			ts->resp = SAS_TASK_UNDELIVERED;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -2747,7 +2764,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 					IO_DS_IN_ERROR);
+ 			ts->resp = SAS_TASK_UNDELIVERED;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -2785,12 +2806,17 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
++		spin_unlock_irqrestore(&circularQ->oq_lock,
++				circularQ->lock_flags);
+ 		pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++		spin_lock_irqsave(&circularQ->oq_lock,
++				circularQ->lock_flags);
+ 	}
+ }
+ 
+ /*See the comments for mpi_ssp_completion */
+-static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
++static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
++		struct outbound_queue_table *circularQ, void *piomb)
+ {
+ 	struct sas_task *t;
+ 	struct task_status_struct *ts;
+@@ -2890,7 +2916,11 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 				IO_OPEN_CNX_ERROR_IT_NEXUS_LOSS);
+ 			ts->resp = SAS_TASK_COMPLETE;
+ 			ts->stat = SAS_QUEUE_FULL;
++			spin_unlock_irqrestore(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++			spin_lock_irqsave(&circularQ->oq_lock,
++					circularQ->lock_flags);
+ 			return;
+ 		}
+ 		break;
+@@ -3002,7 +3032,11 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
+ 	} else {
+ 		spin_unlock_irqrestore(&t->task_state_lock, flags);
++		spin_unlock_irqrestore(&circularQ->oq_lock,
++				circularQ->lock_flags);
+ 		pm8001_ccb_task_free_done(pm8001_ha, t, ccb, tag);
++		spin_lock_irqsave(&circularQ->oq_lock,
++				circularQ->lock_flags);
+ 	}
+ }
+ 
+@@ -3902,7 +3936,8 @@ static int ssp_coalesced_comp_resp(struct pm8001_hba_info *pm8001_ha,
+  * @pm8001_ha: our hba card information
+  * @piomb: IO message buffer
+  */
+-static void process_one_iomb(struct pm8001_hba_info *pm8001_ha, void *piomb)
++static void process_one_iomb(struct pm8001_hba_info *pm8001_ha,
++		struct outbound_queue_table *circularQ, void *piomb)
+ {
+ 	__le32 pHeader = *(__le32 *)piomb;
+ 	u32 opc = (u32)((le32_to_cpu(pHeader)) & 0xFFF);
+@@ -3944,11 +3979,11 @@ static void process_one_iomb(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ 		break;
+ 	case OPC_OUB_SATA_COMP:
+ 		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_COMP\n");
+-		mpi_sata_completion(pm8001_ha, piomb);
++		mpi_sata_completion(pm8001_ha, circularQ, piomb);
+ 		break;
+ 	case OPC_OUB_SATA_EVENT:
+ 		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SATA_EVENT\n");
+-		mpi_sata_event(pm8001_ha, piomb);
++		mpi_sata_event(pm8001_ha, circularQ, piomb);
+ 		break;
+ 	case OPC_OUB_SSP_EVENT:
+ 		pm8001_dbg(pm8001_ha, MSG, "OPC_OUB_SSP_EVENT\n");
+@@ -4117,7 +4152,6 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 	void *pMsg1 = NULL;
+ 	u8 bc;
+ 	u32 ret = MPI_IO_STATUS_FAIL;
+-	unsigned long flags;
+ 	u32 regval;
+ 
+ 	if (vec == (pm8001_ha->max_q_num - 1)) {
+@@ -4134,7 +4168,7 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 		}
+ 	}
+ 	circularQ = &pm8001_ha->outbnd_q_tbl[vec];
+-	spin_lock_irqsave(&circularQ->oq_lock, flags);
++	spin_lock_irqsave(&circularQ->oq_lock, circularQ->lock_flags);
+ 	do {
+ 		/* spurious interrupt during setup if kexec-ing and
+ 		 * driver doing a doorbell access w/ the pre-kexec oq
+@@ -4145,7 +4179,8 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 		ret = pm8001_mpi_msg_consume(pm8001_ha, circularQ, &pMsg1, &bc);
+ 		if (MPI_IO_STATUS_SUCCESS == ret) {
+ 			/* process the outbound message */
+-			process_one_iomb(pm8001_ha, (void *)(pMsg1 - 4));
++			process_one_iomb(pm8001_ha, circularQ,
++						(void *)(pMsg1 - 4));
+ 			/* free the message from the outbound circular buffer */
+ 			pm8001_mpi_msg_free_set(pm8001_ha, pMsg1,
+ 							circularQ, bc);
+@@ -4160,7 +4195,7 @@ static int process_oq(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ 				break;
+ 		}
+ 	} while (1);
+-	spin_unlock_irqrestore(&circularQ->oq_lock, flags);
++	spin_unlock_irqrestore(&circularQ->oq_lock, circularQ->lock_flags);
+ 	return ret;
+ }
+ 
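
The pm80xx rework stores the spin_lock_irqsave() flags in the queue itself so that, deep inside SATA completion handling, oq_lock can be dropped around task->task_done() and retaken afterwards; invoking a callback that may re-enter the driver while a spinlock is held invites deadlock. A userspace sketch of drop-lock-around-callback using pthreads (function names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t oq_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Callback that may re-enter code wanting the same lock; it would
     * deadlock if the caller still held oq_lock. */
    static void task_done(void)
    {
        pthread_mutex_lock(&oq_lock);
        puts("task_done ran, lock briefly retaken");
        pthread_mutex_unlock(&oq_lock);
    }

    static void process_completion(void)
    {
        pthread_mutex_lock(&oq_lock);
        /* ... consume the completion message under the lock ... */
        pthread_mutex_unlock(&oq_lock);  /* drop before the callback */
        task_done();
        pthread_mutex_lock(&oq_lock);    /* retake to keep scanning */
        /* ... continue with the next message ... */
        pthread_mutex_unlock(&oq_lock);
    }

    int main(void)
    {
        process_completion();
        return 0;
    }
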
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 42d0d941dba5c..94ee08fab46a5 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3416,7 +3416,9 @@ retry_probe:
+ 		qedf->devlink = qed_ops->common->devlink_register(qedf->cdev);
+ 		if (IS_ERR(qedf->devlink)) {
+ 			QEDF_ERR(&qedf->dbg_ctx, "Cannot register devlink\n");
++			rc = PTR_ERR(qedf->devlink);
+ 			qedf->devlink = NULL;
++			goto err2;
+ 		}
+ 	}
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 3aa9869f6fae2..f86e5d2339ec5 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -1868,6 +1868,18 @@ qla2x00_port_speed_store(struct device *dev, struct device_attribute *attr,
+ 	return strlen(buf);
+ }
+ 
++static const struct {
++	u16 rate;
++	char *str;
++} port_speed_str[] = {
++	{ PORT_SPEED_4GB, "4" },
++	{ PORT_SPEED_8GB, "8" },
++	{ PORT_SPEED_16GB, "16" },
++	{ PORT_SPEED_32GB, "32" },
++	{ PORT_SPEED_64GB, "64" },
++	{ PORT_SPEED_10GB, "10" },
++};
++
+ static ssize_t
+ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+     char *buf)
+@@ -1875,7 +1887,8 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+ 	struct scsi_qla_host *vha = shost_priv(dev_to_shost(dev));
+ 	struct qla_hw_data *ha = vha->hw;
+ 	ssize_t rval;
+-	char *spd[7] = {"0", "0", "0", "4", "8", "16", "32"};
++	u16 i;
++	char *speed = "Unknown";
+ 
+ 	rval = qla2x00_get_data_rate(vha);
+ 	if (rval != QLA_SUCCESS) {
+@@ -1884,7 +1897,14 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+ 		return -EINVAL;
+ 	}
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%s\n", spd[ha->link_data_rate]);
++	for (i = 0; i < ARRAY_SIZE(port_speed_str); i++) {
++		if (port_speed_str[i].rate != ha->link_data_rate)
++			continue;
++		speed = port_speed_str[i].str;
++		break;
++	}
++
++	return scnprintf(buf, PAGE_SIZE, "%s\n", speed);
+ }
+ 
+ /* ----- */
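
The qla2xxx sysfs fix replaces indexing a fixed string array by the raw rate code, which walked off the end for newer codes such as PORT_SPEED_64GB, with a table search plus a safe default. A runnable sketch of the table-driven lookup (the enum values here are invented, not the qla2xxx codes):

    #include <stdio.h>

    /* Mock rate codes; values are illustrative only. */
    enum { SPEED_4GB = 3, SPEED_8GB = 4, SPEED_16GB = 5, SPEED_64GB = 14 };

    static const struct {
        unsigned rate;
        const char *str;
    } speed_str[] = {
        { SPEED_4GB, "4" }, { SPEED_8GB, "8" },
        { SPEED_16GB, "16" }, { SPEED_64GB, "64" },
    };

    static const char *speed_name(unsigned rate)
    {
        for (size_t i = 0; i < sizeof(speed_str) / sizeof(speed_str[0]); i++)
            if (speed_str[i].rate == rate)
                return speed_str[i].str;
        return "Unknown";          /* safe default for unmapped codes */
    }

    int main(void)
    {
        printf("%s %s\n", speed_name(SPEED_64GB), speed_name(99));
        return 0;
    }
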
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 2f867da822aee..9d394dce5a5c4 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -158,7 +158,6 @@ extern int ql2xasynctmfenable;
+ extern int ql2xgffidenable;
+ extern int ql2xenabledif;
+ extern int ql2xenablehba_err_chk;
+-extern int ql2xtargetreset;
+ extern int ql2xdontresethba;
+ extern uint64_t ql2xmaxlun;
+ extern int ql2xmdcapmask;
+@@ -792,7 +791,6 @@ extern void qlafx00_abort_iocb(srb_t *, struct abort_iocb_entry_fx00 *);
+ extern void qlafx00_fxdisc_iocb(srb_t *, struct fxdisc_entry_fx00 *);
+ extern void qlafx00_timer_routine(scsi_qla_host_t *);
+ extern int qlafx00_rescan_isp(scsi_qla_host_t *);
+-extern int qlafx00_loop_reset(scsi_qla_host_t *vha);
+ 
+ /* qla82xx related functions */
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 70b507d177f14..5ed7cc3fb2884 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -981,8 +981,6 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ 	    sp->name, res, sp->u.iocb_cmd.u.mbx.in_mb[1],
+ 	    sp->u.iocb_cmd.u.mbx.in_mb[2]);
+ 
+-	if (res == QLA_FUNCTION_TIMEOUT)
+-		return;
+ 
+ 	sp->fcport->flags &= ~(FCF_ASYNC_SENT|FCF_ASYNC_ACTIVE);
+ 	memset(&ea, 0, sizeof(ea));
+@@ -1020,8 +1018,8 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 
+ 	list_for_each_entry_safe(fcport, tf, &h, gnl_entry) {
+-		list_del_init(&fcport->gnl_entry);
+ 		spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++		list_del_init(&fcport->gnl_entry);
+ 		fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 		spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 		ea.fcport = fcport;
+diff --git a/drivers/scsi/qla2xxx/qla_mr.c b/drivers/scsi/qla2xxx/qla_mr.c
+index 6e920da64863e..350b0c4346fb6 100644
+--- a/drivers/scsi/qla2xxx/qla_mr.c
++++ b/drivers/scsi/qla2xxx/qla_mr.c
+@@ -738,29 +738,6 @@ qlafx00_lun_reset(fc_port_t *fcport, uint64_t l, int tag)
+ 	return qla2x00_async_tm_cmd(fcport, TCF_LUN_RESET, l, tag);
+ }
+ 
+-int
+-qlafx00_loop_reset(scsi_qla_host_t *vha)
+-{
+-	int ret;
+-	struct fc_port *fcport;
+-	struct qla_hw_data *ha = vha->hw;
+-
+-	if (ql2xtargetreset) {
+-		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->port_type != FCT_TARGET)
+-				continue;
+-
+-			ret = ha->isp_ops->target_reset(fcport, 0, 0);
+-			if (ret != QLA_SUCCESS) {
+-				ql_dbg(ql_dbg_taskm, vha, 0x803d,
+-				    "Bus Reset failed: Reset=%d "
+-				    "d_id=%x.\n", ret, fcport->d_id.b24);
+-			}
+-		}
+-	}
+-	return QLA_SUCCESS;
+-}
+-
+ int
+ qlafx00_iospace_config(struct qla_hw_data *ha)
+ {
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 37ab71b6a8a78..4f1647701175a 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -197,12 +197,6 @@ MODULE_PARM_DESC(ql2xdbwr,
+ 		" 0 -- Regular doorbell.\n"
+ 		" 1 -- CAMRAM doorbell (faster).\n");
+ 
+-int ql2xtargetreset = 1;
+-module_param(ql2xtargetreset, int, S_IRUGO);
+-MODULE_PARM_DESC(ql2xtargetreset,
+-		 "Enable target reset."
+-		 "Default is 1 - use hw defaults.");
+-
+ int ql2xgffidenable;
+ module_param(ql2xgffidenable, int, S_IRUGO);
+ MODULE_PARM_DESC(ql2xgffidenable,
+@@ -1237,6 +1231,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 	uint32_t ratov_j;
+ 	struct qla_qpair *qpair;
+ 	unsigned long flags;
++	int fast_fail_status = SUCCESS;
+ 
+ 	if (qla2x00_isp_reg_stat(ha)) {
+ 		ql_log(ql_log_info, vha, 0x8042,
+@@ -1245,9 +1240,10 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 		return FAILED;
+ 	}
+ 
++	/* Save any FAST_IO_FAIL value to return later if abort succeeds */
+ 	ret = fc_block_scsi_eh(cmd);
+ 	if (ret != 0)
+-		return ret;
++		fast_fail_status = ret;
+ 
+ 	sp = scsi_cmd_priv(cmd);
+ 	qpair = sp->qpair;
+@@ -1255,7 +1251,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 	vha->cmd_timeout_cnt++;
+ 
+ 	if ((sp->fcport && sp->fcport->deleted) || !qpair)
+-		return SUCCESS;
++		return fast_fail_status != SUCCESS ? fast_fail_status : FAILED;
+ 
+ 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ 	sp->comp = &comp;
+@@ -1290,7 +1286,7 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd)
+ 			    __func__, ha->r_a_tov/10);
+ 			ret = FAILED;
+ 		} else {
+-			ret = SUCCESS;
++			ret = fast_fail_status;
+ 		}
+ 		break;
+ 	default:
+@@ -1640,27 +1636,10 @@ int
+ qla2x00_loop_reset(scsi_qla_host_t *vha)
+ {
+ 	int ret;
+-	struct fc_port *fcport;
+ 	struct qla_hw_data *ha = vha->hw;
+ 
+-	if (IS_QLAFX00(ha)) {
+-		return qlafx00_loop_reset(vha);
+-	}
+-
+-	if (ql2xtargetreset == 1 && ha->flags.enable_target_reset) {
+-		list_for_each_entry(fcport, &vha->vp_fcports, list) {
+-			if (fcport->port_type != FCT_TARGET)
+-				continue;
+-
+-			ret = ha->isp_ops->target_reset(fcport, 0, 0);
+-			if (ret != QLA_SUCCESS) {
+-				ql_dbg(ql_dbg_taskm, vha, 0x802c,
+-				    "Bus Reset failed: Reset=%d "
+-				    "d_id=%x.\n", ret, fcport->d_id.b24);
+-			}
+-		}
+-	}
+-
++	if (IS_QLAFX00(ha))
++		return QLA_SUCCESS;
+ 
+ 	if (ha->flags.enable_lip_full_login && !IS_CNA_CAPABLE(ha)) {
+ 		atomic_set(&vha->loop_state, LOOP_DOWN);
+@@ -4071,7 +4050,7 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
+ 					ql_dbg_pci(ql_dbg_init, ha->pdev,
+ 					    0xe0ee, "%s: failed alloc dsd\n",
+ 					    __func__);
+-					return 1;
++					return -ENOMEM;
+ 				}
+ 				ha->dif_bundle_kallocs++;
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index eb47140a899fe..41219f4f1e114 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3261,8 +3261,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ 			"RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
+ 			vha->flags.online, qla2x00_reset_active(vha),
+ 			cmd->reset_count, qpair->chip_reset);
+-		spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+-		return 0;
++		goto out_unmap_unlock;
+ 	}
+ 
+ 	/* Does F/W have an IOCBs for this request */
+@@ -3385,10 +3384,6 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
+ 	prm.sg = NULL;
+ 	prm.req_cnt = 1;
+ 
+-	/* Calculate number of entries and segments required */
+-	if (qlt_pci_map_calc_cnt(&prm) != 0)
+-		return -EAGAIN;
+-
+ 	if (!qpair->fw_started || (cmd->reset_count != qpair->chip_reset) ||
+ 	    (cmd->sess && cmd->sess->deleted)) {
+ 		/*
+@@ -3406,6 +3401,10 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
+ 		return 0;
+ 	}
+ 
++	/* Calculate number of entries and segments required */
++	if (qlt_pci_map_calc_cnt(&prm) != 0)
++		return -EAGAIN;
++
+ 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ 	/* Does F/W have an IOCBs for this request */
+ 	res = qlt_check_reserve_free_req(qpair, prm.req_cnt);
+@@ -3810,9 +3809,6 @@ void qlt_free_cmd(struct qla_tgt_cmd *cmd)
+ 
+ 	BUG_ON(cmd->cmd_in_wq);
+ 
+-	if (cmd->sg_mapped)
+-		qlt_unmap_sg(cmd->vha, cmd);
+-
+ 	if (!cmd->q_full)
+ 		qlt_decr_num_pend_cmds(cmd->vha);
+ 
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 58a252c389920..15083157a4e4f 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -135,6 +135,23 @@ static bool scsi_eh_should_retry_cmd(struct scsi_cmnd *cmd)
+ 	return true;
+ }
+ 
++static void scsi_eh_complete_abort(struct scsi_cmnd *scmd, struct Scsi_Host *shost)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(shost->host_lock, flags);
++	list_del_init(&scmd->eh_entry);
++	/*
++	 * If the abort succeeds, and there is no further
++	 * EH action, clear the ->last_reset time.
++	 */
++	if (list_empty(&shost->eh_abort_list) &&
++	    list_empty(&shost->eh_cmd_q))
++		if (shost->eh_deadline != -1)
++			shost->last_reset = 0;
++	spin_unlock_irqrestore(shost->host_lock, flags);
++}
++
+ /**
+  * scmd_eh_abort_handler - Handle command aborts
+  * @work:	command to be aborted.
+@@ -152,6 +169,7 @@ scmd_eh_abort_handler(struct work_struct *work)
+ 		container_of(work, struct scsi_cmnd, abort_work.work);
+ 	struct scsi_device *sdev = scmd->device;
+ 	enum scsi_disposition rtn;
++	unsigned long flags;
+ 
+ 	if (scsi_host_eh_past_deadline(sdev->host)) {
+ 		SCSI_LOG_ERROR_RECOVERY(3,
+@@ -175,12 +193,14 @@ scmd_eh_abort_handler(struct work_struct *work)
+ 				SCSI_LOG_ERROR_RECOVERY(3,
+ 					scmd_printk(KERN_WARNING, scmd,
+ 						    "retry aborted command\n"));
++				scsi_eh_complete_abort(scmd, sdev->host);
+ 				scsi_queue_insert(scmd, SCSI_MLQUEUE_EH_RETRY);
+ 				return;
+ 			} else {
+ 				SCSI_LOG_ERROR_RECOVERY(3,
+ 					scmd_printk(KERN_WARNING, scmd,
+ 						    "finish aborted command\n"));
++				scsi_eh_complete_abort(scmd, sdev->host);
+ 				scsi_finish_command(scmd);
+ 				return;
+ 			}
+@@ -193,6 +213,9 @@ scmd_eh_abort_handler(struct work_struct *work)
+ 		}
+ 	}
+ 
++	spin_lock_irqsave(sdev->host->host_lock, flags);
++	list_del_init(&scmd->eh_entry);
++	spin_unlock_irqrestore(sdev->host->host_lock, flags);
+ 	scsi_eh_scmd_add(scmd);
+ }
+ 
+@@ -223,6 +246,8 @@ scsi_abort_command(struct scsi_cmnd *scmd)
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	if (shost->eh_deadline != -1 && !shost->last_reset)
+ 		shost->last_reset = jiffies;
++	BUG_ON(!list_empty(&scmd->eh_entry));
++	list_add_tail(&scmd->eh_entry, &shost->eh_abort_list);
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+ 
+ 	scmd->eh_eflags |= SCSI_EH_ABORT_SCHEDULED;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 7456a26aef513..300b611482a98 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1136,6 +1136,7 @@ void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
+ 	cmd->sense_buffer = buf;
+ 	cmd->prot_sdb = prot;
+ 	cmd->flags = flags;
++	INIT_LIST_HEAD(&cmd->eh_entry);
+ 	INIT_DELAYED_WORK(&cmd->abort_work, scmd_eh_abort_handler);
+ 	cmd->jiffies_at_alloc = jiffies_at_alloc;
+ 	cmd->retries = retries;
+@@ -1167,8 +1168,6 @@ static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev,
+ 	}
+ 
+ 	cmd->cmd_len = scsi_req(req)->cmd_len;
+-	if (cmd->cmd_len == 0)
+-		cmd->cmd_len = scsi_command_size(cmd->cmnd);
+ 	cmd->cmnd = scsi_req(req)->cmd;
+ 	cmd->transfersize = blk_rq_bytes(req);
+ 	cmd->allowed = scsi_req(req)->retries;
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 298e22ef907e8..1add38b28ec4f 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -91,7 +91,7 @@ static int ufshcd_parse_clock_info(struct ufs_hba *hba)
+ 
+ 		clki->min_freq = clkfreq[i];
+ 		clki->max_freq = clkfreq[i+1];
+-		clki->name = kstrdup(name, GFP_KERNEL);
++		clki->name = devm_kstrdup(dev, name, GFP_KERNEL);
+ 		if (!strcmp(name, "ref_clk"))
+ 			clki->keep_link_active = true;
+ 		dev_dbg(dev, "%s: min %u max %u name %s\n", "freq-table-hz",
+@@ -126,7 +126,7 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
+ 	if (!vreg)
+ 		return -ENOMEM;
+ 
+-	vreg->name = kstrdup(name, GFP_KERNEL);
++	vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
+ 
+ 	snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name);
+ 	if (of_property_read_u32(np, prop_name, &vreg->max_uA)) {
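
The ufshcd-pltfrm hunks swap kstrdup() for devm_kstrdup() so the copied name is owned by the device and released automatically on unbind, closing a leak in the error and remove paths. A kernel-module sketch of the devm pattern (driver name invented; this is an in-kernel fragment, not userspace-buildable):

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* The copied string is tied to the device lifetime: no kfree()
     * is needed in any error path or in remove(). */
    static int demo_probe(struct platform_device *pdev)
    {
        char *name = devm_kstrdup(&pdev->dev, "ref_clk", GFP_KERNEL);

        if (!name)
            return -ENOMEM;
        dev_info(&pdev->dev, "clock name: %s\n", name);
        return 0;
    }

    static struct platform_driver demo_driver = {
        .probe  = demo_probe,
        .driver = { .name = "demo-devm" },
    };
    module_platform_driver(demo_driver);

    MODULE_LICENSE("GPL");
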
+diff --git a/drivers/soc/fsl/dpaa2-console.c b/drivers/soc/fsl/dpaa2-console.c
+index 27243f706f376..53917410f2bdb 100644
+--- a/drivers/soc/fsl/dpaa2-console.c
++++ b/drivers/soc/fsl/dpaa2-console.c
+@@ -231,6 +231,7 @@ static ssize_t dpaa2_console_read(struct file *fp, char __user *buf,
+ 	cd->cur_ptr += bytes;
+ 	written += bytes;
+ 
++	kfree(kbuf);
+ 	return written;
+ 
+ err_free_buf:
+diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c
+index 7351f30305506..779c319a4b820 100644
+--- a/drivers/soc/fsl/dpio/dpio-service.c
++++ b/drivers/soc/fsl/dpio/dpio-service.c
+@@ -59,7 +59,7 @@ static inline struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d,
+ 	 * potentially being migrated away.
+ 	 */
+ 	if (cpu < 0)
+-		cpu = smp_processor_id();
++		cpu = raw_smp_processor_id();
+ 
+ 	/* If a specific cpu was requested, pick it up immediately */
+ 	return dpio_by_cpu[cpu];
+diff --git a/drivers/soc/fsl/dpio/qbman-portal.c b/drivers/soc/fsl/dpio/qbman-portal.c
+index f13da4d7d1c52..3ec8ab08b9889 100644
+--- a/drivers/soc/fsl/dpio/qbman-portal.c
++++ b/drivers/soc/fsl/dpio/qbman-portal.c
+@@ -732,8 +732,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 	int i, num_enqueued = 0;
+ 	unsigned long irq_flags;
+ 
+-	spin_lock(&s->access_spinlock);
+-	local_irq_save(irq_flags);
++	spin_lock_irqsave(&s->access_spinlock, irq_flags);
+ 
+ 	half_mask = (s->eqcr.pi_ci_mask>>1);
+ 	full_mask = s->eqcr.pi_ci_mask;
+@@ -744,8 +743,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ 					eqcr_ci, s->eqcr.ci);
+ 		if (!s->eqcr.available) {
+-			local_irq_restore(irq_flags);
+-			spin_unlock(&s->access_spinlock);
++			spin_unlock_irqrestore(&s->access_spinlock, irq_flags);
+ 			return 0;
+ 		}
+ 	}
+@@ -784,8 +782,7 @@ int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
+ 	dma_wmb();
+ 	qbman_write_register(s, QBMAN_CINH_SWP_EQCR_PI,
+ 				(QB_RT_BIT)|(s->eqcr.pi)|s->eqcr.pi_vb);
+-	local_irq_restore(irq_flags);
+-	spin_unlock(&s->access_spinlock);
++	spin_unlock_irqrestore(&s->access_spinlock, irq_flags);
+ 
+ 	return num_enqueued;
+ }
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index 7abfc8c4fdc72..f736d208362c9 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -323,12 +323,14 @@ static int of_apr_add_pd_lookups(struct device *dev)
+ 						    1, &service_path);
+ 		if (ret < 0) {
+ 			dev_err(dev, "pdr service path missing: %d\n", ret);
++			of_node_put(node);
+ 			return ret;
+ 		}
+ 
+ 		pds = pdr_add_lookup(apr->pdr, service_name, service_path);
+ 		if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) {
+ 			dev_err(dev, "pdr add lookup failed: %ld\n", PTR_ERR(pds));
++			of_node_put(node);
+ 			return PTR_ERR(pds);
+ 		}
+ 	}
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 15a36dcab990e..e53109a5c3da9 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -115,7 +115,7 @@ static const struct llcc_slice_config sc7280_data[] =  {
+ 	{ LLCC_CMPT,     10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+ 	{ LLCC_GPUHTW,   11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+ 	{ LLCC_GPU,      12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+-	{ LLCC_MMUHWT,   13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
++	{ LLCC_MMUHWT,   13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 0, 1, 0},
+ 	{ LLCC_MDMPNG,   21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+ 	{ LLCC_WLHW,     24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+ 	{ LLCC_MODPE,    29, 64,  1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index fa209b479ab35..d98cc8c2e5d5c 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -30,6 +30,7 @@
+  * @active_only:	True if it represents an Active only peer
+  * @corner:		current corner
+  * @active_corner:	current active corner
++ * @enable_corner:	lowest non-zero corner
+  * @level:		An array of level (vlvl) to corner (hlvl) mappings
+  *			derived from cmd-db
+  * @level_count:	Number of levels supported by the power domain. max
+@@ -47,6 +48,7 @@ struct rpmhpd {
+ 	const bool	active_only;
+ 	unsigned int	corner;
+ 	unsigned int	active_corner;
++	unsigned int	enable_corner;
+ 	u32		level[RPMH_ARC_MAX_LEVELS];
+ 	size_t		level_count;
+ 	bool		enabled;
+@@ -204,7 +206,7 @@ static const struct rpmhpd_desc sm8250_desc = {
+ static struct rpmhpd sm8350_mxc_ao;
+ static struct rpmhpd sm8350_mxc = {
+ 	.pd = { .name = "mxc", },
+-	.peer = &sm8150_mmcx_ao,
++	.peer = &sm8350_mxc_ao,
+ 	.res_name = "mxc.lvl",
+ };
+ 
+@@ -385,13 +387,13 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+ static int rpmhpd_power_on(struct generic_pm_domain *domain)
+ {
+ 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+-	int ret = 0;
++	unsigned int corner;
++	int ret;
+ 
+ 	mutex_lock(&rpmhpd_lock);
+ 
+-	if (pd->corner)
+-		ret = rpmhpd_aggregate_corner(pd, pd->corner);
+-
++	corner = max(pd->corner, pd->enable_corner);
++	ret = rpmhpd_aggregate_corner(pd, corner);
+ 	if (!ret)
+ 		pd->enabled = true;
+ 
+@@ -436,6 +438,10 @@ static int rpmhpd_set_performance_state(struct generic_pm_domain *domain,
+ 		i--;
+ 
+ 	if (pd->enabled) {
++		/* Ensure that the domain isn't turned off */
++		if (i < pd->enable_corner)
++			i = pd->enable_corner;
++
+ 		ret = rpmhpd_aggregate_corner(pd, i);
+ 		if (ret)
+ 			goto out;
+@@ -472,6 +478,10 @@ static int rpmhpd_update_level_mapping(struct rpmhpd *rpmhpd)
+ 	for (i = 0; i < rpmhpd->level_count; i++) {
+ 		rpmhpd->level[i] = buf[i];
+ 
++		/* Remember the first corner with non-zero level */
++		if (!rpmhpd->level[rpmhpd->enable_corner] && rpmhpd->level[i])
++			rpmhpd->enable_corner = i;
++
+ 		/*
+ 		 * The AUX data may be zero padded.  These 0 valued entries at
+ 		 * the end of the map must be ignored.
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index a6cffd57d3c7b..0818eee31b295 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -87,8 +87,8 @@ static const char *const pmic_models[] = {
+ 	[15] = "PM8901",
+ 	[16] = "PM8950/PM8027",
+ 	[17] = "PMI8950/ISL9519",
+-	[18] = "PM8921",
+-	[19] = "PM8018",
++	[18] = "PMK8001/PM8921",
++	[19] = "PMI8996/PM8018",
+ 	[20] = "PM8998/PM8015",
+ 	[21] = "PMI8998/PM8014",
+ 	[22] = "PM8821",
+diff --git a/drivers/soc/samsung/Kconfig b/drivers/soc/samsung/Kconfig
+index 5745d7e5908e9..1f643c0f5c93f 100644
+--- a/drivers/soc/samsung/Kconfig
++++ b/drivers/soc/samsung/Kconfig
+@@ -25,6 +25,7 @@ config EXYNOS_PMU
+ 	bool "Exynos PMU controller driver" if COMPILE_TEST
+ 	depends on ARCH_EXYNOS || ((ARM || ARM64) && COMPILE_TEST)
+ 	select EXYNOS_PMU_ARM_DRIVERS if ARM && ARCH_EXYNOS
++	select MFD_CORE
+ 
+ # There is no need to enable these drivers for ARMv8
+ config EXYNOS_PMU_ARM_DRIVERS
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index ea62f84d1c8bd..b8ef9506f3dee 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -782,7 +782,7 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg,
+ 
+ 	err = reset_control_deassert(pg->reset);
+ 	if (err)
+-		goto powergate_off;
++		goto disable_clks;
+ 
+ 	usleep_range(10, 20);
+ 
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 3e6d4addac2f6..98c526aaf24b8 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -1103,7 +1103,7 @@ int sdw_bus_exit_clk_stop(struct sdw_bus *bus)
+ 	if (!simple_clk_stop) {
+ 		ret = sdw_bus_wait_for_clk_prep_deprep(bus, SDW_BROADCAST_DEV_NUM);
+ 		if (ret < 0)
+-			dev_warn(&slave->dev, "clock stop deprepare wait failed:%d\n", ret);
++			dev_warn(bus->dev, "clock stop deprepare wait failed:%d\n", ret);
+ 	}
+ 
+ 	list_for_each_entry(slave, &bus->slaves, node) {
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index b6cad0d59b7b9..49900cd207bc7 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -19,7 +19,7 @@ void sdw_bus_debugfs_init(struct sdw_bus *bus)
+ 		return;
+ 
+ 	/* create the debugfs master-N */
+-	snprintf(name, sizeof(name), "master-%d", bus->link_id);
++	snprintf(name, sizeof(name), "master-%d-%d", bus->id, bus->link_id);
+ 	bus->debugfs = debugfs_create_dir(name, sdw_debugfs_root);
+ }
+ 
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 95d4fa32c2995..92d9610df1fd8 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -310,7 +310,7 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ 		return mode;
+ 	ifr |= atmel_qspi_modes[mode].config;
+ 
+-	if (op->dummy.buswidth && op->dummy.nbytes)
++	if (op->dummy.nbytes)
+ 		dummy_cycles = op->dummy.nbytes * 8 / op->dummy.buswidth;
+ 
+ 	/*
+diff --git a/drivers/spi/spi-altera-dfl.c b/drivers/spi/spi-altera-dfl.c
+index 39a3e1a032e04..59a5b42c71226 100644
+--- a/drivers/spi/spi-altera-dfl.c
++++ b/drivers/spi/spi-altera-dfl.c
+@@ -140,7 +140,7 @@ static int dfl_spi_altera_probe(struct dfl_device *dfl_dev)
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+-	master->bus_num = dfl_dev->id;
++	master->bus_num = -1;
+ 
+ 	hw = spi_master_get_devdata(master);
+ 
+diff --git a/drivers/spi/spi-altera-platform.c b/drivers/spi/spi-altera-platform.c
+index f7a7c14e36790..65147aae82a1a 100644
+--- a/drivers/spi/spi-altera-platform.c
++++ b/drivers/spi/spi-altera-platform.c
+@@ -48,7 +48,7 @@ static int altera_spi_probe(struct platform_device *pdev)
+ 		return err;
+ 
+ 	/* setup the master state. */
+-	master->bus_num = pdev->id;
++	master->bus_num = -1;
+ 
+ 	if (pdata) {
+ 		if (pdata->num_chipselect > ALTERA_SPI_MAX_CS) {
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 3043677ba2226..151e154284bde 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -395,7 +395,8 @@ static int bcm_qspi_bspi_set_flex_mode(struct bcm_qspi *qspi,
+ 	if (addrlen == BSPI_ADDRLEN_4BYTES)
+ 		bpp = BSPI_BPP_ADDR_SELECT_MASK;
+ 
+-	bpp |= (op->dummy.nbytes * 8) / op->dummy.buswidth;
++	if (op->dummy.nbytes)
++		bpp |= (op->dummy.nbytes * 8) / op->dummy.buswidth;
+ 
+ 	switch (width) {
+ 	case SPI_NBITS_SINGLE:
+@@ -1460,7 +1461,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 					       &qspi->dev_ids[val]);
+ 			if (ret < 0) {
+ 				dev_err(&pdev->dev, "IRQ %s not found\n", name);
+-				goto qspi_probe_err;
++				goto qspi_unprepare_err;
+ 			}
+ 
+ 			qspi->dev_ids[val].dev = qspi;
+@@ -1475,7 +1476,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 	if (!num_ints) {
+ 		dev_err(&pdev->dev, "no IRQs registered, cannot init driver\n");
+ 		ret = -EINVAL;
+-		goto qspi_probe_err;
++		goto qspi_unprepare_err;
+ 	}
+ 
+ 	bcm_qspi_hw_init(qspi);
+@@ -1499,6 +1500,7 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 
+ qspi_reg_err:
+ 	bcm_qspi_hw_uninit(qspi);
++qspi_unprepare_err:
+ 	clk_disable_unprepare(qspi->clk);
+ qspi_probe_err:
+ 	kfree(qspi->dev_ids);
+diff --git a/drivers/spi/spi-mtk-nor.c b/drivers/spi/spi-mtk-nor.c
+index 41e7b341d2616..5c93730615f8d 100644
+--- a/drivers/spi/spi-mtk-nor.c
++++ b/drivers/spi/spi-mtk-nor.c
+@@ -160,7 +160,7 @@ static bool mtk_nor_match_read(const struct spi_mem_op *op)
+ {
+ 	int dummy = 0;
+ 
+-	if (op->dummy.buswidth)
++	if (op->dummy.nbytes)
+ 		dummy = op->dummy.nbytes * BITS_PER_BYTE / op->dummy.buswidth;
+ 
+ 	if ((op->data.buswidth == 2) || (op->data.buswidth == 4)) {
+diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c
+index feebda66f56eb..e4484ace584e4 100644
+--- a/drivers/spi/spi-pl022.c
++++ b/drivers/spi/spi-pl022.c
+@@ -1716,12 +1716,13 @@ static int verify_controller_parameters(struct pl022 *pl022,
+ 				return -EINVAL;
+ 			}
+ 		} else {
+-			if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX)
++			if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX) {
+ 				dev_err(&pl022->adev->dev,
+ 					"Microwire half duplex mode requested,"
+ 					" but this is only available in the"
+ 					" ST version of PL022\n");
+-			return -EINVAL;
++				return -EINVAL;
++			}
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index c53138ce00309..83796a4ead34a 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -139,7 +139,9 @@ static int rpcif_spi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	rpc = spi_controller_get_devdata(ctlr);
+-	rpcif_sw_init(rpc, parent);
++	error = rpcif_sw_init(rpc, parent);
++	if (error)
++		return error;
+ 
+ 	platform_set_drvdata(pdev, ctlr);
+ 
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 27f35aa2d746d..514337c86d2c3 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -397,7 +397,7 @@ static int stm32_qspi_send(struct spi_mem *mem, const struct spi_mem_op *op)
+ 		ccr |= FIELD_PREP(CCR_ADSIZE_MASK, op->addr.nbytes - 1);
+ 	}
+ 
+-	if (op->dummy.buswidth && op->dummy.nbytes)
++	if (op->dummy.nbytes)
+ 		ccr |= FIELD_PREP(CCR_DCYC_MASK,
+ 				  op->dummy.nbytes * 8 / op->dummy.buswidth);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 3093e0041158c..f6c8565d9300a 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -453,6 +453,47 @@ int __spi_register_driver(struct module *owner, struct spi_driver *sdrv)
+ {
+ 	sdrv->driver.owner = owner;
+ 	sdrv->driver.bus = &spi_bus_type;
++
++	/*
++	 * For Really Good Reasons we use spi: modaliases rather than of:
++	 * modaliases for DT, so module autoloading won't work if we
++	 * don't have a spi_device_id as well as a compatible string.
++	 */
++	if (sdrv->driver.of_match_table) {
++		const struct of_device_id *of_id;
++
++		for (of_id = sdrv->driver.of_match_table; of_id->compatible[0];
++		     of_id++) {
++			const char *of_name;
++
++			/* Strip off any vendor prefix */
++			of_name = strnchr(of_id->compatible,
++					  sizeof(of_id->compatible), ',');
++			if (of_name)
++				of_name++;
++			else
++				of_name = of_id->compatible;
++
++			if (sdrv->id_table) {
++				const struct spi_device_id *spi_id;
++
++				for (spi_id = sdrv->id_table; spi_id->name[0];
++				     spi_id++)
++					if (strcmp(spi_id->name, of_name) == 0)
++						break;
++
++				if (spi_id->name[0])
++					continue;
++			} else {
++				if (strcmp(sdrv->driver.name, of_name) == 0)
++					continue;
++			}
++
++			pr_warn("SPI driver %s has no spi_device_id for %s\n",
++				sdrv->driver.name, of_id->compatible);
++		}
++	}
++
+ 	return driver_register(&sdrv->driver);
+ }
+ EXPORT_SYMBOL_GPL(__spi_register_driver);
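[Editor's note: the warning added to __spi_register_driver() above fires when a driver's DT compatible has no matching spi_device_id entry, since the SPI core emits spi: modaliases rather than of: ones and module autoloading then silently breaks. Below is a minimal sketch of what a driver is expected to carry to stay autoloadable; the "vendor,foo-spi" device and the foo_* names are hypothetical, not from the patch.]

	#include <linux/module.h>
	#include <linux/mod_devicetable.h>
	#include <linux/spi/spi.h>

	static int foo_probe(struct spi_device *spi)
	{
		return 0;	/* hardware setup elided */
	}

	static const struct of_device_id foo_of_match[] = {
		{ .compatible = "vendor,foo-spi" },
		{ }
	};
	MODULE_DEVICE_TABLE(of, foo_of_match);

	/* The name matches the compatible with the vendor prefix stripped,
	 * which is exactly the comparison the new core loop performs. */
	static const struct spi_device_id foo_spi_ids[] = {
		{ "foo-spi" },
		{ }
	};
	MODULE_DEVICE_TABLE(spi, foo_spi_ids);

	static struct spi_driver foo_driver = {
		.driver = {
			.name		= "foo-spi",
			.of_match_table	= foo_of_match,
		},
		.id_table	= foo_spi_ids,
		.probe		= foo_probe,
	};
	module_spi_driver(foo_driver);
	MODULE_LICENSE("GPL");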
+diff --git a/drivers/staging/ks7010/Kconfig b/drivers/staging/ks7010/Kconfig
+index 0987fdc2f70db..8ea6c09286798 100644
+--- a/drivers/staging/ks7010/Kconfig
++++ b/drivers/staging/ks7010/Kconfig
+@@ -5,6 +5,9 @@ config KS7010
+ 	select WIRELESS_EXT
+ 	select WEXT_PRIV
+ 	select FW_LOADER
++	select CRYPTO
++	select CRYPTO_HASH
++	select CRYPTO_MICHAEL_MIC
+ 	help
+ 	  This is a driver for KeyStream KS7010 based SDIO WIFI cards. It is
+ 	  found on at least later Spectec SDW-821 (FCC-ID "S2Y-WLAN-11G-K" only,
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+index 362ed44b4effa..e046489cd253b 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+@@ -835,7 +835,6 @@ static int lm3554_probe(struct i2c_client *client)
+ 	int err = 0;
+ 	struct lm3554 *flash;
+ 	unsigned int i;
+-	int ret;
+ 
+ 	flash = kzalloc(sizeof(*flash), GFP_KERNEL);
+ 	if (!flash)
+@@ -844,7 +843,7 @@ static int lm3554_probe(struct i2c_client *client)
+ 	flash->pdata = lm3554_platform_data_func(client);
+ 	if (IS_ERR(flash->pdata)) {
+ 		err = PTR_ERR(flash->pdata);
+-		goto fail1;
++		goto free_flash;
+ 	}
+ 
+ 	v4l2_i2c_subdev_init(&flash->sd, client, &lm3554_ops);
+@@ -852,12 +851,12 @@ static int lm3554_probe(struct i2c_client *client)
+ 	flash->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ 	flash->mode = ATOMISP_FLASH_MODE_OFF;
+ 	flash->timeout = LM3554_MAX_TIMEOUT / LM3554_TIMEOUT_STEPSIZE - 1;
+-	ret =
++	err =
+ 	    v4l2_ctrl_handler_init(&flash->ctrl_handler,
+ 				   ARRAY_SIZE(lm3554_controls));
+-	if (ret) {
++	if (err) {
+ 		dev_err(&client->dev, "error initialize a ctrl_handler.\n");
+-		goto fail3;
++		goto unregister_subdev;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(lm3554_controls); i++)
+@@ -866,14 +865,15 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	if (flash->ctrl_handler.error) {
+ 		dev_err(&client->dev, "ctrl_handler error.\n");
+-		goto fail3;
++		err = flash->ctrl_handler.error;
++		goto free_handler;
+ 	}
+ 
+ 	flash->sd.ctrl_handler = &flash->ctrl_handler;
+ 	err = media_entity_pads_init(&flash->sd.entity, 0, NULL);
+ 	if (err) {
+ 		dev_err(&client->dev, "error initialize a media entity.\n");
+-		goto fail2;
++		goto free_handler;
+ 	}
+ 
+ 	flash->sd.entity.function = MEDIA_ENT_F_FLASH;
+@@ -884,16 +884,27 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	err = lm3554_gpio_init(client);
+ 	if (err) {
+-		dev_err(&client->dev, "gpio request/direction_output fail");
+-		goto fail3;
++		dev_err(&client->dev, "gpio request/direction_output failed.\n");
++		goto cleanup_media;
++	}
++
++	err = atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
++	if (err) {
++		dev_err(&client->dev, "failed to register atomisp i2c module.\n");
++		goto uninit_gpio;
+ 	}
+-	return atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
+-fail3:
++
++	return 0;
++
++uninit_gpio:
++	lm3554_gpio_uninit(client);
++cleanup_media:
+ 	media_entity_cleanup(&flash->sd.entity);
++free_handler:
+ 	v4l2_ctrl_handler_free(&flash->ctrl_handler);
+-fail2:
++unregister_subdev:
+ 	v4l2_device_unregister_subdev(&flash->sd);
+-fail1:
++free_flash:
+ 	kfree(flash);
+ 
+ 	return err;
+diff --git a/drivers/staging/media/imx/imx-media-dev-common.c b/drivers/staging/media/imx/imx-media-dev-common.c
+index d186179388d03..4d873726a461b 100644
+--- a/drivers/staging/media/imx/imx-media-dev-common.c
++++ b/drivers/staging/media/imx/imx-media-dev-common.c
+@@ -367,6 +367,8 @@ struct imx_media_dev *imx_media_dev_init(struct device *dev,
+ 	imxmd->v4l2_dev.notify = imx_media_notify;
+ 	strscpy(imxmd->v4l2_dev.name, "imx-media",
+ 		sizeof(imxmd->v4l2_dev.name));
++	snprintf(imxmd->md.bus_info, sizeof(imxmd->md.bus_info),
++		 "platform:%s", dev_name(imxmd->md.dev));
+ 
+ 	media_device_init(&imxmd->md);
+ 
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index 38a2407645096..90c86ba5040e3 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -592,11 +592,12 @@ static const struct imgu_fmt *find_format(struct v4l2_format *f, u32 type)
+ static int imgu_vidioc_querycap(struct file *file, void *fh,
+ 				struct v4l2_capability *cap)
+ {
+-	struct imgu_video_device *node = file_to_intel_imgu_node(file);
++	struct imgu_device *imgu = video_drvdata(file);
+ 
+ 	strscpy(cap->driver, IMGU_NAME, sizeof(cap->driver));
+ 	strscpy(cap->card, IMGU_NAME, sizeof(cap->card));
+-	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", node->name);
++	snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s",
++		 pci_name(imgu->pci_dev));
+ 
+ 	return 0;
+ }
+@@ -696,7 +697,7 @@ static int imgu_fmt(struct imgu_device *imgu, unsigned int pipe, int node,
+ 
+ 		/* CSS expects some format on OUT queue */
+ 		if (i != IPU3_CSS_QUEUE_OUT &&
+-		    !imgu_pipe->nodes[inode].enabled) {
++		    !imgu_pipe->nodes[inode].enabled && !try) {
+ 			fmts[i] = NULL;
+ 			continue;
+ 		}
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 76e97cbe25123..951e19231da21 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -1015,8 +1015,9 @@ static int rkvdec_h264_adjust_fmt(struct rkvdec_ctx *ctx,
+ 	struct v4l2_pix_format_mplane *fmt = &f->fmt.pix_mp;
+ 
+ 	fmt->num_planes = 1;
+-	fmt->plane_fmt[0].sizeimage = fmt->width * fmt->height *
+-				      RKVDEC_H264_MAX_DEPTH_IN_BYTES;
++	if (!fmt->plane_fmt[0].sizeimage)
++		fmt->plane_fmt[0].sizeimage = fmt->width * fmt->height *
++					      RKVDEC_H264_MAX_DEPTH_IN_BYTES;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index 7131156c1f2cf..3f3f96488d741 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -280,31 +280,20 @@ static int rkvdec_try_output_fmt(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
+-static int rkvdec_s_fmt(struct file *file, void *priv,
+-			struct v4l2_format *f,
+-			int (*try_fmt)(struct file *, void *,
+-				       struct v4l2_format *))
++static int rkvdec_s_capture_fmt(struct file *file, void *priv,
++				struct v4l2_format *f)
+ {
+ 	struct rkvdec_ctx *ctx = fh_to_rkvdec_ctx(priv);
+ 	struct vb2_queue *vq;
++	int ret;
+ 
+-	if (!try_fmt)
+-		return -EINVAL;
+-
+-	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
++	/* Change not allowed if queue is busy */
++	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx,
++			     V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ 	if (vb2_is_busy(vq))
+ 		return -EBUSY;
+ 
+-	return try_fmt(file, priv, f);
+-}
+-
+-static int rkvdec_s_capture_fmt(struct file *file, void *priv,
+-				struct v4l2_format *f)
+-{
+-	struct rkvdec_ctx *ctx = fh_to_rkvdec_ctx(priv);
+-	int ret;
+-
+-	ret = rkvdec_s_fmt(file, priv, f, rkvdec_try_capture_fmt);
++	ret = rkvdec_try_capture_fmt(file, priv, f);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -319,9 +308,20 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv,
+ 	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+ 	const struct rkvdec_coded_fmt_desc *desc;
+ 	struct v4l2_format *cap_fmt;
+-	struct vb2_queue *peer_vq;
++	struct vb2_queue *peer_vq, *vq;
+ 	int ret;
+ 
++	/*
++	 * To support dynamic resolution change, the decoder admits a
++	 * resolution change as long as the pixelformat stays the same.
++	 * This can't be done while streaming.
++	 */
++	vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
++	if (vb2_is_streaming(vq) ||
++	    (vb2_is_busy(vq) &&
++	     f->fmt.pix_mp.pixelformat != ctx->coded_fmt.fmt.pix_mp.pixelformat))
++		return -EBUSY;
++
+ 	/*
+ 	 * Since format change on the OUTPUT queue will reset the CAPTURE
+ 	 * queue, we can't allow doing so when the CAPTURE queue has buffers
+@@ -331,7 +331,7 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv,
+ 	if (vb2_is_busy(peer_vq))
+ 		return -EBUSY;
+ 
+-	ret = rkvdec_s_fmt(file, priv, f, rkvdec_try_output_fmt);
++	ret = rkvdec_try_output_fmt(file, priv, f);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/staging/most/dim2/Makefile b/drivers/staging/most/dim2/Makefile
+index 861adacf6c729..5f9612af3fa3c 100644
+--- a/drivers/staging/most/dim2/Makefile
++++ b/drivers/staging/most/dim2/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_MOST_DIM2) += most_dim2.o
+ 
+-most_dim2-objs := dim2.o hal.o sysfs.o
++most_dim2-objs := dim2.o hal.o
+diff --git a/drivers/staging/most/dim2/dim2.c b/drivers/staging/most/dim2/dim2.c
+index 093ef9a2b2919..b72d7b9b45ea9 100644
+--- a/drivers/staging/most/dim2/dim2.c
++++ b/drivers/staging/most/dim2/dim2.c
+@@ -117,7 +117,8 @@ struct dim2_platform_data {
+ 	(((p)[1] == 0x18) && ((p)[2] == 0x05) && ((p)[3] == 0x0C) && \
+ 	 ((p)[13] == 0x3C) && ((p)[14] == 0x00) && ((p)[15] == 0x0A))
+ 
+-bool dim2_sysfs_get_state_cb(void)
++static ssize_t state_show(struct device *dev, struct device_attribute *attr,
++			  char *buf)
+ {
+ 	bool state;
+ 	unsigned long flags;
+@@ -126,9 +127,18 @@ bool dim2_sysfs_get_state_cb(void)
+ 	state = dim_get_lock_state();
+ 	spin_unlock_irqrestore(&dim_lock, flags);
+ 
+-	return state;
++	return sysfs_emit(buf, "%s\n", state ? "locked" : "");
+ }
+ 
++static DEVICE_ATTR_RO(state);
++
++static struct attribute *dim2_attrs[] = {
++	&dev_attr_state.attr,
++	NULL,
++};
++
++ATTRIBUTE_GROUPS(dim2);
++
+ /**
+  * dimcb_on_error - callback from HAL to report miscommunication between
+  * HDM and HAL
+@@ -866,16 +876,8 @@ static int dim2_probe(struct platform_device *pdev)
+ 		goto err_stop_thread;
+ 	}
+ 
+-	ret = dim2_sysfs_probe(&dev->dev);
+-	if (ret) {
+-		dev_err(&pdev->dev, "failed to create sysfs attribute\n");
+-		goto err_unreg_iface;
+-	}
+-
+ 	return 0;
+ 
+-err_unreg_iface:
+-	most_deregister_interface(&dev->most_iface);
+ err_stop_thread:
+ 	kthread_stop(dev->netinfo_task);
+ err_shutdown_dim:
+@@ -898,7 +900,6 @@ static int dim2_remove(struct platform_device *pdev)
+ 	struct dim2_hdm *dev = platform_get_drvdata(pdev);
+ 	unsigned long flags;
+ 
+-	dim2_sysfs_destroy(&dev->dev);
+ 	most_deregister_interface(&dev->most_iface);
+ 	kthread_stop(dev->netinfo_task);
+ 
+@@ -1082,6 +1083,7 @@ static struct platform_driver dim2_driver = {
+ 	.driver = {
+ 		.name = "hdm_dim2",
+ 		.of_match_table = dim2_of_match,
++		.dev_groups = dim2_groups,
+ 	},
+ };
+ 
+diff --git a/drivers/staging/most/dim2/sysfs.c b/drivers/staging/most/dim2/sysfs.c
+deleted file mode 100644
+index c85b2cdcdca3d..0000000000000
+--- a/drivers/staging/most/dim2/sysfs.c
++++ /dev/null
+@@ -1,49 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * sysfs.c - MediaLB sysfs information
+- *
+- * Copyright (C) 2015, Microchip Technology Germany II GmbH & Co. KG
+- */
+-
+-/* Author: Andrey Shvetsov <andrey.shvetsov@k2l.de> */
+-
+-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-
+-#include <linux/kernel.h>
+-#include "sysfs.h"
+-#include <linux/device.h>
+-
+-static ssize_t state_show(struct device *dev, struct device_attribute *attr,
+-			  char *buf)
+-{
+-	bool state = dim2_sysfs_get_state_cb();
+-
+-	return sprintf(buf, "%s\n", state ? "locked" : "");
+-}
+-
+-static DEVICE_ATTR_RO(state);
+-
+-static struct attribute *dev_attrs[] = {
+-	&dev_attr_state.attr,
+-	NULL,
+-};
+-
+-static struct attribute_group dev_attr_group = {
+-	.attrs = dev_attrs,
+-};
+-
+-static const struct attribute_group *dev_attr_groups[] = {
+-	&dev_attr_group,
+-	NULL,
+-};
+-
+-int dim2_sysfs_probe(struct device *dev)
+-{
+-	dev->groups = dev_attr_groups;
+-	return device_register(dev);
+-}
+-
+-void dim2_sysfs_destroy(struct device *dev)
+-{
+-	device_unregister(dev);
+-}
+diff --git a/drivers/staging/most/dim2/sysfs.h b/drivers/staging/most/dim2/sysfs.h
+index 24277a17cff3d..09115cf4ed00e 100644
+--- a/drivers/staging/most/dim2/sysfs.h
++++ b/drivers/staging/most/dim2/sysfs.h
+@@ -16,15 +16,4 @@ struct medialb_bus {
+ 	struct kobject kobj_group;
+ };
+ 
+-struct device;
+-
+-int dim2_sysfs_probe(struct device *dev);
+-void dim2_sysfs_destroy(struct device *dev);
+-
+-/*
+- * callback,
+- * must deliver MediaLB state as true if locked or false if unlocked
+- */
+-bool dim2_sysfs_get_state_cb(void);
+-
+ #endif	/* DIM2_SYSFS_H */
+diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
+index e7fcbc09f9dbc..bac111456fa1d 100644
+--- a/drivers/target/target_core_tmr.c
++++ b/drivers/target/target_core_tmr.c
+@@ -50,15 +50,6 @@ EXPORT_SYMBOL(core_tmr_alloc_req);
+ 
+ void core_tmr_release_req(struct se_tmr_req *tmr)
+ {
+-	struct se_device *dev = tmr->tmr_dev;
+-	unsigned long flags;
+-
+-	if (dev) {
+-		spin_lock_irqsave(&dev->se_tmr_lock, flags);
+-		list_del_init(&tmr->tmr_list);
+-		spin_unlock_irqrestore(&dev->se_tmr_lock, flags);
+-	}
+-
+ 	kfree(tmr);
+ }
+ 
+@@ -156,13 +147,6 @@ void core_tmr_abort_task(
+ 			se_cmd->state_active = false;
+ 			spin_unlock_irqrestore(&dev->queues[i].lock, flags);
+ 
+-			/*
+-			 * Ensure that this ABORT request is visible to the LU
+-			 * RESET code.
+-			 */
+-			if (!tmr->tmr_dev)
+-				WARN_ON_ONCE(transport_lookup_tmr_lun(tmr->task_cmd) < 0);
+-
+ 			if (dev->transport->tmr_notify)
+ 				dev->transport->tmr_notify(dev, TMR_ABORT_TASK,
+ 							   &aborted_list);
+@@ -234,6 +218,7 @@ static void core_tmr_drain_tmr_list(
+ 		}
+ 
+ 		list_move_tail(&tmr_p->tmr_list, &drain_tmr_list);
++		tmr_p->tmr_dev = NULL;
+ 	}
+ 	spin_unlock_irqrestore(&dev->se_tmr_lock, flags);
+ 
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 26ceabe34de55..4b41fbc54fa5e 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -676,6 +676,21 @@ static void target_remove_from_state_list(struct se_cmd *cmd)
+ 	spin_unlock_irqrestore(&dev->queues[cmd->cpuid].lock, flags);
+ }
+ 
++static void target_remove_from_tmr_list(struct se_cmd *cmd)
++{
++	struct se_device *dev = NULL;
++	unsigned long flags;
++
++	if (cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)
++		dev = cmd->se_tmr_req->tmr_dev;
++
++	if (dev) {
++		spin_lock_irqsave(&dev->se_tmr_lock, flags);
++		if (cmd->se_tmr_req->tmr_dev)
++			list_del_init(&cmd->se_tmr_req->tmr_list);
++		spin_unlock_irqrestore(&dev->se_tmr_lock, flags);
++	}
++}
+ /*
+  * This function is called by the target core after the target core has
+  * finished processing a SCSI command or SCSI TMF. Both the regular command
+@@ -687,13 +702,6 @@ static int transport_cmd_check_stop_to_fabric(struct se_cmd *cmd)
+ {
+ 	unsigned long flags;
+ 
+-	target_remove_from_state_list(cmd);
+-
+-	/*
+-	 * Clear struct se_cmd->se_lun before the handoff to FE.
+-	 */
+-	cmd->se_lun = NULL;
+-
+ 	spin_lock_irqsave(&cmd->t_state_lock, flags);
+ 	/*
+ 	 * Determine if frontend context caller is requesting the stopping of
+@@ -728,8 +736,16 @@ static void transport_lun_remove_cmd(struct se_cmd *cmd)
+ 	if (!lun)
+ 		return;
+ 
++	target_remove_from_state_list(cmd);
++	target_remove_from_tmr_list(cmd);
++
+ 	if (cmpxchg(&cmd->lun_ref_active, true, false))
+ 		percpu_ref_put(&lun->lun_ref);
++
++	/*
++	 * Clear struct se_cmd->se_lun before the handoff to FE.
++	 */
++	cmd->se_lun = NULL;
+ }
+ 
+ static void target_complete_failure_work(struct work_struct *work)
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index b1162e566a707..99a8d9f3e03ca 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -603,22 +603,21 @@ int get_temp_tsens_valid(const struct tsens_sensor *s, int *temp)
+ 	int ret;
+ 
+ 	/* VER_0 doesn't have VALID bit */
+-	if (tsens_version(priv) >= VER_0_1) {
+-		ret = regmap_field_read(priv->rf[valid_idx], &valid);
+-		if (ret)
+-			return ret;
+-		while (!valid) {
+-			/* Valid bit is 0 for 6 AHB clock cycles.
+-			 * At 19.2MHz, 1 AHB clock is ~60ns.
+-			 * We should enter this loop very, very rarely.
+-			 */
+-			ndelay(400);
+-			ret = regmap_field_read(priv->rf[valid_idx], &valid);
+-			if (ret)
+-				return ret;
+-		}
+-	}
++	if (tsens_version(priv) == VER_0)
++		goto get_temp;
++
++	/* Valid bit is 0 for 6 AHB clock cycles.
++	 * At 19.2MHz, 1 AHB clock is ~60ns.
++	 * We should enter this loop very, very rarely.
++	 * Wait 1 us, since that is the minimum the poll_timeout macro supports.
++	 * Old value was 400 ns.
++	 */
++	ret = regmap_field_read_poll_timeout(priv->rf[valid_idx], valid,
++					     valid, 1, 20 * USEC_PER_MSEC);
++	if (ret)
++		return ret;
+ 
++get_temp:
+ 	/* Valid bit is set, OK to read the temperature */
+ 	*temp = tsens_hw_to_mC(s, temp_idx);
+ 
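[Editor's note: the tsens hunk above replaces the open-coded valid-bit loop with regmap_field_read_poll_timeout(). Roughly, the macro expands to a loop like the sketch below — simplified; the real helper in <linux/regmap.h> also re-checks the condition once after the deadline — with priv and valid_idx as in the hunk.]

	u32 valid;
	int ret;
	ktime_t deadline = ktime_add_us(ktime_get(), 20 * USEC_PER_MSEC);

	for (;;) {
		ret = regmap_field_read(priv->rf[valid_idx], &valid);
		if (ret)
			break;			/* register read failed */
		if (valid)
			break;			/* condition met, ret == 0 */
		if (ktime_after(ktime_get(), deadline)) {
			ret = -ETIMEDOUT;
			break;
		}
		usleep_range(1, 2);		/* sleep_us == 1 in the call above */
	}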
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 51374f4e1ccaf..30134f49b037a 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -887,7 +887,7 @@ __thermal_cooling_device_register(struct device_node *np,
+ {
+ 	struct thermal_cooling_device *cdev;
+ 	struct thermal_zone_device *pos = NULL;
+-	int ret;
++	int id, ret;
+ 
+ 	if (!ops || !ops->get_max_state || !ops->get_cur_state ||
+ 	    !ops->set_cur_state)
+@@ -901,6 +901,11 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	if (ret < 0)
+ 		goto out_kfree_cdev;
+ 	cdev->id = ret;
++	id = ret;
++
++	ret = dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
++	if (ret)
++		goto out_ida_remove;
+ 
+ 	cdev->type = kstrdup(type ? type : "", GFP_KERNEL);
+ 	if (!cdev->type) {
+@@ -916,7 +921,6 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	cdev->device.class = &thermal_class;
+ 	cdev->devdata = devdata;
+ 	thermal_cooling_device_setup_sysfs(cdev);
+-	dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
+ 	ret = device_register(&cdev->device);
+ 	if (ret)
+ 		goto out_kfree_type;
+@@ -941,8 +945,9 @@ __thermal_cooling_device_register(struct device_node *np,
+ out_kfree_type:
+ 	kfree(cdev->type);
+ 	put_device(&cdev->device);
++	cdev = NULL;
+ out_ida_remove:
+-	ida_simple_remove(&thermal_cdev_ida, cdev->id);
++	ida_simple_remove(&thermal_cdev_ida, id);
+ out_kfree_cdev:
+ 	kfree(cdev);
+ 	return ERR_PTR(ret);
+@@ -1227,6 +1232,10 @@ thermal_zone_device_register(const char *type, int trips, int mask,
+ 	tz->id = id;
+ 	strlcpy(tz->type, type, sizeof(tz->type));
+ 
++	result = dev_set_name(&tz->device, "thermal_zone%d", tz->id);
++	if (result)
++		goto remove_id;
++
+ 	if (!ops->critical)
+ 		ops->critical = thermal_zone_device_critical;
+ 
+@@ -1248,7 +1257,6 @@ thermal_zone_device_register(const char *type, int trips, int mask,
+ 	/* A new thermal zone needs to be updated anyway. */
+ 	atomic_set(&tz->need_update, 1);
+ 
+-	dev_set_name(&tz->device, "thermal_zone%d", tz->id);
+ 	result = device_register(&tz->device);
+ 	if (result)
+ 		goto release_device;
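[Editor's note: both thermal hunks move dev_set_name() ahead of device_register() and propagate its result; the subtle part is the unwind. Once put_device() runs, the release hook may free cdev, so the error path keeps a plain copy of the ID and NULLs the pointer. A condensed sketch of that tail, mirroring the hunk with explanatory comments added:]

	id = cdev->id;			/* keep a copy that outlives cdev */
	...
	out_kfree_type:
		kfree(cdev->type);
		put_device(&cdev->device);	/* may drop the last ref and free cdev */
		cdev = NULL;			/* so the final kfree() is a no-op */
	out_ida_remove:
		ida_simple_remove(&thermal_cdev_ida, id);	/* not cdev->id */
	out_kfree_cdev:
		kfree(cdev);
		return ERR_PTR(ret);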
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index a3a0154da567d..49559731bbcf1 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -726,7 +726,7 @@ static struct platform_driver dw8250_platform_driver = {
+ 		.name		= "dw-apb-uart",
+ 		.pm		= &dw8250_pm_ops,
+ 		.of_match_table	= dw8250_of_match,
+-		.acpi_match_table = ACPI_PTR(dw8250_acpi_match),
++		.acpi_match_table = dw8250_acpi_match,
+ 	},
+ 	.probe			= dw8250_probe,
+ 	.remove			= dw8250_remove,
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 66374704747ec..e4dd82fd7c2a5 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2696,21 +2696,32 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ void serial8250_update_uartclk(struct uart_port *port, unsigned int uartclk)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
++	struct tty_port *tport = &port->state->port;
+ 	unsigned int baud, quot, frac = 0;
+ 	struct ktermios *termios;
++	struct tty_struct *tty;
+ 	unsigned long flags;
+ 
+-	mutex_lock(&port->state->port.mutex);
++	tty = tty_port_tty_get(tport);
++	if (!tty) {
++		mutex_lock(&tport->mutex);
++		port->uartclk = uartclk;
++		mutex_unlock(&tport->mutex);
++		return;
++	}
++
++	down_write(&tty->termios_rwsem);
++	mutex_lock(&tport->mutex);
+ 
+ 	if (port->uartclk == uartclk)
+ 		goto out_lock;
+ 
+ 	port->uartclk = uartclk;
+ 
+-	if (!tty_port_initialized(&port->state->port))
++	if (!tty_port_initialized(tport))
+ 		goto out_lock;
+ 
+-	termios = &port->state->port.tty->termios;
++	termios = &tty->termios;
+ 
+ 	baud = serial8250_get_baud_rate(port, termios, NULL);
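[Editor's note: the qbman hunks fold a spin_lock()/local_irq_save() pair into spin_lock_irqsave(). The split form takes the lock while interrupts are still enabled, so an interrupt arriving on the same CPU between the two calls could try to take the same lock and deadlock; the combined primitive closes that window. Schematically, with a hypothetical lock and flags rather than the driver's fields:]

	spinlock_t lock;
	unsigned long flags;

	/* Racy split form: IRQs are still on when the lock is taken. */
	spin_lock(&lock);
	local_irq_save(flags);
	/* ... critical section ... */
	local_irq_restore(flags);
	spin_unlock(&lock);

	/* Combined form: interrupts off and lock held as one operation. */
	spin_lock_irqsave(&lock, flags);
	/* ... critical section ... */
	spin_unlock_irqrestore(&lock, flags);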
+ 	quot = serial8250_get_divisor(port, baud, &frac);
+@@ -2727,7 +2738,9 @@ void serial8250_update_uartclk(struct uart_port *port, unsigned int uartclk)
+ 	serial8250_rpm_put(up);
+ 
+ out_lock:
+-	mutex_unlock(&port->state->port.mutex);
++	mutex_unlock(&tport->mutex);
++	up_write(&tty->termios_rwsem);
++	tty_kref_put(tty);
+ }
+ EXPORT_SYMBOL_GPL(serial8250_update_uartclk);
+ 
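[Editor's note: the serial8250_update_uartclk() rework above pins the tty with a reference and nests the locks so termios can be read safely: the termios_rwsem is the outer lock, the tty_port mutex the inner one. Reduced to the bare pattern, with names as in the hunk:]

	struct tty_struct *tty = tty_port_tty_get(tport);	/* ref or NULL */

	if (tty) {
		down_write(&tty->termios_rwsem);	/* outer: guards tty->termios */
		mutex_lock(&tport->mutex);		/* inner: guards port state */

		/* ... safe to read tty->termios and reprogram the divisor ... */

		mutex_unlock(&tport->mutex);
		up_write(&tty->termios_rwsem);
		tty_kref_put(tty);			/* drop the reference */
	}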
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index c719aa2b18328..d6d3db9c3b1f8 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1090,6 +1090,7 @@ static void cpm_put_poll_char(struct uart_port *port,
+ 	cpm_uart_early_write(pinfo, ch, 1, false);
+ }
+ 
++#ifdef CONFIG_SERIAL_CPM_CONSOLE
+ static struct uart_port *udbg_port;
+ 
+ static void udbg_cpm_putc(char c)
+@@ -1114,6 +1115,7 @@ static int udbg_cpm_getc(void)
+ 		cpu_relax();
+ 	return c;
+ }
++#endif /* CONFIG_SERIAL_CPM_CONSOLE */
+ 
+ #endif /* CONFIG_CONSOLE_POLL */
+ 
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 8b121cd869e94..51a9f9423b1a6 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2017,7 +2017,7 @@ imx_uart_console_write(struct console *co, const char *s, unsigned int count)
+  * If the port was already initialised (eg, by a boot loader),
+  * try to determine the current setup.
+  */
+-static void __init
++static void
+ imx_uart_console_get_options(struct imx_port *sport, int *baud,
+ 			     int *parity, int *bits)
+ {
+@@ -2076,7 +2076,7 @@ imx_uart_console_get_options(struct imx_port *sport, int *baud,
+ 	}
+ }
+ 
+-static int __init
++static int
+ imx_uart_console_setup(struct console *co, char *options)
+ {
+ 	struct imx_port *sport;
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 69092deba11f1..c0a8ca6eaec81 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -222,7 +222,11 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ 	if (retval == 0) {
+ 		if (uart_console(uport) && uport->cons->cflag) {
+ 			tty->termios.c_cflag = uport->cons->cflag;
++			tty->termios.c_ispeed = uport->cons->ispeed;
++			tty->termios.c_ospeed = uport->cons->ospeed;
+ 			uport->cons->cflag = 0;
++			uport->cons->ispeed = 0;
++			uport->cons->ospeed = 0;
+ 		}
+ 		/*
+ 		 * Initialise the hardware port settings.
+@@ -290,8 +294,11 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ 		/*
+ 		 * Turn off DTR and RTS early.
+ 		 */
+-		if (uport && uart_console(uport) && tty)
++		if (uport && uart_console(uport) && tty) {
+ 			uport->cons->cflag = tty->termios.c_cflag;
++			uport->cons->ispeed = tty->termios.c_ispeed;
++			uport->cons->ospeed = tty->termios.c_ospeed;
++		}
+ 
+ 		if (!tty || C_HUPCL(tty))
+ 			uart_port_dtr_rts(uport, 0);
+@@ -2094,8 +2101,11 @@ uart_set_options(struct uart_port *port, struct console *co,
+ 	 * Allow the setting of the UART parameters with a NULL console
+ 	 * too:
+ 	 */
+-	if (co)
++	if (co) {
+ 		co->cflag = termios.c_cflag;
++		co->ispeed = termios.c_ispeed;
++		co->ospeed = termios.c_ospeed;
++	}
+ 
+ 	return 0;
+ }
+@@ -2229,6 +2239,8 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
+ 		 */
+ 		memset(&termios, 0, sizeof(struct ktermios));
+ 		termios.c_cflag = uport->cons->cflag;
++		termios.c_ispeed = uport->cons->ispeed;
++		termios.c_ospeed = uport->cons->ospeed;
+ 
+ 		/*
+ 		 * If that's unset, use the tty termios setting.
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 962e522ccc45c..d5e243908d9fd 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -601,9 +601,10 @@ static void cdns_uart_start_tx(struct uart_port *port)
+ 	if (uart_circ_empty(&port->state->xmit))
+ 		return;
+ 
++	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_ISR);
++
+ 	cdns_uart_handle_tx(port);
+ 
+-	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_ISR);
+ 	/* Enable the TX Empty interrupt */
+ 	writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IER);
+ }
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 2b18f5088ae4a..a56f06368d142 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -514,7 +514,7 @@ int hw_device_reset(struct ci_hdrc *ci)
+ 	return 0;
+ }
+ 
+-static irqreturn_t ci_irq(int irq, void *data)
++static irqreturn_t ci_irq_handler(int irq, void *data)
+ {
+ 	struct ci_hdrc *ci = data;
+ 	irqreturn_t ret = IRQ_NONE;
+@@ -567,6 +567,15 @@ static irqreturn_t ci_irq(int irq, void *data)
+ 	return ret;
+ }
+ 
++static void ci_irq(struct ci_hdrc *ci)
++{
++	unsigned long flags;
++
++	local_irq_save(flags);
++	ci_irq_handler(ci->irq, ci);
++	local_irq_restore(flags);
++}
++
+ static int ci_cable_notifier(struct notifier_block *nb, unsigned long event,
+ 			     void *ptr)
+ {
+@@ -576,7 +585,7 @@ static int ci_cable_notifier(struct notifier_block *nb, unsigned long event,
+ 	cbl->connected = event;
+ 	cbl->changed = true;
+ 
+-	ci_irq(ci->irq, ci);
++	ci_irq(ci);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -617,7 +626,7 @@ static int ci_usb_role_switch_set(struct usb_role_switch *sw,
+ 	if (cable) {
+ 		cable->changed = true;
+ 		cable->connected = false;
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 		spin_unlock_irqrestore(&ci->lock, flags);
+ 		if (ci->wq && role != USB_ROLE_NONE)
+ 			flush_workqueue(ci->wq);
+@@ -635,7 +644,7 @@ static int ci_usb_role_switch_set(struct usb_role_switch *sw,
+ 	if (cable) {
+ 		cable->changed = true;
+ 		cable->connected = true;
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 	}
+ 	spin_unlock_irqrestore(&ci->lock, flags);
+ 	pm_runtime_put_sync(ci->dev);
+@@ -1174,7 +1183,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	ret = devm_request_irq(dev, ci->irq, ci_irq, IRQF_SHARED,
++	ret = devm_request_irq(dev, ci->irq, ci_irq_handler, IRQF_SHARED,
+ 			ci->platdata->name, ci);
+ 	if (ret)
+ 		goto stop;
+@@ -1295,11 +1304,11 @@ static void ci_extcon_wakeup_int(struct ci_hdrc *ci)
+ 
+ 	if (!IS_ERR(cable_id->edev) && ci->is_otg &&
+ 		(otgsc & OTGSC_IDIE) && (otgsc & OTGSC_IDIS))
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ 
+ 	if (!IS_ERR(cable_vbus->edev) && ci->is_otg &&
+ 		(otgsc & OTGSC_BSVIE) && (otgsc & OTGSC_BSVIS))
+-		ci_irq(ci->irq, ci);
++		ci_irq(ci);
+ }
+ 
+ static int ci_controller_resume(struct device *dev)
+diff --git a/drivers/usb/dwc2/drd.c b/drivers/usb/dwc2/drd.c
+index 2d4176f5788eb..aa6eb76f64ddc 100644
+--- a/drivers/usb/dwc2/drd.c
++++ b/drivers/usb/dwc2/drd.c
+@@ -7,6 +7,7 @@
+  * Author(s): Amelie Delaunay <amelie.delaunay@st.com>
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/iopoll.h>
+ #include <linux/platform_device.h>
+ #include <linux/usb/role.h>
+@@ -25,9 +26,9 @@ static void dwc2_ovr_init(struct dwc2_hsotg *hsotg)
+ 	gotgctl &= ~(GOTGCTL_BVALOVAL | GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL);
+ 	dwc2_writel(hsotg, gotgctl, GOTGCTL);
+ 
+-	dwc2_force_mode(hsotg, false);
+-
+ 	spin_unlock_irqrestore(&hsotg->lock, flags);
++
++	dwc2_force_mode(hsotg, (hsotg->dr_mode == USB_DR_MODE_HOST));
+ }
+ 
+ static int dwc2_ovr_avalid(struct dwc2_hsotg *hsotg, bool valid)
+@@ -39,6 +40,7 @@ static int dwc2_ovr_avalid(struct dwc2_hsotg *hsotg, bool valid)
+ 	    (!valid && !(gotgctl & GOTGCTL_ASESVLD)))
+ 		return -EALREADY;
+ 
++	gotgctl &= ~GOTGCTL_BVALOVAL;
+ 	if (valid)
+ 		gotgctl |= GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL;
+ 	else
+@@ -57,6 +59,7 @@ static int dwc2_ovr_bvalid(struct dwc2_hsotg *hsotg, bool valid)
+ 	    (!valid && !(gotgctl & GOTGCTL_BSESVLD)))
+ 		return -EALREADY;
+ 
++	gotgctl &= ~GOTGCTL_AVALOVAL;
+ 	if (valid)
+ 		gotgctl |= GOTGCTL_BVALOVAL | GOTGCTL_VBVALOVAL;
+ 	else
+@@ -86,6 +89,20 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 	}
+ #endif
+ 
++	/*
++	 * In the USB_DR_MODE_PERIPHERAL case, the clock is disabled at the
++	 * end of probe and enabled in udc_start. If the role-switch set is
++	 * called before udc_start, we need to enable the clock to read and
++	 * write the GOTGCTL and GUSBCFG registers to override the mode and
++	 * sessions. This is the case when a cable is plugged in at boot.
++	 */
++	if (!hsotg->ll_hw_enabled && hsotg->clk) {
++		int ret = clk_prepare_enable(hsotg->clk);
++
++		if (ret)
++			return ret;
++	}
++
+ 	spin_lock_irqsave(&hsotg->lock, flags);
+ 
+ 	if (role == USB_ROLE_HOST) {
+@@ -110,6 +127,9 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
+ 		/* This will raise a Connector ID Status Change Interrupt */
+ 		dwc2_force_mode(hsotg, role == USB_ROLE_HOST);
+ 
++	if (!hsotg->ll_hw_enabled && hsotg->clk)
++		clk_disable_unprepare(hsotg->clk);
++
+ 	dev_dbg(hsotg->dev, "%s-session valid\n",
+ 		role == USB_ROLE_NONE ? "No" :
+ 		role == USB_ROLE_HOST ? "A" : "B");
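[Editor's note: the dwc2 change brackets the role-switch register writes with a temporary clock enable for the peripheral-only case. The hunk re-evaluates the !ll_hw_enabled && clk condition at the end; the sketch below caches it in a local for brevity, an illustrative variation rather than the patch's exact code.]

	bool need_clk = !hsotg->ll_hw_enabled && hsotg->clk;
	int ret;

	if (need_clk) {
		ret = clk_prepare_enable(hsotg->clk);	/* gate open before MMIO */
		if (ret)
			return ret;
	}

	/* ... GOTGCTL/GUSBCFG accesses with the clock guaranteed running ... */

	if (need_clk)
		clk_disable_unprepare(hsotg->clk);	/* restore probe-time state */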
+diff --git a/drivers/usb/gadget/legacy/hid.c b/drivers/usb/gadget/legacy/hid.c
+index 5b27d289443fe..3912cc805f3af 100644
+--- a/drivers/usb/gadget/legacy/hid.c
++++ b/drivers/usb/gadget/legacy/hid.c
+@@ -99,8 +99,10 @@ static int do_config(struct usb_configuration *c)
+ 
+ 	list_for_each_entry(e, &hidg_func_list, node) {
+ 		e->f = usb_get_function(e->fi);
+-		if (IS_ERR(e->f))
++		if (IS_ERR(e->f)) {
++			status = PTR_ERR(e->f);
+ 			goto put;
++		}
+ 		status = usb_add_function(c, e->f);
+ 		if (status < 0) {
+ 			usb_put_function(e->f);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 151e93c4bd574..12a69ff91469f 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -257,7 +257,6 @@ static void xhci_common_hub_descriptor(struct xhci_hcd *xhci,
+ {
+ 	u16 temp;
+ 
+-	desc->bPwrOn2PwrGood = 10;	/* xhci section 5.4.9 says 20ms max */
+ 	desc->bHubContrCurrent = 0;
+ 
+ 	desc->bNbrPorts = ports;
+@@ -292,6 +291,7 @@ static void xhci_usb2_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+ 	desc->bDescriptorType = USB_DT_HUB;
+ 	temp = 1 + (ports / 8);
+ 	desc->bDescLength = USB_DT_HUB_NONVAR_SIZE + 2 * temp;
++	desc->bPwrOn2PwrGood = 10;	/* xhci section 5.4.8 says 20ms */
+ 
+ 	/* The Device Removable bits are reported on a byte granularity.
+ 	 * If the port doesn't exist within that byte, the bit is set to 0.
+@@ -344,6 +344,7 @@ static void xhci_usb3_hub_descriptor(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+ 	xhci_common_hub_descriptor(xhci, desc, ports);
+ 	desc->bDescriptorType = USB_DT_SS_HUB;
+ 	desc->bDescLength = USB_DT_SS_HUB_SIZE;
++	desc->bPwrOn2PwrGood = 50;	/* usb 3.1 may fail if less than 100ms */
+ 
+ 	/* header decode latency should be zero for roothubs,
+ 	 * see section 4.23.5.2.
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 003c5f0a8760f..1ccb926ef2427 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -65,6 +65,13 @@
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2			0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1			0x43bc
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1		0x161a
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2		0x161b
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3		0x161d
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4		0x161e
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5		0x15d6
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6		0x15d7
++
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI			0x1242
+@@ -317,6 +324,15 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+ 		xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ 
++	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
++	    (pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_1 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_2 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
++		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
++
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+ 				"QUIRK: Resetting on resume");
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index efbd317f2f252..988a8c02e7e24 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -99,10 +99,6 @@ struct iowarrior {
+ /*    globals   */
+ /*--------------*/
+ 
+-/*
+- *  USB spec identifies 5 second timeouts.
+- */
+-#define GET_TIMEOUT 5
+ #define USB_REQ_GET_REPORT  0x01
+ //#if 0
+ static int usb_get_report(struct usb_device *dev,
+@@ -114,7 +110,7 @@ static int usb_get_report(struct usb_device *dev,
+ 			       USB_DIR_IN | USB_TYPE_CLASS |
+ 			       USB_RECIP_INTERFACE, (type << 8) + id,
+ 			       inter->desc.bInterfaceNumber, buf, size,
+-			       GET_TIMEOUT*HZ);
++			       USB_CTRL_GET_TIMEOUT);
+ }
+ //#endif
+ 
+@@ -129,7 +125,7 @@ static int usb_set_report(struct usb_interface *intf, unsigned char type,
+ 			       USB_TYPE_CLASS | USB_RECIP_INTERFACE,
+ 			       (type << 8) + id,
+ 			       intf->cur_altsetting->desc.bInterfaceNumber, buf,
+-			       size, HZ);
++			       size, 1000);
+ }
+ 
+ /*---------------------*/
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index 8de143807c1ae..4d61df6a9b5c8 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -120,7 +120,7 @@ config USB_MUSB_MEDIATEK
+ 	tristate "MediaTek platforms"
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
+ 	depends on NOP_USB_XCEIV
+-	depends on GENERIC_PHY
++	select GENERIC_PHY
+ 	select USB_ROLE_SWITCH
+ 
+ comment "MUSB DMA mode"
+diff --git a/drivers/usb/serial/keyspan.c b/drivers/usb/serial/keyspan.c
+index 87b89c99d5177..1cfcd805f2868 100644
+--- a/drivers/usb/serial/keyspan.c
++++ b/drivers/usb/serial/keyspan.c
+@@ -2890,22 +2890,22 @@ static int keyspan_port_probe(struct usb_serial_port *port)
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->in_buffer); ++i) {
+ 		p_priv->in_buffer[i] = kzalloc(IN_BUFLEN, GFP_KERNEL);
+ 		if (!p_priv->in_buffer[i])
+-			goto err_in_buffer;
++			goto err_free_in_buffer;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->out_buffer); ++i) {
+ 		p_priv->out_buffer[i] = kzalloc(OUT_BUFLEN, GFP_KERNEL);
+ 		if (!p_priv->out_buffer[i])
+-			goto err_out_buffer;
++			goto err_free_out_buffer;
+ 	}
+ 
+ 	p_priv->inack_buffer = kzalloc(INACK_BUFLEN, GFP_KERNEL);
+ 	if (!p_priv->inack_buffer)
+-		goto err_inack_buffer;
++		goto err_free_out_buffer;
+ 
+ 	p_priv->outcont_buffer = kzalloc(OUTCONT_BUFLEN, GFP_KERNEL);
+ 	if (!p_priv->outcont_buffer)
+-		goto err_outcont_buffer;
++		goto err_free_inack_buffer;
+ 
+ 	p_priv->device_details = d_details;
+ 
+@@ -2951,15 +2951,14 @@ static int keyspan_port_probe(struct usb_serial_port *port)
+ 
+ 	return 0;
+ 
+-err_outcont_buffer:
++err_free_inack_buffer:
+ 	kfree(p_priv->inack_buffer);
+-err_inack_buffer:
++err_free_out_buffer:
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->out_buffer); ++i)
+ 		kfree(p_priv->out_buffer[i]);
+-err_out_buffer:
++err_free_in_buffer:
+ 	for (i = 0; i < ARRAY_SIZE(p_priv->in_buffer); ++i)
+ 		kfree(p_priv->in_buffer[i]);
+-err_in_buffer:
+ 	kfree(p_priv);
+ 
+ 	return -ENOMEM;
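[Editor's note: the keyspan relabeling above follows the standard kernel unwind convention: each label is named for what it frees, and a failure at step N jumps to the label that releases step N-1. In schematic form, with hypothetical alloc_*/free_* helpers:]

	a = alloc_a();
	if (!a)
		return -ENOMEM;

	b = alloc_b();
	if (!b)
		goto err_free_a;

	c = alloc_c();
	if (!c)
		goto err_free_b;

	return 0;

	err_free_b:
		free_b(b);
	err_free_a:
		free_a(a);
		return -ENOMEM;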
+diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
+index a0418f23b4aae..ab480f38523aa 100644
+--- a/drivers/usb/typec/Kconfig
++++ b/drivers/usb/typec/Kconfig
+@@ -65,9 +65,9 @@ config TYPEC_HD3SS3220
+ 
+ config TYPEC_STUSB160X
+ 	tristate "STMicroelectronics STUSB160x Type-C controller driver"
+-	depends on I2C
+-	depends on REGMAP_I2C
+ 	depends on USB_ROLE_SWITCH || !USB_ROLE_SWITCH
++	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  Say Y or M here if your system has STMicroelectronics STUSB160x
+ 	  Type-C port controller.
+diff --git a/drivers/video/backlight/backlight.c b/drivers/video/backlight/backlight.c
+index 537fe1b376ad7..fc990e576340b 100644
+--- a/drivers/video/backlight/backlight.c
++++ b/drivers/video/backlight/backlight.c
+@@ -688,12 +688,6 @@ static struct backlight_device *of_find_backlight(struct device *dev)
+ 			of_node_put(np);
+ 			if (!bd)
+ 				return ERR_PTR(-EPROBE_DEFER);
+-			/*
+-			 * Note: gpio_backlight uses brightness as
+-			 * power state during probe
+-			 */
+-			if (!bd->props.brightness)
+-				bd->props.brightness = bd->props.max_brightness;
+ 		}
+ 	}
+ 
+diff --git a/drivers/video/fbdev/chipsfb.c b/drivers/video/fbdev/chipsfb.c
+index 998067b701fa0..393894af26f84 100644
+--- a/drivers/video/fbdev/chipsfb.c
++++ b/drivers/video/fbdev/chipsfb.c
+@@ -331,7 +331,7 @@ static const struct fb_var_screeninfo chipsfb_var = {
+ 
+ static void init_chips(struct fb_info *p, unsigned long addr)
+ {
+-	memset(p->screen_base, 0, 0x100000);
++	fb_memset(p->screen_base, 0, 0x100000);
+ 
+ 	p->fix = chipsfb_fix;
+ 	p->fix.smem_start = addr;
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 8ea8f079cde26..edca3703b9640 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -47,6 +47,8 @@ static bool use_bgrt = true;
+ static bool request_mem_succeeded = false;
+ static u64 mem_flags = EFI_MEMORY_WC | EFI_MEMORY_UC;
+ 
++static struct pci_dev *efifb_pci_dev;	/* dev with BAR covering the efifb */
++
+ static struct fb_var_screeninfo efifb_defined = {
+ 	.activate		= FB_ACTIVATE_NOW,
+ 	.height			= -1,
+@@ -243,6 +245,9 @@ static inline void efifb_show_boot_graphics(struct fb_info *info) {}
+ 
+ static void efifb_destroy(struct fb_info *info)
+ {
++	if (efifb_pci_dev)
++		pm_runtime_put(&efifb_pci_dev->dev);
++
+ 	if (info->screen_base) {
+ 		if (mem_flags & (EFI_MEMORY_UC | EFI_MEMORY_WC))
+ 			iounmap(info->screen_base);
+@@ -333,7 +338,6 @@ ATTRIBUTE_GROUPS(efifb);
+ 
+ static bool pci_dev_disabled;	/* FB base matches BAR of a disabled device */
+ 
+-static struct pci_dev *efifb_pci_dev;	/* dev with BAR covering the efifb */
+ static struct resource *bar_resource;
+ static u64 bar_offset;
+ 
+@@ -569,17 +573,22 @@ static int efifb_probe(struct platform_device *dev)
+ 		pr_err("efifb: cannot allocate colormap\n");
+ 		goto err_groups;
+ 	}
++
++	if (efifb_pci_dev)
++		WARN_ON(pm_runtime_get_sync(&efifb_pci_dev->dev) < 0);
++
+ 	err = register_framebuffer(info);
+ 	if (err < 0) {
+ 		pr_err("efifb: cannot register framebuffer\n");
+-		goto err_fb_dealoc;
++		goto err_put_rpm_ref;
+ 	}
+ 	fb_info(info, "%s frame buffer device\n", info->fix.id);
+-	if (efifb_pci_dev)
+-		pm_runtime_get_sync(&efifb_pci_dev->dev);
+ 	return 0;
+ 
+-err_fb_dealoc:
++err_put_rpm_ref:
++	if (efifb_pci_dev)
++		pm_runtime_put(&efifb_pci_dev->dev);
++
+ 	fb_dealloc_cmap(&info->cmap);
+ err_groups:
+ 	sysfs_remove_groups(&dev->dev.kobj, efifb_groups);
+@@ -603,8 +612,6 @@ static int efifb_remove(struct platform_device *pdev)
+ 	unregister_framebuffer(info);
+ 	sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
+ 	framebuffer_release(info);
+-	if (efifb_pci_dev)
+-		pm_runtime_put(&efifb_pci_dev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 3035bb6f54585..d1f47327f6cfe 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1065,6 +1065,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
+ 
+ 	head = vq->packed.next_avail_idx;
+ 	desc = alloc_indirect_packed(total_sg, gfp);
++	if (!desc)
++		return -ENOMEM;
+ 
+ 	if (unlikely(vq->vq.num_free < 1)) {
+ 		pr_debug("Can't add buf len 1 - avail = 0\n");
+@@ -1176,6 +1178,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 	unsigned int i, n, c, descs_used, err_idx;
+ 	__le16 head_flags, flags;
+ 	u16 head, id, prev, curr, avail_used_flags;
++	int err;
+ 
+ 	START_USE(vq);
+ 
+@@ -1191,9 +1194,14 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 
+ 	BUG_ON(total_sg == 0);
+ 
+-	if (virtqueue_use_indirect(_vq, total_sg))
+-		return virtqueue_add_indirect_packed(vq, sgs, total_sg,
+-				out_sgs, in_sgs, data, gfp);
++	if (virtqueue_use_indirect(_vq, total_sg)) {
++		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
++						    in_sgs, data, gfp);
++		if (err != -ENOMEM)
++			return err;
++
++		/* fall back on direct */
++	}
+ 
+ 	head = vq->packed.next_avail_idx;
+ 	avail_used_flags = vq->packed.avail_used_flags;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 71cf3f503f16b..c7be7dd56cde8 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1690,7 +1690,7 @@ config SIBYTE_WDOG
+ 
+ config AR7_WDT
+ 	tristate "TI AR7 Watchdog Timer"
+-	depends on AR7 || (MIPS && COMPILE_TEST)
++	depends on AR7 || (MIPS && 32BIT && COMPILE_TEST)
+ 	help
+ 	  Hardware driver for the TI AR7 Watchdog Timer.
+ 
+diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c
+index f60beec1bbaea..f7d82d2619133 100644
+--- a/drivers/watchdog/f71808e_wdt.c
++++ b/drivers/watchdog/f71808e_wdt.c
+@@ -228,15 +228,17 @@ static int watchdog_set_timeout(int timeout)
+ 
+ 	mutex_lock(&watchdog.lock);
+ 
+-	watchdog.timeout = timeout;
+ 	if (timeout > 0xff) {
+ 		watchdog.timer_val = DIV_ROUND_UP(timeout, 60);
+ 		watchdog.minutes_mode = true;
++		timeout = watchdog.timer_val * 60;
+ 	} else {
+ 		watchdog.timer_val = timeout;
+ 		watchdog.minutes_mode = false;
+ 	}
+ 
++	watchdog.timeout = timeout;
++
+ 	mutex_unlock(&watchdog.lock);
+ 
+ 	return 0;
+diff --git a/drivers/watchdog/omap_wdt.c b/drivers/watchdog/omap_wdt.c
+index 1616f93dfad7f..74d785b2b478f 100644
+--- a/drivers/watchdog/omap_wdt.c
++++ b/drivers/watchdog/omap_wdt.c
+@@ -268,8 +268,12 @@ static int omap_wdt_probe(struct platform_device *pdev)
+ 			wdev->wdog.bootstatus = WDIOF_CARDRESET;
+ 	}
+ 
+-	if (!early_enable)
++	if (early_enable) {
++		omap_wdt_start(&wdev->wdog);
++		set_bit(WDOG_HW_RUNNING, &wdev->wdog.status);
++	} else {
+ 		omap_wdt_disable(wdev);
++	}
+ 
+ 	ret = watchdog_register_device(&wdev->wdog);
+ 	if (ret) {
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 3a50f097ed3ed..8db96b5e72536 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -58,6 +58,7 @@
+ #include <linux/percpu-defs.h>
+ #include <linux/slab.h>
+ #include <linux/sysctl.h>
++#include <linux/moduleparam.h>
+ 
+ #include <asm/page.h>
+ #include <asm/tlb.h>
+@@ -73,6 +74,12 @@
+ #include <xen/page.h>
+ #include <xen/mem-reservation.h>
+ 
++#undef MODULE_PARAM_PREFIX
++#define MODULE_PARAM_PREFIX "xen."
++
++static uint __read_mostly balloon_boot_timeout = 180;
++module_param(balloon_boot_timeout, uint, 0444);
++
+ static int xen_hotplug_unpopulated;
+ 
+ #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
+@@ -125,12 +132,12 @@ static struct ctl_table xen_root[] = {
+  * BP_ECANCELED: error, balloon operation canceled.
+  */
+ 
+-enum bp_state {
++static enum bp_state {
+ 	BP_DONE,
+ 	BP_WAIT,
+ 	BP_EAGAIN,
+ 	BP_ECANCELED
+-};
++} balloon_state = BP_DONE;
+ 
+ /* Main waiting point for xen-balloon thread. */
+ static DECLARE_WAIT_QUEUE_HEAD(balloon_thread_wq);
+@@ -199,18 +206,15 @@ static struct page *balloon_next_page(struct page *page)
+ 	return list_entry(next, struct page, lru);
+ }
+ 
+-static enum bp_state update_schedule(enum bp_state state)
++static void update_schedule(void)
+ {
+-	if (state == BP_WAIT)
+-		return BP_WAIT;
+-
+-	if (state == BP_ECANCELED)
+-		return BP_ECANCELED;
++	if (balloon_state == BP_WAIT || balloon_state == BP_ECANCELED)
++		return;
+ 
+-	if (state == BP_DONE) {
++	if (balloon_state == BP_DONE) {
+ 		balloon_stats.schedule_delay = 1;
+ 		balloon_stats.retry_count = 1;
+-		return BP_DONE;
++		return;
+ 	}
+ 
+ 	++balloon_stats.retry_count;
+@@ -219,7 +223,8 @@ static enum bp_state update_schedule(enum bp_state state)
+ 			balloon_stats.retry_count > balloon_stats.max_retry_count) {
+ 		balloon_stats.schedule_delay = 1;
+ 		balloon_stats.retry_count = 1;
+-		return BP_ECANCELED;
++		balloon_state = BP_ECANCELED;
++		return;
+ 	}
+ 
+ 	balloon_stats.schedule_delay <<= 1;
+@@ -227,7 +232,7 @@ static enum bp_state update_schedule(enum bp_state state)
+ 	if (balloon_stats.schedule_delay > balloon_stats.max_schedule_delay)
+ 		balloon_stats.schedule_delay = balloon_stats.max_schedule_delay;
+ 
+-	return BP_EAGAIN;
++	balloon_state = BP_EAGAIN;
+ }
+ 
+ #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
+@@ -494,9 +499,9 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
+  * Stop waiting if either state is BP_DONE and ballooning action is
+  * needed, or if the credit has changed while state is not BP_DONE.
+  */
+-static bool balloon_thread_cond(enum bp_state state, long credit)
++static bool balloon_thread_cond(long credit)
+ {
+-	if (state == BP_DONE)
++	if (balloon_state == BP_DONE)
+ 		credit = 0;
+ 
+ 	return current_credit() != credit || kthread_should_stop();
+@@ -510,13 +515,12 @@ static bool balloon_thread_cond(enum bp_state state, long credit)
+  */
+ static int balloon_thread(void *unused)
+ {
+-	enum bp_state state = BP_DONE;
+ 	long credit;
+ 	unsigned long timeout;
+ 
+ 	set_freezable();
+ 	for (;;) {
+-		switch (state) {
++		switch (balloon_state) {
+ 		case BP_DONE:
+ 		case BP_ECANCELED:
+ 			timeout = 3600 * HZ;
+@@ -532,7 +536,7 @@ static int balloon_thread(void *unused)
+ 		credit = current_credit();
+ 
+ 		wait_event_freezable_timeout(balloon_thread_wq,
+-			balloon_thread_cond(state, credit), timeout);
++			balloon_thread_cond(credit), timeout);
+ 
+ 		if (kthread_should_stop())
+ 			return 0;
+@@ -543,22 +547,23 @@ static int balloon_thread(void *unused)
+ 
+ 		if (credit > 0) {
+ 			if (balloon_is_inflated())
+-				state = increase_reservation(credit);
++				balloon_state = increase_reservation(credit);
+ 			else
+-				state = reserve_additional_memory();
++				balloon_state = reserve_additional_memory();
+ 		}
+ 
+ 		if (credit < 0) {
+ 			long n_pages;
+ 
+ 			n_pages = min(-credit, si_mem_available());
+-			state = decrease_reservation(n_pages, GFP_BALLOON);
+-			if (state == BP_DONE && n_pages != -credit &&
++			balloon_state = decrease_reservation(n_pages,
++							     GFP_BALLOON);
++			if (balloon_state == BP_DONE && n_pages != -credit &&
+ 			    n_pages < totalreserve_pages)
+-				state = BP_EAGAIN;
++				balloon_state = BP_EAGAIN;
+ 		}
+ 
+-		state = update_schedule(state);
++		update_schedule();
+ 
+ 		mutex_unlock(&balloon_mutex);
+ 
+@@ -765,3 +770,38 @@ static int __init balloon_init(void)
+ 	return 0;
+ }
+ subsys_initcall(balloon_init);
++
++static int __init balloon_wait_finish(void)
++{
++	long credit, last_credit = 0;
++	unsigned long last_changed = 0;
++
++	if (!xen_domain())
++		return -ENODEV;
++
++	/* PV guests don't need to wait. */
++	if (xen_pv_domain() || !current_credit())
++		return 0;
++
++	pr_notice("Waiting for initial ballooning down to finish.\n");
++
++	while ((credit = current_credit()) < 0) {
++		if (credit != last_credit) {
++			last_changed = jiffies;
++			last_credit = credit;
++		}
++		if (balloon_state == BP_ECANCELED) {
++			pr_warn_once("Initial ballooning failed, %ld pages need to be freed.\n",
++				     -credit);
++			if (jiffies - last_changed >= HZ * balloon_boot_timeout)
++				panic("Initial ballooning failed!\n");
++		}
++
++		schedule_timeout_interruptible(HZ / 10);
++	}
++
++	pr_notice("Initial ballooning down finished.\n");
++
++	return 0;
++}
++late_initcall_sync(balloon_wait_finish);
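
The new late initcall polls the remaining credit every 100 ms and only
panics once the credit has stopped changing for balloon_boot_timeout
seconds while the worker reports BP_ECANCELED. A simplified userspace
sketch of that stall detector (the BP_ECANCELED gate and the sleep are
left out, and all names are made up):

	#include <stdio.h>
	#include <time.h>

	int main(void)
	{
		long credit = -5, last_credit = 0;
		time_t last_changed = 0, timeout = 180;  /* seconds */

		while (credit < 0) {
			if (credit != last_credit) {
				last_changed = time(NULL);
				last_credit = credit;
			}
			if (time(NULL) - last_changed >= timeout) {
				fprintf(stderr, "stalled, giving up\n");
				return 1;
			}
			credit++;  /* stand-in for worker progress */
		}
		printf("ballooning finished\n");
		return 0;
	}
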
+diff --git a/drivers/xen/xen-pciback/conf_space_capability.c b/drivers/xen/xen-pciback/conf_space_capability.c
+index 22f13abbe9130..5e53b4817f167 100644
+--- a/drivers/xen/xen-pciback/conf_space_capability.c
++++ b/drivers/xen/xen-pciback/conf_space_capability.c
+@@ -160,7 +160,7 @@ static void *pm_ctrl_init(struct pci_dev *dev, int offset)
+ 	}
+ 
+ out:
+-	return ERR_PTR(err);
++	return err ? ERR_PTR(err) : NULL;
+ }
+ 
+ static const struct config_field caplist_pm[] = {
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b2f713c759e87..a6cfc82338467 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3551,7 +3551,8 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_sysfs;
+ 	}
+ 
+-	if (!sb_rdonly(sb) && !btrfs_check_rw_degradable(fs_info, NULL)) {
++	if (!sb_rdonly(sb) && fs_info->fs_devices->missing_devices &&
++	    !btrfs_check_rw_degradable(fs_info, NULL)) {
+ 		btrfs_warn(fs_info,
+ 		"writable mount is not allowed due to too many missing devices");
+ 		goto fail_sysfs;
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 9b0814318e726..c71e49782e86d 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -649,7 +649,7 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 len,
+ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 			     struct inode *dst, u64 dst_loff)
+ {
+-	int ret;
++	int ret = 0;
+ 	u64 i, tail_len, chunk_count;
+ 	struct btrfs_root *root_dst = BTRFS_I(dst)->root;
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 539c5db2b22b8..92edac0c7c250 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -2505,7 +2505,9 @@ again:
+ 		else {
+ 			ret = find_dir_range(log, path, dirid, key_type,
+ 					     &range_start, &range_end);
+-			if (ret != 0)
++			if (ret < 0)
++				goto out;
++			else if (ret > 0)
+ 				break;
+ 		}
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 19c780242e127..ecc5641cf19a3 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1134,8 +1134,10 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 	if (device->devid == BTRFS_DEV_REPLACE_DEVID)
+ 		clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state);
+ 
+-	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
++	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) {
++		clear_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state);
+ 		fs_devices->missing_devices--;
++	}
+ 
+ 	btrfs_close_bdev(device);
+ 	if (device->bdev) {
+@@ -2144,8 +2146,11 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 	u64 num_devices;
+ 	int ret = 0;
+ 
+-	mutex_lock(&uuid_mutex);
+-
++	/*
++	 * The device list in fs_devices is accessed without locks (neither
++	 * uuid_mutex nor device_list_mutex) as it won't change on a mounted
++	 * filesystem and another device rm cannot run.
++	 */
+ 	num_devices = btrfs_num_devices(fs_info);
+ 
+ 	ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
+@@ -2189,11 +2194,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 	}
+ 
+-	mutex_unlock(&uuid_mutex);
+ 	ret = btrfs_shrink_device(device, 0);
+ 	if (!ret)
+ 		btrfs_reada_remove_dev(device);
+-	mutex_lock(&uuid_mutex);
+ 	if (ret)
+ 		goto error_undo;
+ 
+@@ -2280,7 +2283,6 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 	}
+ 
+ out:
+-	mutex_unlock(&uuid_mutex);
+ 	return ret;
+ 
+ error_undo:
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index cf2141483b37f..3faf966df1839 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -75,7 +75,8 @@
+ #define SMB_ECHO_INTERVAL_MAX 600
+ #define SMB_ECHO_INTERVAL_DEFAULT 60
+ 
+-/* dns resolution interval in seconds */
++/* dns resolution intervals in seconds */
++#define SMB_DNS_RESOLVE_INTERVAL_MIN     120
+ #define SMB_DNS_RESOLVE_INTERVAL_DEFAULT 600
+ 
+ /* maximum number of PDUs in one compound */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 65d3cf80444bf..14c925565dbef 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -116,7 +116,7 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ 			 * To make sure we don't use the cached entry, retry 1s
+ 			 * after expiry.
+ 			 */
+-			ttl = (expiry - now + 1);
++			ttl = max_t(unsigned long, expiry - now, SMB_DNS_RESOLVE_INTERVAL_MIN) + 1;
+ 	}
+ 	rc = !rc ? -1 : 0;
+ 
+@@ -795,7 +795,6 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ 		 */
+ 	}
+ 
+-	kfree(server->hostname);
+ 	kfree(server);
+ 
+ 	length = atomic_dec_return(&tcpSesAllocCount);
+@@ -1236,6 +1235,9 @@ static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *
+ 	if (!net_eq(cifs_net_ns(server), current->nsproxy->net_ns))
+ 		return 0;
+ 
++	if (strcasecmp(server->hostname, ctx->server_hostname))
++		return 0;
++
+ 	if (!match_address(server, addr,
+ 			   (struct sockaddr *)&ctx->srcaddr))
+ 		return 0;
+@@ -1337,6 +1339,7 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ 	kfree(server->session_key.response);
+ 	server->session_key.response = NULL;
+ 	server->session_key.len = 0;
++	kfree(server->hostname);
+ 
+ 	task = xchg(&server->tsk, NULL);
+ 	if (task)
+@@ -1362,14 +1365,15 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx)
+ 		goto out_err;
+ 	}
+ 
++	tcp_ses->hostname = kstrdup(ctx->server_hostname, GFP_KERNEL);
++	if (!tcp_ses->hostname) {
++		rc = -ENOMEM;
++		goto out_err;
++	}
++
+ 	tcp_ses->ops = ctx->ops;
+ 	tcp_ses->vals = ctx->vals;
+ 	cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+-	tcp_ses->hostname = extract_hostname(ctx->UNC);
+-	if (IS_ERR(tcp_ses->hostname)) {
+-		rc = PTR_ERR(tcp_ses->hostname);
+-		goto out_err_crypto_release;
+-	}
+ 
+ 	tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId);
+ 	tcp_ses->noblockcnt = ctx->rootfs;
+@@ -1498,8 +1502,7 @@ out_err_crypto_release:
+ 
+ out_err:
+ 	if (tcp_ses) {
+-		if (!IS_ERR(tcp_ses->hostname))
+-			kfree(tcp_ses->hostname);
++		kfree(tcp_ses->hostname);
+ 		if (tcp_ses->ssocket)
+ 			sock_release(tcp_ses->ssocket);
+ 		kfree(tcp_ses);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index ab2734159c192..0cdf928b6339d 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -2689,12 +2689,23 @@ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end,
+ 	tcon = tlink_tcon(smbfile->tlink);
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
+ 		server = tcon->ses->server;
+-		if (server->ops->flush)
+-			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+-		else
++		if (server->ops->flush == NULL) {
+ 			rc = -ENOSYS;
++			goto strict_fsync_exit;
++		}
++
++		if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
++			smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
++			if (smbfile) {
++				rc = server->ops->flush(xid, tcon, &smbfile->fid);
++				cifsFileInfo_put(smbfile);
++			} else
++				cifs_dbg(FYI, "ignore fsync for file not open for write\n");
++		} else
++			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+ 	}
+ 
++strict_fsync_exit:
+ 	free_xid(xid);
+ 	return rc;
+ }
+@@ -2706,6 +2717,7 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	struct cifs_tcon *tcon;
+ 	struct TCP_Server_Info *server;
+ 	struct cifsFileInfo *smbfile = file->private_data;
++	struct inode *inode = file_inode(file);
+ 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
+ 
+ 	rc = file_write_and_wait_range(file, start, end);
+@@ -2722,12 +2734,23 @@ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ 	tcon = tlink_tcon(smbfile->tlink);
+ 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
+ 		server = tcon->ses->server;
+-		if (server->ops->flush)
+-			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+-		else
++		if (server->ops->flush == NULL) {
+ 			rc = -ENOSYS;
++			goto fsync_exit;
++		}
++
++		if ((OPEN_FMODE(smbfile->f_flags) & FMODE_WRITE) == 0) {
++			smbfile = find_writable_file(CIFS_I(inode), FIND_WR_ANY);
++			if (smbfile) {
++				rc = server->ops->flush(xid, tcon, &smbfile->fid);
++				cifsFileInfo_put(smbfile);
++			} else
++				cifs_dbg(FYI, "ignore fsync for file not open for write\n");
++		} else
++			rc = server->ops->flush(xid, tcon, &smbfile->fid);
+ 	}
+ 
++fsync_exit:
+ 	free_xid(xid);
+ 	return rc;
+ }
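
Both fsync paths now cope with a descriptor that was opened without
write access by borrowing another writable handle on the same inode
for the flush, and skip the flush with a debug message when none
exists. The selection logic reduced to a standalone sketch (helpers
are invented; the real code also drops the borrowed reference):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct handle { bool writable; };

	static struct handle *find_writable_handle(void) { return NULL; }
	static int flush(struct handle *h) { (void)h; return 0; }

	static int do_fsync(struct handle *h)
	{
		if (h->writable)
			return flush(h);          /* common case */

		h = find_writable_handle();       /* borrow one */
		if (!h) {
			printf("no writable handle, skipping flush\n");
			return 0;
		}
		return flush(h);
	}

	int main(void)
	{
		struct handle ro = { .writable = false };
		return do_fsync(&ro);
	}
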
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index 727c8835b2227..b20c88552d535 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -321,6 +321,7 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ 	new_ctx->nodename = NULL;
+ 	new_ctx->username = NULL;
+ 	new_ctx->password = NULL;
++	new_ctx->server_hostname = NULL;
+ 	new_ctx->domainname = NULL;
+ 	new_ctx->UNC = NULL;
+ 	new_ctx->source = NULL;
+@@ -332,6 +333,7 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ 	DUP_CTX_STR(mount_options);
+ 	DUP_CTX_STR(username);
+ 	DUP_CTX_STR(password);
++	DUP_CTX_STR(server_hostname);
+ 	DUP_CTX_STR(UNC);
+ 	DUP_CTX_STR(source);
+ 	DUP_CTX_STR(domainname);
+@@ -470,6 +472,12 @@ smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx)
+ 	if (!pos)
+ 		return -EINVAL;
+ 
++	/* record the server hostname */
++	kfree(ctx->server_hostname);
++	ctx->server_hostname = kstrndup(devname + 2, pos - devname - 2, GFP_KERNEL);
++	if (!ctx->server_hostname)
++		return -ENOMEM;
++
+ 	/* skip past delimiter */
+ 	++pos;
+ 
+@@ -1510,6 +1518,8 @@ smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx)
+ 	ctx->username = NULL;
+ 	kfree_sensitive(ctx->password);
+ 	ctx->password = NULL;
++	kfree(ctx->server_hostname);
++	ctx->server_hostname = NULL;
+ 	kfree(ctx->UNC);
+ 	ctx->UNC = NULL;
+ 	kfree(ctx->source);
+diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
+index b6243972edf3b..ac4b631d8ce3d 100644
+--- a/fs/cifs/fs_context.h
++++ b/fs/cifs/fs_context.h
+@@ -169,6 +169,7 @@ struct smb3_fs_context {
+ 	char *password;
+ 	char *domainname;
+ 	char *source;
++	char *server_hostname;
+ 	char *UNC;
+ 	char *nodename;
+ 	char *iocharset;  /* local code page for mapping to and from Unicode */
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 3fa965eb3336d..cb25ef0cdf1f3 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -549,8 +549,9 @@ int __init fscrypt_init_keyring(void);
+ struct fscrypt_mode {
+ 	const char *friendly_name;
+ 	const char *cipher_str;
+-	int keysize;
+-	int ivsize;
++	int keysize;		/* key size in bytes */
++	int security_strength;	/* security strength in bytes */
++	int ivsize;		/* IV size in bytes */
+ 	int logged_impl_name;
+ 	enum blk_crypto_mode_num blk_crypto_mode;
+ };
+diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
+index e0ec210555053..7607d18b35fc0 100644
+--- a/fs/crypto/hkdf.c
++++ b/fs/crypto/hkdf.c
+@@ -16,9 +16,14 @@
+ 
+ /*
+  * HKDF supports any unkeyed cryptographic hash algorithm, but fscrypt uses
+- * SHA-512 because it is reasonably secure and efficient; and since it produces
+- * a 64-byte digest, deriving an AES-256-XTS key preserves all 64 bytes of
+- * entropy from the master key and requires only one iteration of HKDF-Expand.
++ * SHA-512 because it is well-established, secure, and reasonably efficient.
++ *
++ * HKDF-SHA256 was also considered, as its 256-bit security strength would be
++ * sufficient here.  A 512-bit security strength is "nice to have", though.
++ * Also, on 64-bit CPUs, SHA-512 is usually just as fast as SHA-256.  In the
++ * common case of deriving an AES-256-XTS key (512 bits), that can result in
++ * HKDF-SHA512 being much faster than HKDF-SHA256, as the longer digest size of
++ * SHA-512 causes HKDF-Expand to only need to do one iteration rather than two.
+  */
+ #define HKDF_HMAC_ALG		"hmac(sha512)"
+ #define HKDF_HASHLEN		SHA512_DIGEST_SIZE
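
The iteration count discussed above is just ceil(L / HashLen), the
number of HMAC invocations HKDF-Expand performs for L bytes of output:
one with SHA-512 for a 64-byte AES-256-XTS key, but two with SHA-256.
A standalone check:

	#include <stdio.h>

	static unsigned iters(unsigned okm_len, unsigned hash_len)
	{
		return (okm_len + hash_len - 1) / hash_len;  /* ceil */
	}

	int main(void)
	{
		/* prints: SHA-512: 1, SHA-256: 2 */
		printf("SHA-512: %u, SHA-256: %u\n",
		       iters(64, 64), iters(64, 32));
		return 0;
	}
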
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index bca9c6658a7c5..89cd533a88bff 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -19,6 +19,7 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-256-XTS",
+ 		.cipher_str = "xts(aes)",
+ 		.keysize = 64,
++		.security_strength = 32,
+ 		.ivsize = 16,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ 	},
+@@ -26,12 +27,14 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-256-CTS-CBC",
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 32,
++		.security_strength = 32,
+ 		.ivsize = 16,
+ 	},
+ 	[FSCRYPT_MODE_AES_128_CBC] = {
+ 		.friendly_name = "AES-128-CBC-ESSIV",
+ 		.cipher_str = "essiv(cbc(aes),sha256)",
+ 		.keysize = 16,
++		.security_strength = 16,
+ 		.ivsize = 16,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+ 	},
+@@ -39,12 +42,14 @@ struct fscrypt_mode fscrypt_modes[] = {
+ 		.friendly_name = "AES-128-CTS-CBC",
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 16,
++		.security_strength = 16,
+ 		.ivsize = 16,
+ 	},
+ 	[FSCRYPT_MODE_ADIANTUM] = {
+ 		.friendly_name = "Adiantum",
+ 		.cipher_str = "adiantum(xchacha12,aes)",
+ 		.keysize = 32,
++		.security_strength = 32,
+ 		.ivsize = 32,
+ 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
+ 	},
+@@ -357,6 +362,45 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
+ 	return 0;
+ }
+ 
++/*
++ * Check whether the size of the given master key (@mk) is appropriate for the
++ * encryption settings which a particular file will use (@ci).
++ *
++ * If the file uses a v1 encryption policy, then the master key must be at least
++ * as long as the derived key, as this is a requirement of the v1 KDF.
++ *
++ * Otherwise, the KDF can accept any size key, so we enforce a slightly looser
++ * requirement: we require that the size of the master key be at least the
++ * maximum security strength of any algorithm whose key will be derived from it
++ * (but in practice we only need to consider @ci->ci_mode, since any other
++ * possible subkeys such as DIRHASH and INODE_HASH will never increase the
++ * required key size over @ci->ci_mode).  This allows AES-256-XTS keys to be
++ * derived from a 256-bit master key, which is cryptographically sufficient,
++ * rather than requiring a 512-bit master key which is unnecessarily long.  (We
++ * still allow 512-bit master keys if the user chooses to use them, though.)
++ */
++static bool fscrypt_valid_master_key_size(const struct fscrypt_master_key *mk,
++					  const struct fscrypt_info *ci)
++{
++	unsigned int min_keysize;
++
++	if (ci->ci_policy.version == FSCRYPT_POLICY_V1)
++		min_keysize = ci->ci_mode->keysize;
++	else
++		min_keysize = ci->ci_mode->security_strength;
++
++	if (mk->mk_secret.size < min_keysize) {
++		fscrypt_warn(NULL,
++			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
++			     master_key_spec_type(&mk->mk_spec),
++			     master_key_spec_len(&mk->mk_spec),
++			     (u8 *)&mk->mk_spec.u,
++			     mk->mk_secret.size, min_keysize);
++		return false;
++	}
++	return true;
++}
++
+ /*
+  * Find the master key, then set up the inode's actual encryption key.
+  *
+@@ -422,18 +466,7 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
+ 		goto out_release_key;
+ 	}
+ 
+-	/*
+-	 * Require that the master key be at least as long as the derived key.
+-	 * Otherwise, the derived key cannot possibly contain as much entropy as
+-	 * that required by the encryption mode it will be used for.  For v1
+-	 * policies it's also required for the KDF to work at all.
+-	 */
+-	if (mk->mk_secret.size < ci->ci_mode->keysize) {
+-		fscrypt_warn(NULL,
+-			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
+-			     master_key_spec_type(&mk_spec),
+-			     master_key_spec_len(&mk_spec), (u8 *)&mk_spec.u,
+-			     mk->mk_secret.size, ci->ci_mode->keysize);
++	if (!fscrypt_valid_master_key_size(mk, ci)) {
+ 		err = -ENOKEY;
+ 		goto out_release_key;
+ 	}
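
In concrete numbers: AES-256-XTS has keysize 64 but security_strength
32, so after this change a 32-byte master key is accepted under a v2
policy while a v1 policy still demands the full 64 bytes. A standalone
sketch of just that branch, with the values hard-coded from the table
above:

	#include <stdbool.h>
	#include <stdio.h>

	static bool key_size_ok(unsigned mk_size, bool v1_policy)
	{
		/* 64 = mode keysize, 32 = mode security strength */
		unsigned min_keysize = v1_policy ? 64 : 32;

		return mk_size >= min_keysize;
	}

	int main(void)
	{
		/* prints: 32-byte key, v1: 0  v2: 1 */
		printf("32-byte key, v1: %d  v2: %d\n",
		       key_size_ok(32, true), key_size_ok(32, false));
		return 0;
	}
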
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index a5bc4b1b7813e..ad3f31380e6b2 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -233,7 +233,6 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq, u8 *out)
+ 		erofs_err(rq->sb, "failed to decompress %d in[%u, %u] out[%u]",
+ 			  ret, rq->inputsize, inputmargin, rq->outputsize);
+ 
+-		WARN_ON(1);
+ 		print_hex_dump(KERN_DEBUG, "[ in]: ", DUMP_PREFIX_OFFSET,
+ 			       16, 1, src + inputmargin, rq->inputsize, true);
+ 		print_hex_dump(KERN_DEBUG, "[out]: ", DUMP_PREFIX_OFFSET,
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index cb4d0889eca95..58840842e8647 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -375,8 +375,8 @@ static bool z_erofs_try_inplace_io(struct z_erofs_collector *clt,
+ 
+ /* callers must be with collection lock held */
+ static int z_erofs_attach_page(struct z_erofs_collector *clt,
+-			       struct page *page,
+-			       enum z_erofs_page_type type)
++			       struct page *page, enum z_erofs_page_type type,
++			       bool pvec_safereuse)
+ {
+ 	int ret;
+ 
+@@ -386,9 +386,9 @@ static int z_erofs_attach_page(struct z_erofs_collector *clt,
+ 	    z_erofs_try_inplace_io(clt, page))
+ 		return 0;
+ 
+-	ret = z_erofs_pagevec_enqueue(&clt->vector, page, type);
++	ret = z_erofs_pagevec_enqueue(&clt->vector, page, type,
++				      pvec_safereuse);
+ 	clt->cl->vcnt += (unsigned int)ret;
+-
+ 	return ret ? 0 : -EAGAIN;
+ }
+ 
+@@ -731,7 +731,8 @@ hitted:
+ 		tight &= (clt->mode >= COLLECT_PRIMARY_FOLLOWED);
+ 
+ retry:
+-	err = z_erofs_attach_page(clt, page, page_type);
++	err = z_erofs_attach_page(clt, page, page_type,
++				  clt->mode >= COLLECT_PRIMARY_FOLLOWED);
+ 	/* should allocate an additional short-lived page for pagevec */
+ 	if (err == -EAGAIN) {
+ 		struct page *const newpage =
+@@ -739,7 +740,7 @@ retry:
+ 
+ 		set_page_private(newpage, Z_EROFS_SHORTLIVED_PAGE);
+ 		err = z_erofs_attach_page(clt, newpage,
+-					  Z_EROFS_PAGE_TYPE_EXCLUSIVE);
++					  Z_EROFS_PAGE_TYPE_EXCLUSIVE, true);
+ 		if (!err)
+ 			goto retry;
+ 	}
+diff --git a/fs/erofs/zpvec.h b/fs/erofs/zpvec.h
+index dfd7fe0503bb1..b05464f4a8083 100644
+--- a/fs/erofs/zpvec.h
++++ b/fs/erofs/zpvec.h
+@@ -106,11 +106,18 @@ static inline void z_erofs_pagevec_ctor_init(struct z_erofs_pagevec_ctor *ctor,
+ 
+ static inline bool z_erofs_pagevec_enqueue(struct z_erofs_pagevec_ctor *ctor,
+ 					   struct page *page,
+-					   enum z_erofs_page_type type)
++					   enum z_erofs_page_type type,
++					   bool pvec_safereuse)
+ {
+-	if (!ctor->next && type)
+-		if (ctor->index + 1 == ctor->nr)
++	if (!ctor->next) {
++		/* some pages cannot be reused as pvec safely without I/O */
++		if (type == Z_EROFS_PAGE_TYPE_EXCLUSIVE && !pvec_safereuse)
++			type = Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED;
++
++		if (type != Z_EROFS_PAGE_TYPE_EXCLUSIVE &&
++		    ctor->index + 1 == ctor->nr)
+ 			return false;
++	}
+ 
+ 	if (ctor->index >= ctor->nr)
+ 		z_erofs_pagevec_ctor_pagedown(ctor, false);
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index ca37d43443612..1c7aa1ea4724c 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -604,7 +604,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+ 	exfat_save_attr(inode, info->attr);
+ 
+ 	inode->i_blocks = ((i_size_read(inode) + (sbi->cluster_size - 1)) &
+-		~(sbi->cluster_size - 1)) >> inode->i_blkbits;
++		~((loff_t)sbi->cluster_size - 1)) >> inode->i_blkbits;
+ 	inode->i_mtime = info->mtime;
+ 	inode->i_ctime = info->mtime;
+ 	ei->i_crtime = info->crtime;
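
Without the (loff_t) cast, ~(sbi->cluster_size - 1) is a 32-bit
unsigned mask; when ANDed with the 64-bit size it zero-extends and
clears the upper 32 bits, corrupting i_blocks for files past 4 GiB.
A standalone demonstration with a 64 KiB cluster:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int cluster = 64 * 1024;
		int64_t size = 5368709120LL + 123;  /* ~5 GiB */

		int64_t bad  = (size + (cluster - 1)) & ~(cluster - 1);
		int64_t good = (size + (cluster - 1)) &
			       ~((int64_t)cluster - 1);

		/* prints: bad: 1073807360  good: 5368774656 */
		printf("bad: %" PRId64 "  good: %" PRId64 "\n", bad, good);
		return 0;
	}
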
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index b1933e3513d60..576677fd9b64a 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4971,36 +4971,6 @@ int ext4_get_es_cache(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 	return ext4_fill_es_cache_info(inode, start_blk, len_blks, fieinfo);
+ }
+ 
+-/*
+- * ext4_access_path:
+- * Function to access the path buffer for marking it dirty.
+- * It also checks if there are sufficient credits left in the journal handle
+- * to update path.
+- */
+-static int
+-ext4_access_path(handle_t *handle, struct inode *inode,
+-		struct ext4_ext_path *path)
+-{
+-	int credits, err;
+-
+-	if (!ext4_handle_valid(handle))
+-		return 0;
+-
+-	/*
+-	 * Check if need to extend journal credits
+-	 * 3 for leaf, sb, and inode plus 2 (bmap and group
+-	 * descriptor) for each block group; assume two block
+-	 * groups
+-	 */
+-	credits = ext4_writepage_trans_blocks(inode);
+-	err = ext4_datasem_ensure_credits(handle, inode, 7, credits, 0);
+-	if (err < 0)
+-		return err;
+-
+-	err = ext4_ext_get_access(handle, inode, path);
+-	return err;
+-}
+-
+ /*
+  * ext4_ext_shift_path_extents:
+  * Shift the extents of a path structure lying between path[depth].p_ext
+@@ -5015,6 +4985,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 	int depth, err = 0;
+ 	struct ext4_extent *ex_start, *ex_last;
+ 	bool update = false;
++	int credits, restart_credits;
+ 	depth = path->p_depth;
+ 
+ 	while (depth >= 0) {
+@@ -5024,13 +4995,26 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 				return -EFSCORRUPTED;
+ 
+ 			ex_last = EXT_LAST_EXTENT(path[depth].p_hdr);
++			/* leaf + sb + inode */
++			credits = 3;
++			if (ex_start == EXT_FIRST_EXTENT(path[depth].p_hdr)) {
++				update = true;
++				/* extent tree + sb + inode */
++				credits = depth + 2;
++			}
+ 
+-			err = ext4_access_path(handle, inode, path + depth);
+-			if (err)
++			restart_credits = ext4_writepage_trans_blocks(inode);
++			err = ext4_datasem_ensure_credits(handle, inode, credits,
++					restart_credits, 0);
++			if (err) {
++				if (err > 0)
++					err = -EAGAIN;
+ 				goto out;
++			}
+ 
+-			if (ex_start == EXT_FIRST_EXTENT(path[depth].p_hdr))
+-				update = true;
++			err = ext4_ext_get_access(handle, inode, path + depth);
++			if (err)
++				goto out;
+ 
+ 			while (ex_start <= ex_last) {
+ 				if (SHIFT == SHIFT_LEFT) {
+@@ -5061,7 +5045,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 		}
+ 
+ 		/* Update index too */
+-		err = ext4_access_path(handle, inode, path + depth);
++		err = ext4_ext_get_access(handle, inode, path + depth);
+ 		if (err)
+ 			goto out;
+ 
+@@ -5100,6 +5084,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 	int ret = 0, depth;
+ 	struct ext4_extent *extent;
+ 	ext4_lblk_t stop, *iterator, ex_start, ex_end;
++	ext4_lblk_t tmp = EXT_MAX_BLOCKS;
+ 
+ 	/* Let path point to the last extent */
+ 	path = ext4_find_extent(inode, EXT_MAX_BLOCKS - 1, NULL,
+@@ -5153,11 +5138,15 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 	 * till we reach stop. In case of right shift, iterator points to stop
+ 	 * and it is decreased till we reach start.
+ 	 */
++again:
+ 	if (SHIFT == SHIFT_LEFT)
+ 		iterator = &start;
+ 	else
+ 		iterator = &stop;
+ 
++	if (tmp != EXT_MAX_BLOCKS)
++		*iterator = tmp;
++
+ 	/*
+ 	 * Its safe to start updating extents.  Start and stop are unsigned, so
+ 	 * in case of right shift if extent with 0 block is reached, iterator
+@@ -5186,6 +5175,7 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 			}
+ 		}
+ 
++		tmp = *iterator;
+ 		if (SHIFT == SHIFT_LEFT) {
+ 			extent = EXT_LAST_EXTENT(path[depth].p_hdr);
+ 			*iterator = le32_to_cpu(extent->ee_block) +
+@@ -5204,6 +5194,9 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ 		}
+ 		ret = ext4_ext_shift_path_extents(path, shift, inode,
+ 				handle, SHIFT);
++		/* iterator can be NULL which means we should break */
++		if (ret == -EAGAIN)
++			goto again;
+ 		if (ret)
+ 			break;
+ 	}
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 59c25a95050af..a1f9adf8f968e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3427,9 +3427,9 @@ static int ext4_run_li_request(struct ext4_li_request *elr)
+ 	struct super_block *sb = elr->lr_super;
+ 	ext4_group_t ngroups = EXT4_SB(sb)->s_groups_count;
+ 	ext4_group_t group = elr->lr_next_group;
+-	unsigned long timeout = 0;
+ 	unsigned int prefetch_ios = 0;
+ 	int ret = 0;
++	u64 start_time;
+ 
+ 	if (elr->lr_mode == EXT4_LI_MODE_PREFETCH_BBITMAP) {
+ 		elr->lr_next_group = ext4_mb_prefetch(sb, group,
+@@ -3466,14 +3466,13 @@ static int ext4_run_li_request(struct ext4_li_request *elr)
+ 		ret = 1;
+ 
+ 	if (!ret) {
+-		timeout = jiffies;
++		start_time = ktime_get_real_ns();
+ 		ret = ext4_init_inode_table(sb, group,
+ 					    elr->lr_timeout ? 0 : 1);
+ 		trace_ext4_lazy_itable_init(sb, group);
+ 		if (elr->lr_timeout == 0) {
+-			timeout = (jiffies - timeout) *
+-				EXT4_SB(elr->lr_super)->s_li_wait_mult;
+-			elr->lr_timeout = timeout;
++			elr->lr_timeout = nsecs_to_jiffies((ktime_get_real_ns() - start_time) *
++				EXT4_SB(elr->lr_super)->s_li_wait_mult);
+ 		}
+ 		elr->lr_next_sched = jiffies + elr->lr_timeout;
+ 		elr->lr_next_group = group + 1;
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index b8b3f1160afa6..0985d9bf08ed7 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1476,6 +1476,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
+ 	if (cluster_may_compress(cc)) {
+ 		err = f2fs_compress_pages(cc);
+ 		if (err == -EAGAIN) {
++			add_compr_block_stat(cc->inode, cc->cluster_size);
+ 			goto write;
+ 		} else if (err) {
+ 			f2fs_put_rpages_wbc(cc, wbc, true, 1);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 9141147b5bb00..1213f15ffd68c 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -527,7 +527,7 @@ make_now:
+ 		inode->i_op = &f2fs_dir_inode_operations;
+ 		inode->i_fop = &f2fs_dir_operations;
+ 		inode->i_mapping->a_ops = &f2fs_dblock_aops;
+-		inode_nohighmem(inode);
++		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ 	} else if (S_ISLNK(inode->i_mode)) {
+ 		if (file_is_encrypt(inode))
+ 			inode->i_op = &f2fs_encrypted_symlink_inode_operations;
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 9c528e583c9d5..ae0838001480a 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -757,7 +757,7 @@ static int f2fs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
+ 	inode->i_op = &f2fs_dir_inode_operations;
+ 	inode->i_fop = &f2fs_dir_operations;
+ 	inode->i_mapping->a_ops = &f2fs_dblock_aops;
+-	inode_nohighmem(inode);
++	mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+ 
+ 	set_inode_flag(inode, FI_INC_LINK);
+ 	f2fs_lock_op(sbi);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 2b093a209ae40..187e345f52980 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -4271,6 +4271,8 @@ free_node_inode:
+ free_stats:
+ 	f2fs_destroy_stats(sbi);
+ free_nm:
++	/* stop discard thread before destroying node manager */
++	f2fs_stop_discard_thread(sbi);
+ 	f2fs_destroy_node_manager(sbi);
+ free_sm:
+ 	f2fs_destroy_segment_manager(sbi);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index dde341a6388a1..5a1f142bdb484 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -847,6 +847,12 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
+ 
+ 	replace_page_cache_page(oldpage, newpage);
+ 
++	/*
++	 * Release while we have extra ref on stolen page.  Otherwise
++	 * anon_pipe_buf_release() might think the page can be reused.
++	 */
++	pipe_buf_release(cs->pipe, buf);
++
+ 	get_page(newpage);
+ 
+ 	if (!(buf->flags & PIPE_BUF_FLAG_LRU))
+@@ -2031,8 +2037,12 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 
+ 	pipe_lock(pipe);
+ out_free:
+-	for (idx = 0; idx < nbuf; idx++)
+-		pipe_buf_release(pipe, &bufs[idx]);
++	for (idx = 0; idx < nbuf; idx++) {
++		struct pipe_buffer *buf = &bufs[idx];
++
++		if (buf->ops)
++			pipe_buf_release(pipe, buf);
++	}
+ 	pipe_unlock(pipe);
+ 
+ 	kvfree(bufs);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 1f3902ecddedd..b17b6350c0317 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1894,10 +1894,10 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
+ 	do {
+ 		rhashtable_walk_start(&iter);
+ 
+-		while ((gl = rhashtable_walk_next(&iter)) && !IS_ERR(gl))
+-			if (gl->gl_name.ln_sbd == sdp &&
+-			    lockref_get_not_dead(&gl->gl_lockref))
++		while ((gl = rhashtable_walk_next(&iter)) && !IS_ERR(gl)) {
++			if (gl->gl_name.ln_sbd == sdp)
+ 				examiner(gl);
++		}
+ 
+ 		rhashtable_walk_stop(&iter);
+ 	} while (cond_resched(), gl == ERR_PTR(-EAGAIN));
+@@ -1920,7 +1920,7 @@ bool gfs2_queue_delete_work(struct gfs2_glock *gl, unsigned long delay)
+ 
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl)
+ {
+-	if (cancel_delayed_work_sync(&gl->gl_delete)) {
++	if (cancel_delayed_work(&gl->gl_delete)) {
+ 		clear_bit(GLF_PENDING_DELETE, &gl->gl_flags);
+ 		gfs2_glock_put(gl);
+ 	}
+@@ -1939,7 +1939,6 @@ static void flush_delete_work(struct gfs2_glock *gl)
+ 					   &gl->gl_delete, 0);
+ 		}
+ 	}
+-	gfs2_glock_queue_work(gl, 0);
+ }
+ 
+ void gfs2_flush_delete_work(struct gfs2_sbd *sdp)
+@@ -1956,10 +1955,10 @@ void gfs2_flush_delete_work(struct gfs2_sbd *sdp)
+ 
+ static void thaw_glock(struct gfs2_glock *gl)
+ {
+-	if (!test_and_clear_bit(GLF_FROZEN, &gl->gl_flags)) {
+-		gfs2_glock_put(gl);
++	if (!test_and_clear_bit(GLF_FROZEN, &gl->gl_flags))
++		return;
++	if (!lockref_get_not_dead(&gl->gl_lockref))
+ 		return;
+-	}
+ 	set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
+ 	gfs2_glock_queue_work(gl, 0);
+ }
+@@ -1975,9 +1974,12 @@ static void clear_glock(struct gfs2_glock *gl)
+ 	gfs2_glock_remove_from_lru(gl);
+ 
+ 	spin_lock(&gl->gl_lockref.lock);
+-	if (gl->gl_state != LM_ST_UNLOCKED)
+-		handle_callback(gl, LM_ST_UNLOCKED, 0, false);
+-	__gfs2_glock_queue_work(gl, 0);
++	if (!__lockref_is_dead(&gl->gl_lockref)) {
++		gl->gl_lockref.count++;
++		if (gl->gl_state != LM_ST_UNLOCKED)
++			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
++		__gfs2_glock_queue_work(gl, 0);
++	}
+ 	spin_unlock(&gl->gl_lockref.lock);
+ }
+ 
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index cb5d84f6b7693..0890d85ba2855 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -401,19 +401,22 @@ static inline unsigned int io_get_work_hash(struct io_wq_work *work)
+ 	return work->flags >> IO_WQ_HASH_SHIFT;
+ }
+ 
+-static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
++static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
+ {
+ 	struct io_wq *wq = wqe->wq;
++	bool ret = false;
+ 
+-	spin_lock(&wq->hash->wait.lock);
++	spin_lock_irq(&wq->hash->wait.lock);
+ 	if (list_empty(&wqe->wait.entry)) {
+ 		__add_wait_queue(&wq->hash->wait, &wqe->wait);
+ 		if (!test_bit(hash, &wq->hash->map)) {
+ 			__set_current_state(TASK_RUNNING);
+ 			list_del_init(&wqe->wait.entry);
++			ret = true;
+ 		}
+ 	}
+-	spin_unlock(&wq->hash->wait.lock);
++	spin_unlock_irq(&wq->hash->wait.lock);
++	return ret;
+ }
+ 
+ /*
+@@ -436,8 +439,7 @@ static bool io_worker_can_run_work(struct io_worker *worker,
+ }
+ 
+ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
+-					   struct io_worker *worker,
+-					   bool *stalled)
++					   struct io_worker *worker)
+ 	__must_hold(wqe->lock)
+ {
+ 	struct io_wq_work_node *node, *prev;
+@@ -475,10 +477,21 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
+ 	}
+ 
+ 	if (stall_hash != -1U) {
++		bool unstalled;
++
++		/*
++		 * Set this before dropping the lock to avoid racing with new
++		 * work being added and clearing the stalled bit.
++		 */
++		wqe->flags |= IO_WQE_FLAG_STALLED;
+ 		raw_spin_unlock(&wqe->lock);
+-		io_wait_on_hash(wqe, stall_hash);
++		unstalled = io_wait_on_hash(wqe, stall_hash);
+ 		raw_spin_lock(&wqe->lock);
+-		*stalled = true;
++		if (unstalled) {
++			wqe->flags &= ~IO_WQE_FLAG_STALLED;
++			if (wq_has_sleeper(&wqe->wq->hash->wait))
++				wake_up(&wqe->wq->hash->wait);
++		}
+ 	}
+ 
+ 	return NULL;
+@@ -518,7 +531,6 @@ static void io_worker_handle_work(struct io_worker *worker)
+ 
+ 	do {
+ 		struct io_wq_work *work;
+-		bool stalled;
+ get_next:
+ 		/*
+ 		 * If we got some work, mark us as busy. If we didn't, but
+@@ -527,12 +539,9 @@ get_next:
+ 		 * can't make progress, any work completion or insertion will
+ 		 * clear the stalled flag.
+ 		 */
+-		stalled = false;
+-		work = io_get_next_work(wqe, worker, &stalled);
++		work = io_get_next_work(wqe, worker);
+ 		if (work)
+ 			__io_worker_busy(wqe, worker, work);
+-		else if (stalled)
+-			wqe->flags |= IO_WQE_FLAG_STALLED;
+ 
+ 		raw_spin_unlock_irq(&wqe->lock);
+ 		if (!work)
+@@ -563,11 +572,14 @@ get_next:
+ 				io_wqe_enqueue(wqe, linked);
+ 
+ 			if (hash != -1U && !next_hashed) {
++				/* serialize hash clear with wake_up() */
++				spin_lock_irq(&wq->hash->wait.lock);
+ 				clear_bit(hash, &wq->hash->map);
++				wqe->flags &= ~IO_WQE_FLAG_STALLED;
++				spin_unlock_irq(&wq->hash->wait.lock);
+ 				if (wq_has_sleeper(&wq->hash->wait))
+ 					wake_up(&wq->hash->wait);
+ 				raw_spin_lock_irq(&wqe->lock);
+-				wqe->flags &= ~IO_WQE_FLAG_STALLED;
+ 				/* skip unnecessary unlock-lock wqe->lock */
+ 				if (!work)
+ 					goto get_next;
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index 5d7d7170c03c0..aa4ff7bcaff23 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -81,14 +81,14 @@ int jfs_mount(struct super_block *sb)
+ 	 * (initialize mount inode from the superblock)
+ 	 */
+ 	if ((rc = chkSuper(sb))) {
+-		goto errout20;
++		goto out;
+ 	}
+ 
+ 	ipaimap = diReadSpecial(sb, AGGREGATE_I, 0);
+ 	if (ipaimap == NULL) {
+ 		jfs_err("jfs_mount: Failed to read AGGREGATE_I");
+ 		rc = -EIO;
+-		goto errout20;
++		goto out;
+ 	}
+ 	sbi->ipaimap = ipaimap;
+ 
+@@ -99,7 +99,7 @@ int jfs_mount(struct super_block *sb)
+ 	 */
+ 	if ((rc = diMount(ipaimap))) {
+ 		jfs_err("jfs_mount: diMount(ipaimap) failed w/rc = %d", rc);
+-		goto errout21;
++		goto err_ipaimap;
+ 	}
+ 
+ 	/*
+@@ -108,7 +108,7 @@ int jfs_mount(struct super_block *sb)
+ 	ipbmap = diReadSpecial(sb, BMAP_I, 0);
+ 	if (ipbmap == NULL) {
+ 		rc = -EIO;
+-		goto errout22;
++		goto err_umount_ipaimap;
+ 	}
+ 
+ 	jfs_info("jfs_mount: ipbmap:0x%p", ipbmap);
+@@ -120,7 +120,7 @@ int jfs_mount(struct super_block *sb)
+ 	 */
+ 	if ((rc = dbMount(ipbmap))) {
+ 		jfs_err("jfs_mount: dbMount failed w/rc = %d", rc);
+-		goto errout22;
++		goto err_ipbmap;
+ 	}
+ 
+ 	/*
+@@ -139,7 +139,7 @@ int jfs_mount(struct super_block *sb)
+ 		if (!ipaimap2) {
+ 			jfs_err("jfs_mount: Failed to read AGGREGATE_I");
+ 			rc = -EIO;
+-			goto errout35;
++			goto err_umount_ipbmap;
+ 		}
+ 		sbi->ipaimap2 = ipaimap2;
+ 
+@@ -151,7 +151,7 @@ int jfs_mount(struct super_block *sb)
+ 		if ((rc = diMount(ipaimap2))) {
+ 			jfs_err("jfs_mount: diMount(ipaimap2) failed, rc = %d",
+ 				rc);
+-			goto errout35;
++			goto err_ipaimap2;
+ 		}
+ 	} else
+ 		/* Secondary aggregate inode table is not valid */
+@@ -168,7 +168,7 @@ int jfs_mount(struct super_block *sb)
+ 		jfs_err("jfs_mount: Failed to read FILESYSTEM_I");
+ 		/* open fileset secondary inode allocation map */
+ 		rc = -EIO;
+-		goto errout40;
++		goto err_umount_ipaimap2;
+ 	}
+ 	jfs_info("jfs_mount: ipimap:0x%p", ipimap);
+ 
+@@ -178,41 +178,34 @@ int jfs_mount(struct super_block *sb)
+ 	/* initialize fileset inode allocation map */
+ 	if ((rc = diMount(ipimap))) {
+ 		jfs_err("jfs_mount: diMount failed w/rc = %d", rc);
+-		goto errout41;
++		goto err_ipimap;
+ 	}
+ 
+-	goto out;
++	return rc;
+ 
+ 	/*
+ 	 *	unwind on error
+ 	 */
+-      errout41:		/* close fileset inode allocation map inode */
++err_ipimap:
++	/* close fileset inode allocation map inode */
+ 	diFreeSpecial(ipimap);
+-
+-      errout40:		/* fileset closed */
+-
++err_umount_ipaimap2:
+ 	/* close secondary aggregate inode allocation map */
+-	if (ipaimap2) {
++	if (ipaimap2)
+ 		diUnmount(ipaimap2, 1);
++err_ipaimap2:
++	/* close aggregate inodes */
++	if (ipaimap2)
+ 		diFreeSpecial(ipaimap2);
+-	}
+-
+-      errout35:
+-
+-	/* close aggregate block allocation map */
++err_umount_ipbmap:	/* close aggregate block allocation map */
+ 	dbUnmount(ipbmap, 1);
++err_ipbmap:		/* close aggregate inodes */
+ 	diFreeSpecial(ipbmap);
+-
+-      errout22:		/* close aggregate inode allocation map */
+-
++err_umount_ipaimap:	/* close aggregate inode allocation map */
+ 	diUnmount(ipaimap, 1);
+-
+-      errout21:		/* close aggregate inodes */
++err_ipaimap:		/* close aggregate inodes */
+ 	diFreeSpecial(ipaimap);
+-      errout20:		/* aggregate closed */
+-
+-      out:
+-
++out:
+ 	if (rc)
+ 		jfs_err("Mount JFS Failure: %d", rc);
+ 
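
The relabelling above converts jfs_mount()'s numeric errout2x labels
into the usual kernel unwind idiom: each failure jumps to a
descriptively named label that releases, in reverse order, exactly
what had been acquired. The shape in miniature (standalone, with
invented resources):

	#include <stdio.h>
	#include <stdlib.h>

	static int setup(void)
	{
		char *a, *b;

		a = malloc(16);
		if (!a)
			goto out;
		b = malloc(16);
		if (!b)
			goto err_free_a;

		printf("both resources acquired\n");
		free(b);
		free(a);
		return 0;

	err_free_a:
		free(a);
	out:
		return -1;
	}

	int main(void)
	{
		return setup() ? 1 : 0;
	}
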
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 1a6d2867fba4f..5b68c44848caf 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1269,13 +1269,12 @@ static bool nfs_verifier_is_delegated(struct dentry *dentry)
+ static void nfs_set_verifier_locked(struct dentry *dentry, unsigned long verf)
+ {
+ 	struct inode *inode = d_inode(dentry);
++	struct inode *dir = d_inode(dentry->d_parent);
+ 
+-	if (!nfs_verifier_is_delegated(dentry) &&
+-	    !nfs_verify_change_attribute(d_inode(dentry->d_parent), verf))
+-		goto out;
++	if (!nfs_verify_change_attribute(dir, verf))
++		return;
+ 	if (inode && NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
+ 		nfs_set_verifier_delegated(&verf);
+-out:
+ 	dentry->d_time = verf;
+ }
+ 
+@@ -1413,7 +1412,7 @@ out_force:
+ static void nfs_mark_dir_for_revalidate(struct inode *inode)
+ {
+ 	spin_lock(&inode->i_lock);
+-	nfs_set_cache_invalid(inode, NFS_INO_REVAL_PAGECACHE);
++	nfs_set_cache_invalid(inode, NFS_INO_INVALID_CHANGE);
+ 	spin_unlock(&inode->i_lock);
+ }
+ 
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 2e894fec036b0..3c0335c15a730 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -620,7 +620,7 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data)
+ 		nfs_unlock_and_release_request(req);
+ 	}
+ 
+-	if (atomic_dec_and_test(&cinfo.mds->rpcs_out))
++	if (nfs_commit_end(cinfo.mds))
+ 		nfs_direct_write_complete(dreq);
+ }
+ 
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index c9b61b818ec11..bfa7202ca7be1 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -378,10 +378,10 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 		goto noconnect;
+ 
+ 	ds = mirror->mirror_ds->ds;
++	if (READ_ONCE(ds->ds_clp))
++		goto out;
+ 	/* matching smp_wmb() in _nfs4_pnfs_v3/4_ds_connect */
+ 	smp_rmb();
+-	if (ds->ds_clp)
+-		goto out;
+ 
+ 	/* FIXME: For now we assume the server sent only one version of NFS
+ 	 * to use for the DS.
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 853213b3a2095..f9d3ad3acf114 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -210,10 +210,15 @@ void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ 		flags &= ~NFS_INO_INVALID_XATTR;
+ 	if (flags & NFS_INO_INVALID_DATA)
+ 		nfs_fscache_invalidate(inode);
+-	if (inode->i_mapping->nrpages == 0)
+-		flags &= ~(NFS_INO_INVALID_DATA|NFS_INO_DATA_INVAL_DEFER);
+ 	flags &= ~(NFS_INO_REVAL_PAGECACHE | NFS_INO_REVAL_FORCED);
++
+ 	nfsi->cache_validity |= flags;
++
++	if (inode->i_mapping->nrpages == 0)
++		nfsi->cache_validity &= ~(NFS_INO_INVALID_DATA |
++					  NFS_INO_DATA_INVAL_DEFER);
++	else if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
++		nfsi->cache_validity &= ~NFS_INO_DATA_INVAL_DEFER;
+ }
+ EXPORT_SYMBOL_GPL(nfs_set_cache_invalid);
+ 
+@@ -1777,8 +1782,10 @@ static int nfs_inode_finish_partial_attr_update(const struct nfs_fattr *fattr,
+ 		NFS_INO_INVALID_BLOCKS | NFS_INO_INVALID_OTHER |
+ 		NFS_INO_INVALID_NLINK;
+ 	unsigned long cache_validity = NFS_I(inode)->cache_validity;
++	enum nfs4_change_attr_type ctype = NFS_SERVER(inode)->change_attr_type;
+ 
+-	if (!(cache_validity & NFS_INO_INVALID_CHANGE) &&
++	if (ctype != NFS4_CHANGE_TYPE_IS_UNDEFINED &&
++	    !(cache_validity & NFS_INO_INVALID_CHANGE) &&
+ 	    (cache_validity & check_valid) != 0 &&
+ 	    (fattr->valid & NFS_ATTR_FATTR_CHANGE) != 0 &&
+ 	    nfs_inode_attrs_cmp_monotonic(fattr, inode) == 0)
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index e6eca1d7481b8..9274c9c5efea6 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -2227,7 +2227,7 @@ static int decode_fsinfo3resok(struct xdr_stream *xdr,
+ 
+ 	/* ignore properties */
+ 	result->lease_time = 0;
+-	result->change_attr_type = NFS4_CHANGE_TYPE_IS_TIME_METADATA;
++	result->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index 8d8aba305ecca..f331866dd4182 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -487,7 +487,7 @@ nfs_idmap_new(struct nfs_client *clp)
+ err_destroy_pipe:
+ 	rpc_destroy_pipe_data(idmap->idmap_pipe);
+ err:
+-	get_user_ns(idmap->user_ns);
++	put_user_ns(idmap->user_ns);
+ 	kfree(idmap);
+ 	return error;
+ }
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index e1214bb6b7ee5..1f38f8cd8c3ce 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1609,15 +1609,16 @@ static bool nfs_stateid_is_sequential(struct nfs4_state *state,
+ {
+ 	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
+ 		/* The common case - we're updating to a new sequence number */
+-		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
+-			nfs4_stateid_is_next(&state->open_stateid, stateid)) {
+-			return true;
++		if (nfs4_stateid_match_other(stateid, &state->open_stateid)) {
++			if (nfs4_stateid_is_next(&state->open_stateid, stateid))
++				return true;
++			return false;
+ 		}
+-	} else {
+-		/* This is the first OPEN in this generation */
+-		if (stateid->seqid == cpu_to_be32(1))
+-			return true;
++		/* The server returned a new stateid */
+ 	}
++	/* This is the first OPEN in this generation */
++	if (stateid->seqid == cpu_to_be32(1))
++		return true;
+ 	return false;
+ }
+ 
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index d810ae674f4e8..a0f6ff094b3a4 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -517,7 +517,7 @@ pnfs_mark_request_commit(struct nfs_page *req, struct pnfs_layout_segment *lseg,
+ {
+ 	struct pnfs_ds_commit_info *fl_cinfo = cinfo->ds;
+ 
+-	if (!lseg || !fl_cinfo->ops->mark_request_commit)
++	if (!lseg || !fl_cinfo->ops || !fl_cinfo->ops->mark_request_commit)
+ 		return false;
+ 	fl_cinfo->ops->mark_request_commit(req, lseg, cinfo, ds_commit_idx);
+ 	return true;
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index cf19914fec817..316f68f96e573 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -468,7 +468,6 @@ pnfs_bucket_alloc_ds_commits(struct list_head *list,
+ 				goto out_error;
+ 			data->ds_commit_index = i;
+ 			list_add_tail(&data->list, list);
+-			atomic_inc(&cinfo->mds->rpcs_out);
+ 			nreq++;
+ 		}
+ 		mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+@@ -520,7 +519,6 @@ pnfs_generic_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
+ 		data->ds_commit_index = -1;
+ 		list_splice_init(mds_pages, &data->pages);
+ 		list_add_tail(&data->list, &list);
+-		atomic_inc(&cinfo->mds->rpcs_out);
+ 		nreq++;
+ 	}
+ 
+@@ -895,7 +893,7 @@ static int _nfs4_pnfs_v3_ds_connect(struct nfs_server *mds_srv,
+ 	}
+ 
+ 	smp_wmb();
+-	ds->ds_clp = clp;
++	WRITE_ONCE(ds->ds_clp, clp);
+ 	dprintk("%s [new] addr: %s\n", __func__, ds->ds_remotestr);
+ out:
+ 	return status;
+@@ -973,7 +971,7 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 	}
+ 
+ 	smp_wmb();
+-	ds->ds_clp = clp;
++	WRITE_ONCE(ds->ds_clp, clp);
+ 	dprintk("%s [new] addr: %s\n", __func__, ds->ds_remotestr);
+ out:
+ 	return status;
+diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
+index ea19dbf123014..ecc4e717808c4 100644
+--- a/fs/nfs/proc.c
++++ b/fs/nfs/proc.c
+@@ -91,7 +91,7 @@ nfs_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
+ 	info->dtpref = fsinfo.tsize;
+ 	info->maxfilesize = 0x7FFFFFFF;
+ 	info->lease_time = 0;
+-	info->change_attr_type = NFS4_CHANGE_TYPE_IS_TIME_METADATA;
++	info->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
+ 	return 0;
+ }
+ 
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index eae9bf1140417..7dce3e735fc53 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1038,25 +1038,11 @@ nfs_scan_commit_list(struct list_head *src, struct list_head *dst,
+ 	struct nfs_page *req, *tmp;
+ 	int ret = 0;
+ 
+-restart:
+ 	list_for_each_entry_safe(req, tmp, src, wb_list) {
+ 		kref_get(&req->wb_kref);
+ 		if (!nfs_lock_request(req)) {
+-			int status;
+-
+-			/* Prevent deadlock with nfs_lock_and_join_requests */
+-			if (!list_empty(dst)) {
+-				nfs_release_request(req);
+-				continue;
+-			}
+-			/* Ensure we make progress to prevent livelock */
+-			mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+-			status = nfs_wait_on_request(req);
+ 			nfs_release_request(req);
+-			mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+-			if (status < 0)
+-				break;
+-			goto restart;
++			continue;
+ 		}
+ 		nfs_request_remove_commit_list(req, cinfo);
+ 		clear_bit(PG_COMMIT_TO_DS, &req->wb_flags);
+@@ -1671,10 +1657,13 @@ static void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
+ 	atomic_inc(&cinfo->rpcs_out);
+ }
+ 
+-static void nfs_commit_end(struct nfs_mds_commit_info *cinfo)
++bool nfs_commit_end(struct nfs_mds_commit_info *cinfo)
+ {
+-	if (atomic_dec_and_test(&cinfo->rpcs_out))
++	if (atomic_dec_and_test(&cinfo->rpcs_out)) {
+ 		wake_up_var(&cinfo->rpcs_out);
++		return true;
++	}
++	return false;
+ }
+ 
+ void nfs_commitdata_release(struct nfs_commit_data *data)
+@@ -1774,6 +1763,7 @@ void nfs_init_commit(struct nfs_commit_data *data,
+ 	data->res.fattr   = &data->fattr;
+ 	data->res.verf    = &data->verf;
+ 	nfs_fattr_init(&data->fattr);
++	nfs_commit_begin(cinfo->mds);
+ }
+ EXPORT_SYMBOL_GPL(nfs_init_commit);
+ 
+@@ -1820,7 +1810,6 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how,
+ 
+ 	/* Set up the argument struct */
+ 	nfs_init_commit(data, head, NULL, cinfo);
+-	atomic_inc(&cinfo->mds->rpcs_out);
+ 	if (NFS_SERVER(inode)->nfs_client->cl_minorversion)
+ 		task_flags = RPC_TASK_MOVEABLE;
+ 	return nfs_initiate_commit(NFS_CLIENT(inode), data, NFS_PROTO(inode),
+@@ -1936,6 +1925,7 @@ static int __nfs_commit_inode(struct inode *inode, int how,
+ 	int may_wait = how & FLUSH_SYNC;
+ 	int ret, nscan;
+ 
++	how &= ~FLUSH_SYNC;
+ 	nfs_init_cinfo_from_inode(&cinfo, inode);
+ 	nfs_commit_begin(cinfo.mds);
+ 	for (;;) {
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 54d7843c02114..fc5f780fa2355 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -476,10 +476,11 @@ int ocfs2_truncate_file(struct inode *inode,
+ 	 * greater than page size, so we have to truncate them
+ 	 * anyway.
+ 	 */
+-	unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
+-	truncate_inode_pages(inode->i_mapping, new_i_size);
+ 
+ 	if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
++		unmap_mapping_range(inode->i_mapping,
++				    new_i_size + PAGE_SIZE - 1, 0, 1);
++		truncate_inode_pages(inode->i_mapping, new_i_size);
+ 		status = ocfs2_truncate_inline(inode, di_bh, new_i_size,
+ 					       i_size_read(inode), 1);
+ 		if (status)
+@@ -498,6 +499,9 @@ int ocfs2_truncate_file(struct inode *inode,
+ 		goto bail_unlock_sem;
+ 	}
+ 
++	unmap_mapping_range(inode->i_mapping, new_i_size + PAGE_SIZE - 1, 0, 1);
++	truncate_inode_pages(inode->i_mapping, new_i_size);
++
+ 	status = ocfs2_commit_truncate(osb, inode, di_bh);
+ 	if (status < 0) {
+ 		mlog_errno(status);
+diff --git a/fs/orangefs/dcache.c b/fs/orangefs/dcache.c
+index fe484cf93e5cd..8bbe9486e3a62 100644
+--- a/fs/orangefs/dcache.c
++++ b/fs/orangefs/dcache.c
+@@ -26,8 +26,10 @@ static int orangefs_revalidate_lookup(struct dentry *dentry)
+ 	gossip_debug(GOSSIP_DCACHE_DEBUG, "%s: attempting lookup.\n", __func__);
+ 
+ 	new_op = op_alloc(ORANGEFS_VFS_OP_LOOKUP);
+-	if (!new_op)
++	if (!new_op) {
++		ret = -ENOMEM;
+ 		goto out_put_parent;
++	}
+ 
+ 	new_op->upcall.req.lookup.sym_follow = ORANGEFS_LOOKUP_LINK_NO_FOLLOW;
+ 	new_op->upcall.req.lookup.parent_refn = parent->refn;
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index c88ac571593dc..44fea16751f1d 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -17,6 +17,7 @@
+ 
+ struct ovl_aio_req {
+ 	struct kiocb iocb;
++	refcount_t ref;
+ 	struct kiocb *orig_iocb;
+ 	struct fd fd;
+ };
+@@ -252,6 +253,14 @@ static rwf_t ovl_iocb_to_rwf(int ifl)
+ 	return flags;
+ }
+ 
++static inline void ovl_aio_put(struct ovl_aio_req *aio_req)
++{
++	if (refcount_dec_and_test(&aio_req->ref)) {
++		fdput(aio_req->fd);
++		kmem_cache_free(ovl_aio_request_cachep, aio_req);
++	}
++}
++
+ static void ovl_aio_cleanup_handler(struct ovl_aio_req *aio_req)
+ {
+ 	struct kiocb *iocb = &aio_req->iocb;
+@@ -268,8 +277,7 @@ static void ovl_aio_cleanup_handler(struct ovl_aio_req *aio_req)
+ 	}
+ 
+ 	orig_iocb->ki_pos = iocb->ki_pos;
+-	fdput(aio_req->fd);
+-	kmem_cache_free(ovl_aio_request_cachep, aio_req);
++	ovl_aio_put(aio_req);
+ }
+ 
+ static void ovl_aio_rw_complete(struct kiocb *iocb, long res, long res2)
+@@ -319,7 +327,9 @@ static ssize_t ovl_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		aio_req->orig_iocb = iocb;
+ 		kiocb_clone(&aio_req->iocb, iocb, real.file);
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
++		refcount_set(&aio_req->ref, 2);
+ 		ret = vfs_iocb_iter_read(real.file, &aio_req->iocb, iter);
++		ovl_aio_put(aio_req);
+ 		if (ret != -EIOCBQUEUED)
+ 			ovl_aio_cleanup_handler(aio_req);
+ 	}
+@@ -390,7 +400,9 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 		kiocb_clone(&aio_req->iocb, iocb, real.file);
+ 		aio_req->iocb.ki_flags = ifl;
+ 		aio_req->iocb.ki_complete = ovl_aio_rw_complete;
++		refcount_set(&aio_req->ref, 2);
+ 		ret = vfs_iocb_iter_write(real.file, &aio_req->iocb, iter);
++		ovl_aio_put(aio_req);
+ 		if (ret != -EIOCBQUEUED)
+ 			ovl_aio_cleanup_handler(aio_req);
+ 	}
+diff --git a/fs/proc/stat.c b/fs/proc/stat.c
+index 6561a06ef9059..4fb8729a68d4e 100644
+--- a/fs/proc/stat.c
++++ b/fs/proc/stat.c
+@@ -24,7 +24,7 @@
+ 
+ #ifdef arch_idle_time
+ 
+-static u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
++u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
+ {
+ 	u64 idle;
+ 
+@@ -46,7 +46,7 @@ static u64 get_iowait_time(struct kernel_cpustat *kcs, int cpu)
+ 
+ #else
+ 
+-static u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
++u64 get_idle_time(struct kernel_cpustat *kcs, int cpu)
+ {
+ 	u64 idle, idle_usecs = -1ULL;
+ 
+diff --git a/fs/proc/uptime.c b/fs/proc/uptime.c
+index 5a1b228964fb7..deb99bc9b7e6b 100644
+--- a/fs/proc/uptime.c
++++ b/fs/proc/uptime.c
+@@ -12,18 +12,22 @@ static int uptime_proc_show(struct seq_file *m, void *v)
+ {
+ 	struct timespec64 uptime;
+ 	struct timespec64 idle;
+-	u64 nsec;
++	u64 idle_nsec;
+ 	u32 rem;
+ 	int i;
+ 
+-	nsec = 0;
+-	for_each_possible_cpu(i)
+-		nsec += (__force u64) kcpustat_cpu(i).cpustat[CPUTIME_IDLE];
++	idle_nsec = 0;
++	for_each_possible_cpu(i) {
++		struct kernel_cpustat kcs;
++
++		kcpustat_cpu_fetch(&kcs, i);
++		idle_nsec += get_idle_time(&kcs, i);
++	}
+ 
+ 	ktime_get_boottime_ts64(&uptime);
+ 	timens_add_boottime(&uptime);
+ 
+-	idle.tv_sec = div_u64_rem(nsec, NSEC_PER_SEC, &rem);
++	idle.tv_sec = div_u64_rem(idle_nsec, NSEC_PER_SEC, &rem);
+ 	idle.tv_nsec = rem;
+ 	seq_printf(m, "%lu.%02lu %lu.%02lu\n",
+ 			(unsigned long) uptime.tv_sec,
+diff --git a/fs/quota/quota_tree.c b/fs/quota/quota_tree.c
+index d3e995e1046fb..5f2405994280a 100644
+--- a/fs/quota/quota_tree.c
++++ b/fs/quota/quota_tree.c
+@@ -414,6 +414,7 @@ static int free_dqentry(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ 		quota_error(dquot->dq_sb, "Quota structure has offset to "
+ 			"other block (%u) than it should (%u)", blk,
+ 			(uint)(dquot->dq_off >> info->dqi_blocksize_bits));
++		ret = -EIO;
+ 		goto out_buf;
+ 	}
+ 	ret = read_blk(info, blk, buf);
+@@ -479,6 +480,13 @@ static int remove_tree(struct qtree_mem_dqinfo *info, struct dquot *dquot,
+ 		goto out_buf;
+ 	}
+ 	newblk = le32_to_cpu(ref[get_index(info, dquot->dq_id, depth)]);
++	if (newblk < QT_TREEOFF || newblk >= info->dqi_blocks) {
++		quota_error(dquot->dq_sb, "Getting block too big (%u >= %u)",
++			    newblk, info->dqi_blocks);
++		ret = -EUCLEAN;
++		goto out_buf;
++	}
++
+ 	if (depth == info->dqi_qtree_depth - 1) {
+ 		ret = free_dqentry(info, dquot, newblk);
+ 		newblk = 0;
+@@ -578,6 +586,13 @@ static loff_t find_tree_dqentry(struct qtree_mem_dqinfo *info,
+ 	blk = le32_to_cpu(ref[get_index(info, dquot->dq_id, depth)]);
+ 	if (!blk)	/* No reference? */
+ 		goto out_buf;
++	if (blk < QT_TREEOFF || blk >= info->dqi_blocks) {
++		quota_error(dquot->dq_sb, "Getting block too big (%u >= %u)",
++			    blk, info->dqi_blocks);
++		ret = -EUCLEAN;
++		goto out_buf;
++	}
++
+ 	if (depth < info->dqi_qtree_depth - 1)
+ 		ret = find_tree_dqentry(info, dquot, blk, depth+1);
+ 	else
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 1261e8b41edb4..925a621b432e3 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -432,7 +432,8 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+ 	if (unlikely(!inode))
+ 		return failed_creating(dentry);
+ 
+-	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO;
++	/* Do not set bits for OTH */
++	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR | S_IRGRP | S_IXUSR | S_IXGRP;
+ 	inode->i_op = ops;
+ 	inode->i_fop = &simple_dir_operations;
+ 
+diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
+index f681bbdbc6982..36f7eb9d06639 100644
+--- a/include/drm/ttm/ttm_bo_api.h
++++ b/include/drm/ttm/ttm_bo_api.h
+@@ -594,8 +594,7 @@ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
+ 
+ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+ 				    pgprot_t prot,
+-				    pgoff_t num_prefault,
+-				    pgoff_t fault_page_size);
++				    pgoff_t num_prefault);
+ 
+ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf);
+ 
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 4b0f8bb0671d1..e7979fe7e4fad 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1223,8 +1223,6 @@ struct blk_plug {
+ 	bool multiple_queues;
+ 	bool nowait;
+ };
+-#define BLK_MAX_REQUEST_COUNT 16
+-#define BLK_PLUG_FLUSH_SIZE (128 * 1024)
+ 
+ struct blk_plug_cb;
+ typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
+diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
+new file mode 100644
+index 0000000000000..a075b70b9a70c
+--- /dev/null
++++ b/include/linux/cc_platform.h
+@@ -0,0 +1,88 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Confidential Computing Platform Capability checks
++ *
++ * Copyright (C) 2021 Advanced Micro Devices, Inc.
++ *
++ * Author: Tom Lendacky <thomas.lendacky@amd.com>
++ */
++
++#ifndef _LINUX_CC_PLATFORM_H
++#define _LINUX_CC_PLATFORM_H
++
++#include <linux/types.h>
++#include <linux/stddef.h>
++
++/**
++ * enum cc_attr - Confidential computing attributes
++ *
++ * These attributes represent confidential computing features that are
++ * currently active.
++ */
++enum cc_attr {
++	/**
++	 * @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
++	 *
++	 * The platform/OS is running with active memory encryption. This
++	 * includes running either as a bare-metal system or a hypervisor
++	 * and actively using memory encryption or as a guest/virtual machine
++	 * and actively using memory encryption.
++	 *
++	 * Examples include SME, SEV and SEV-ES.
++	 */
++	CC_ATTR_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
++	 *
++	 * The platform/OS is running as a bare-metal system or a hypervisor
++	 * and actively using memory encryption.
++	 *
++	 * Examples include SME.
++	 */
++	CC_ATTR_HOST_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
++	 *
++	 * The platform/OS is running as a guest/virtual machine and actively
++	 * using memory encryption.
++	 *
++	 * Examples include SEV and SEV-ES.
++	 */
++	CC_ATTR_GUEST_MEM_ENCRYPT,
++
++	/**
++	 * @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
++	 *
++	 * The platform/OS is running as a guest/virtual machine and actively
++	 * using memory encryption and register state encryption.
++	 *
++	 * Examples include SEV-ES.
++	 */
++	CC_ATTR_GUEST_STATE_ENCRYPT,
++};
++
++#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
++
++/**
++ * cc_platform_has() - Checks if the specified cc_attr attribute is active
++ * @attr: Confidential computing attribute to check
++ *
++ * The cc_platform_has() function will return an indicator as to whether the
++ * specified Confidential Computing attribute is currently active.
++ *
++ * Context: Any context
++ * Return:
++ * * TRUE  - Specified Confidential Computing attribute is active
++ * * FALSE - Specified Confidential Computing attribute is not active
++ */
++bool cc_platform_has(enum cc_attr attr);
++
++#else	/* !CONFIG_ARCH_HAS_CC_PLATFORM */
++
++static inline bool cc_platform_has(enum cc_attr attr) { return false; }
++
++#endif	/* CONFIG_ARCH_HAS_CC_PLATFORM */
++
++#endif	/* _LINUX_CC_PLATFORM_H */
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index e1c705fdfa7c5..db2e147e069fe 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -752,107 +752,54 @@ static inline void cgroup_threadgroup_change_end(struct task_struct *tsk) {}
+  * sock_cgroup_data is embedded at sock->sk_cgrp_data and contains
+  * per-socket cgroup information except for memcg association.
+  *
+- * On legacy hierarchies, net_prio and net_cls controllers directly set
+- * attributes on each sock which can then be tested by the network layer.
+- * On the default hierarchy, each sock is associated with the cgroup it was
+- * created in and the networking layer can match the cgroup directly.
+- *
+- * To avoid carrying all three cgroup related fields separately in sock,
+- * sock_cgroup_data overloads (prioidx, classid) and the cgroup pointer.
+- * On boot, sock_cgroup_data records the cgroup that the sock was created
+- * in so that cgroup2 matches can be made; however, once either net_prio or
+- * net_cls starts being used, the area is overridden to carry prioidx and/or
+- * classid.  The two modes are distinguished by whether the lowest bit is
+- * set.  Clear bit indicates cgroup pointer while set bit prioidx and
+- * classid.
+- *
+- * While userland may start using net_prio or net_cls at any time, once
+- * either is used, cgroup2 matching no longer works.  There is no reason to
+- * mix the two and this is in line with how legacy and v2 compatibility is
+- * handled.  On mode switch, cgroup references which are already being
+- * pointed to by socks may be leaked.  While this can be remedied by adding
+- * synchronization around sock_cgroup_data, given that the number of leaked
+- * cgroups is bound and highly unlikely to be high, this seems to be the
+- * better trade-off.
++ * On legacy hierarchies, net_prio and net_cls controllers directly
++ * set attributes on each sock which can then be tested by the network
++ * layer. On the default hierarchy, each sock is associated with the
++ * cgroup it was created in and the networking layer can match the
++ * cgroup directly.
+  */
+ struct sock_cgroup_data {
+-	union {
+-#ifdef __LITTLE_ENDIAN
+-		struct {
+-			u8	is_data : 1;
+-			u8	no_refcnt : 1;
+-			u8	unused : 6;
+-			u8	padding;
+-			u16	prioidx;
+-			u32	classid;
+-		} __packed;
+-#else
+-		struct {
+-			u32	classid;
+-			u16	prioidx;
+-			u8	padding;
+-			u8	unused : 6;
+-			u8	no_refcnt : 1;
+-			u8	is_data : 1;
+-		} __packed;
++	struct cgroup	*cgroup; /* v2 */
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	u32		classid; /* v1 */
++#endif
++#ifdef CONFIG_CGROUP_NET_PRIO
++	u16		prioidx; /* v1 */
+ #endif
+-		u64		val;
+-	};
+ };
+ 
+-/*
+- * There's a theoretical window where the following accessors race with
+- * updaters and return part of the previous pointer as the prioidx or
+- * classid.  Such races are short-lived and the result isn't critical.
+- */
+ static inline u16 sock_cgroup_prioidx(const struct sock_cgroup_data *skcd)
+ {
+-	/* fallback to 1 which is always the ID of the root cgroup */
+-	return (skcd->is_data & 1) ? skcd->prioidx : 1;
++#ifdef CONFIG_CGROUP_NET_PRIO
++	return READ_ONCE(skcd->prioidx);
++#else
++	return 1;
++#endif
+ }
+ 
+ static inline u32 sock_cgroup_classid(const struct sock_cgroup_data *skcd)
+ {
+-	/* fallback to 0 which is the unconfigured default classid */
+-	return (skcd->is_data & 1) ? skcd->classid : 0;
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	return READ_ONCE(skcd->classid);
++#else
++	return 0;
++#endif
+ }
+ 
+-/*
+- * If invoked concurrently, the updaters may clobber each other.  The
+- * caller is responsible for synchronization.
+- */
+ static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
+ 					   u16 prioidx)
+ {
+-	struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
+-
+-	if (sock_cgroup_prioidx(&skcd_buf) == prioidx)
+-		return;
+-
+-	if (!(skcd_buf.is_data & 1)) {
+-		skcd_buf.val = 0;
+-		skcd_buf.is_data = 1;
+-	}
+-
+-	skcd_buf.prioidx = prioidx;
+-	WRITE_ONCE(skcd->val, skcd_buf.val);	/* see sock_cgroup_ptr() */
++#ifdef CONFIG_CGROUP_NET_PRIO
++	WRITE_ONCE(skcd->prioidx, prioidx);
++#endif
+ }
+ 
+ static inline void sock_cgroup_set_classid(struct sock_cgroup_data *skcd,
+ 					   u32 classid)
+ {
+-	struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};
+-
+-	if (sock_cgroup_classid(&skcd_buf) == classid)
+-		return;
+-
+-	if (!(skcd_buf.is_data & 1)) {
+-		skcd_buf.val = 0;
+-		skcd_buf.is_data = 1;
+-	}
+-
+-	skcd_buf.classid = classid;
+-	WRITE_ONCE(skcd->val, skcd_buf.val);	/* see sock_cgroup_ptr() */
++#ifdef CONFIG_CGROUP_NET_CLASSID
++	WRITE_ONCE(skcd->classid, classid);
++#endif
+ }
+ 
+ #else	/* CONFIG_SOCK_CGROUP_DATA */
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 7bf60454a3136..75c151413fda8 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -829,33 +829,13 @@ static inline void cgroup_account_cputime_field(struct task_struct *task,
+  */
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ 
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-extern spinlock_t cgroup_sk_update_lock;
+-#endif
+-
+-void cgroup_sk_alloc_disable(void);
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd);
+ void cgroup_sk_clone(struct sock_cgroup_data *skcd);
+ void cgroup_sk_free(struct sock_cgroup_data *skcd);
+ 
+ static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
+ {
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-	unsigned long v;
+-
+-	/*
+-	 * @skcd->val is 64bit but the following is safe on 32bit too as we
+-	 * just need the lower ulong to be written and read atomically.
+-	 */
+-	v = READ_ONCE(skcd->val);
+-
+-	if (v & 3)
+-		return &cgrp_dfl_root.cgrp;
+-
+-	return (struct cgroup *)(unsigned long)v ?: &cgrp_dfl_root.cgrp;
+-#else
+-	return (struct cgroup *)(unsigned long)skcd->val;
+-#endif
++	return skcd->cgroup;
+ }
+ 
+ #else	/* CONFIG_CGROUP_DATA */
+diff --git a/include/linux/console.h b/include/linux/console.h
+index 20874db50bc8a..a97f277cfdfa3 100644
+--- a/include/linux/console.h
++++ b/include/linux/console.h
+@@ -149,6 +149,8 @@ struct console {
+ 	short	flags;
+ 	short	index;
+ 	int	cflag;
++	uint	ispeed;
++	uint	ospeed;
+ 	void	*data;
+ 	struct	 console *next;
+ };
+diff --git a/include/linux/dsa/ocelot.h b/include/linux/dsa/ocelot.h
+index c6bc45ae5e03a..f7e9f7d48b70b 100644
+--- a/include/linux/dsa/ocelot.h
++++ b/include/linux/dsa/ocelot.h
+@@ -6,6 +6,27 @@
+ #define _NET_DSA_TAG_OCELOT_H
+ 
+ #include <linux/packing.h>
++#include <linux/skbuff.h>
++
++struct ocelot_skb_cb {
++	struct sk_buff *clone;
++	unsigned int ptp_class; /* valid only for clones */
++	u32 tstamp_lo;
++	u8 ptp_cmd;
++	u8 ts_id;
++};
++
++#define OCELOT_SKB_CB(skb) \
++	((struct ocelot_skb_cb *)((skb)->cb))
++
++#define IFH_TAG_TYPE_C			0
++#define IFH_TAG_TYPE_S			1
++
++#define IFH_REW_OP_NOOP			0x0
++#define IFH_REW_OP_DSCP			0x1
++#define IFH_REW_OP_ONE_STEP_PTP		0x2
++#define IFH_REW_OP_TWO_STEP_PTP		0x3
++#define IFH_REW_OP_ORIGIN_PTP		0x5
+ 
+ #define OCELOT_TAG_LEN			16
+ #define OCELOT_SHORT_PREFIX_LEN		4
+@@ -215,4 +236,21 @@ static inline void ocelot_ifh_set_vid(void *injection, u64 vid)
+ 	packing(injection, &vid, 11, 0, OCELOT_TAG_LEN, PACK, 0);
+ }
+ 
++/* Determine the PTP REW_OP to use for injecting the given skb */
++static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
++{
++	struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
++	u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
++	u32 rew_op = 0;
++
++	if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
++		rew_op = ptp_cmd;
++		rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
++	} else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
++		rew_op = ptp_cmd;
++	}
++
++	return rew_op;
++}
++
+ #endif
+diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h
+index 1e7bf78cb3829..aba348d58ff61 100644
+--- a/include/linux/ethtool_netlink.h
++++ b/include/linux/ethtool_netlink.h
+@@ -10,6 +10,9 @@
+ #define __ETHTOOL_LINK_MODE_MASK_NWORDS \
+ 	DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32)
+ 
++#define ETHTOOL_PAUSE_STAT_CNT	(__ETHTOOL_A_PAUSE_STAT_CNT -		\
++				 ETHTOOL_A_PAUSE_STAT_TX_FRAMES)
++
+ enum ethtool_multicast_groups {
+ 	ETHNL_MCGRP_MONITOR,
+ };
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 83b896044e79f..c227c45121d6a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1027,6 +1027,7 @@ extern int bpf_jit_enable;
+ extern int bpf_jit_harden;
+ extern int bpf_jit_kallsyms;
+ extern long bpf_jit_limit;
++extern long bpf_jit_limit_max;
+ 
+ typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
+ 
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index c1be37437e778..0c70febd03e95 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -280,7 +280,10 @@ __FORTIFY_INLINE char *strcpy(char *p, const char *q)
+ 	if (p_size == (size_t)-1 && q_size == (size_t)-1)
+ 		return __underlying_strcpy(p, q);
+ 	size = strlen(q) + 1;
+-	/* test here to use the more stringent object size */
++	/* Compile-time check for const size overflow. */
++	if (__builtin_constant_p(size) && p_size < size)
++		__write_overflow();
++	/* Run-time check for dynamic size overflow. */
+ 	if (p_size < size)
+ 		fortify_panic(__func__);
+ 	memcpy(p, q, size);
+diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
+index 44ae1a7eb9e39..69ae6b2784645 100644
+--- a/include/linux/kernel_stat.h
++++ b/include/linux/kernel_stat.h
+@@ -102,6 +102,7 @@ extern void account_system_index_time(struct task_struct *, u64,
+ 				      enum cpu_usage_stat);
+ extern void account_steal_time(u64);
+ extern void account_idle_time(u64);
++extern u64 get_idle_time(struct kernel_cpustat *kcs, int cpu);
+ 
+ #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+ static inline void account_process_tick(struct task_struct *tsk, int user)
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index cb95d3f3337d5..3d9f2ea6d2a5d 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -390,7 +390,7 @@ enum {
+ 	/* This should match the actual table size of
+ 	 * ata_eh_cmd_timeout_table in libata-eh.c.
+ 	 */
+-	ATA_EH_CMD_TIMEOUT_TABLE_SIZE = 6,
++	ATA_EH_CMD_TIMEOUT_TABLE_SIZE = 7,
+ 
+ 	/* Horkage types. May be set by libata or controller on drives
+ 	   (some horkage may be drive/controller pair dependent */
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index ce64745948722..36405ce03b1dc 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -564,6 +564,7 @@ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int  nfs_commit_inode(struct inode *, int);
+ extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
++bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+ static inline int
+ nfs_have_writebacks(struct inode *inode)
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index 896c16d2c5fb2..913aa60228b16 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -177,8 +177,10 @@ static inline void posix_cputimers_group_init(struct posix_cputimers *pct,
+ #endif
+ 
+ #ifdef CONFIG_POSIX_CPU_TIMERS_TASK_WORK
++void clear_posix_cputimers_work(struct task_struct *p);
+ void posix_cputimers_init_work(void);
+ #else
++static inline void clear_posix_cputimers_work(struct task_struct *p) { }
+ static inline void posix_cputimers_init_work(void) { }
+ #endif
+ 
+diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
+index d97dcd049f18f..a8dcf8a9ae885 100644
+--- a/include/linux/rpmsg.h
++++ b/include/linux/rpmsg.h
+@@ -231,7 +231,7 @@ static inline struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev
+ 	/* This shouldn't be possible */
+ 	WARN_ON(1);
+ 
+-	return ERR_PTR(-ENXIO);
++	return NULL;
+ }
+ 
+ static inline int rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len)
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 8e10c7accdbcc..4ee118cf06971 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2030,6 +2030,7 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ #endif /* CONFIG_SMP */
+ 
+ extern bool sched_task_on_rq(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ /*
+  * In order to reduce various lock holder preemption latencies provide an
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index ef02be869cf28..ba88a69874004 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -54,7 +54,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
+ extern void init_idle(struct task_struct *idle, int cpu);
+ 
+ extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
+-extern void sched_post_fork(struct task_struct *p);
++extern void sched_post_fork(struct task_struct *p,
++			    struct kernel_clone_args *kargs);
+ extern void sched_dead(struct task_struct *p);
+ 
+ void __noreturn do_task_dead(void);
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index 2413427e439c7..d10150587d819 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -25,7 +25,11 @@ static inline void *task_stack_page(const struct task_struct *task)
+ 
+ static inline unsigned long *end_of_stack(const struct task_struct *task)
+ {
++#ifdef CONFIG_STACK_GROWSUP
++	return (unsigned long *)((unsigned long)task->stack + THREAD_SIZE) - 1;
++#else
+ 	return task->stack;
++#endif
+ }
+ 
+ #elif !defined(__HAVE_THREAD_FUNCTIONS)
+diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h
+index dd99569595fd3..5733890df64f5 100644
+--- a/include/linux/seq_file.h
++++ b/include/linux/seq_file.h
+@@ -194,7 +194,7 @@ static const struct file_operations __name ## _fops = {			\
+ #define DEFINE_PROC_SHOW_ATTRIBUTE(__name)				\
+ static int __name ## _open(struct inode *inode, struct file *file)	\
+ {									\
+-	return single_open(file, __name ## _show, inode->i_private);	\
++	return single_open(file, __name ## _show, PDE_DATA(inode));	\
+ }									\
+ 									\
+ static const struct proc_ops __name ## _proc_ops = {			\
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 14ab0c0bc9241..94e2a1f6e58db 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -508,8 +508,22 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
+ 
+ #if IS_ENABLED(CONFIG_NET_SOCK_MSG)
+ 
+-/* We only have one bit so far. */
+-#define BPF_F_PTR_MASK ~(BPF_F_INGRESS)
++#define BPF_F_STRPARSER	(1UL << 1)
++
++/* We only have two bits so far. */
++#define BPF_F_PTR_MASK ~(BPF_F_INGRESS | BPF_F_STRPARSER)
++
++static inline bool skb_bpf_strparser(const struct sk_buff *skb)
++{
++	unsigned long sk_redir = skb->_sk_redir;
++
++	return sk_redir & BPF_F_STRPARSER;
++}
++
++static inline void skb_bpf_set_strparser(struct sk_buff *skb)
++{
++	skb->_sk_redir |= BPF_F_STRPARSER;
++}
+ 
+ static inline bool skb_bpf_ingress(const struct sk_buff *skb)
+ {
+diff --git a/include/linux/surface_aggregator/controller.h b/include/linux/surface_aggregator/controller.h
+index 068e1982ad371..74bfdffaf7b0e 100644
+--- a/include/linux/surface_aggregator/controller.h
++++ b/include/linux/surface_aggregator/controller.h
+@@ -792,8 +792,8 @@ enum ssam_event_mask {
+ #define SSAM_EVENT_REGISTRY_KIP	\
+ 	SSAM_EVENT_REGISTRY(SSAM_SSH_TC_KIP, 0x02, 0x27, 0x28)
+ 
+-#define SSAM_EVENT_REGISTRY_REG \
+-	SSAM_EVENT_REGISTRY(SSAM_SSH_TC_REG, 0x02, 0x01, 0x02)
++#define SSAM_EVENT_REGISTRY_REG(tid)\
++	SSAM_EVENT_REGISTRY(SSAM_SSH_TC_REG, tid, 0x01, 0x02)
+ 
+ /**
+  * enum ssam_event_notifier_flags - Flags for event notifiers.
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index aa11fe323c56b..12d827734686d 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -269,6 +269,7 @@ enum tpm2_cc_attrs {
+ #define TPM_VID_INTEL    0x8086
+ #define TPM_VID_WINBOND  0x1050
+ #define TPM_VID_STM      0x104A
++#define TPM_VID_ATML     0x1114
+ 
+ enum tpm_chip_flags {
+ 	TPM_CHIP_FLAG_TPM2		= BIT(1),
+diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
+index 12955cb460d23..3b5986cee0739 100644
+--- a/include/media/videobuf2-core.h
++++ b/include/media/videobuf2-core.h
+@@ -46,6 +46,7 @@ enum vb2_memory {
+ 
+ struct vb2_fileio_data;
+ struct vb2_threadio_data;
++struct vb2_buffer;
+ 
+ /**
+  * struct vb2_mem_ops - memory handling/memory allocator operations.
+@@ -53,10 +54,8 @@ struct vb2_threadio_data;
+  *		return ERR_PTR() on failure or a pointer to allocator private,
+  *		per-buffer data on success; the returned private structure
+  *		will then be passed as @buf_priv argument to other ops in this
+- *		structure. Additional gfp_flags to use when allocating the
+- *		are also passed to this operation. These flags are from the
+- *		gfp_flags field of vb2_queue. The size argument to this function
+- *		shall be *page aligned*.
++ *		structure. The size argument to this function shall be
++ *		*page aligned*.
+  * @put:	inform the allocator that the buffer will no longer be used;
+  *		usually will result in the allocator freeing the buffer (if
+  *		no other users of this buffer are present); the @buf_priv
+@@ -117,31 +116,33 @@ struct vb2_threadio_data;
+  *       map_dmabuf, unmap_dmabuf.
+  */
+ struct vb2_mem_ops {
+-	void		*(*alloc)(struct device *dev, unsigned long attrs,
+-				  unsigned long size,
+-				  enum dma_data_direction dma_dir,
+-				  gfp_t gfp_flags);
++	void		*(*alloc)(struct vb2_buffer *vb,
++				  struct device *dev,
++				  unsigned long size);
+ 	void		(*put)(void *buf_priv);
+-	struct dma_buf *(*get_dmabuf)(void *buf_priv, unsigned long flags);
+-
+-	void		*(*get_userptr)(struct device *dev, unsigned long vaddr,
+-					unsigned long size,
+-					enum dma_data_direction dma_dir);
++	struct dma_buf *(*get_dmabuf)(struct vb2_buffer *vb,
++				      void *buf_priv,
++				      unsigned long flags);
++
++	void		*(*get_userptr)(struct vb2_buffer *vb,
++					struct device *dev,
++					unsigned long vaddr,
++					unsigned long size);
+ 	void		(*put_userptr)(void *buf_priv);
+ 
+ 	void		(*prepare)(void *buf_priv);
+ 	void		(*finish)(void *buf_priv);
+ 
+-	void		*(*attach_dmabuf)(struct device *dev,
++	void		*(*attach_dmabuf)(struct vb2_buffer *vb,
++					  struct device *dev,
+ 					  struct dma_buf *dbuf,
+-					  unsigned long size,
+-					  enum dma_data_direction dma_dir);
++					  unsigned long size);
+ 	void		(*detach_dmabuf)(void *buf_priv);
+ 	int		(*map_dmabuf)(void *buf_priv);
+ 	void		(*unmap_dmabuf)(void *buf_priv);
+ 
+-	void		*(*vaddr)(void *buf_priv);
+-	void		*(*cookie)(void *buf_priv);
++	void		*(*vaddr)(struct vb2_buffer *vb, void *buf_priv);
++	void		*(*cookie)(struct vb2_buffer *vb, void *buf_priv);
+ 
+ 	unsigned int	(*num_users)(void *buf_priv);
+ 
+diff --git a/include/memory/renesas-rpc-if.h b/include/memory/renesas-rpc-if.h
+index e3e770f76f349..77c694a19149d 100644
+--- a/include/memory/renesas-rpc-if.h
++++ b/include/memory/renesas-rpc-if.h
+@@ -59,6 +59,7 @@ struct rpcif_op {
+ 
+ struct rpcif {
+ 	struct device *dev;
++	void __iomem *base;
+ 	void __iomem *dirmap;
+ 	struct regmap *regmap;
+ 	struct reset_control *rstc;
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index b06c2d02ec84e..fa6a87246a7b8 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -289,7 +289,7 @@ static inline void inet_csk_prepare_for_destroy_sock(struct sock *sk)
+ {
+ 	/* The below has to be done to allow calling inet_csk_destroy_sock */
+ 	sock_set_flag(sk, SOCK_DEAD);
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(*sk->sk_prot->orphan_count);
+ }
+ 
+ void inet_csk_destroy_sock(struct sock *sk);
+diff --git a/include/net/llc.h b/include/net/llc.h
+index df282d9b40170..9c10b121b49b0 100644
+--- a/include/net/llc.h
++++ b/include/net/llc.h
+@@ -72,7 +72,9 @@ struct llc_sap {
+ static inline
+ struct hlist_head *llc_sk_dev_hash(struct llc_sap *sap, int ifindex)
+ {
+-	return &sap->sk_dev_hash[ifindex % LLC_SK_DEV_HASH_ENTRIES];
++	u32 bucket = hash_32(ifindex, LLC_SK_DEV_HASH_BITS);
++
++	return &sap->sk_dev_hash[bucket];
+ }
+ 
+ static inline
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 22ced1381ede5..d5767e25509cc 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -253,6 +253,7 @@ static inline void *neighbour_priv(const struct neighbour *n)
+ #define NEIGH_UPDATE_F_OVERRIDE			0x00000001
+ #define NEIGH_UPDATE_F_WEAK_OVERRIDE		0x00000002
+ #define NEIGH_UPDATE_F_OVERRIDE_ISROUTER	0x00000004
++#define NEIGH_UPDATE_F_USE			0x10000000
+ #define NEIGH_UPDATE_F_EXT_LEARNED		0x20000000
+ #define NEIGH_UPDATE_F_ISROUTER			0x40000000
+ #define NEIGH_UPDATE_F_ADMIN			0x80000000
+@@ -504,10 +505,15 @@ static inline int neigh_output(struct neighbour *n, struct sk_buff *skb,
+ {
+ 	const struct hh_cache *hh = &n->hh;
+ 
+-	if ((n->nud_state & NUD_CONNECTED) && hh->hh_len && !skip_cache)
++	/* n->nud_state and hh->hh_len could be changed under us.
++	 * neigh_hh_output() is taking care of the race later.
++	 */
++	if (!skip_cache &&
++	    (READ_ONCE(n->nud_state) & NUD_CONNECTED) &&
++	    READ_ONCE(hh->hh_len))
+ 		return neigh_hh_output(hh, skb);
+-	else
+-		return n->output(n, skb);
++
++	return n->output(n, skb);
+ }
+ 
+ static inline struct neighbour *
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 9ed33e6840bd6..30da65a421d7a 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -308,6 +308,8 @@ struct Qdisc_ops {
+ 					  struct netlink_ext_ack *extack);
+ 	void			(*attach)(struct Qdisc *sch);
+ 	int			(*change_tx_queue_len)(struct Qdisc *, unsigned int);
++	void			(*change_real_num_tx)(struct Qdisc *sch,
++						      unsigned int new_real_tx);
+ 
+ 	int			(*dump)(struct Qdisc *, struct sk_buff *);
+ 	int			(*dump_stats)(struct Qdisc *, struct gnet_dump *);
+@@ -684,6 +686,8 @@ void qdisc_class_hash_grow(struct Qdisc *, struct Qdisc_class_hash *);
+ void qdisc_class_hash_destroy(struct Qdisc_class_hash *);
+ 
+ int dev_qdisc_change_tx_queue_len(struct net_device *dev);
++void dev_qdisc_change_real_num_tx(struct net_device *dev,
++				  unsigned int new_real_tx);
+ void dev_init_scheduler(struct net_device *dev);
+ void dev_shutdown(struct net_device *dev);
+ void dev_activate(struct net_device *dev);
+diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
+index 69bab88ad66b1..189fdb9db1622 100644
+--- a/include/net/sctp/sctp.h
++++ b/include/net/sctp/sctp.h
+@@ -626,7 +626,8 @@ static inline __u32 sctp_min_frag_point(struct sctp_sock *sp, __u16 datasize)
+ 
+ static inline int sctp_transport_pl_hlen(struct sctp_transport *t)
+ {
+-	return __sctp_mtu_payload(sctp_sk(t->asoc->base.sk), t, 0, 0);
++	return __sctp_mtu_payload(sctp_sk(t->asoc->base.sk), t, 0, 0) -
++	       sizeof(struct sctphdr);
+ }
+ 
+ static inline void sctp_transport_pl_reset(struct sctp_transport *t)
+@@ -653,12 +654,10 @@ static inline void sctp_transport_pl_update(struct sctp_transport *t)
+ 	if (t->pl.state == SCTP_PL_DISABLED)
+ 		return;
+ 
+-	if (del_timer(&t->probe_timer))
+-		sctp_transport_put(t);
+-
+ 	t->pl.state = SCTP_PL_BASE;
+ 	t->pl.pmtu = SCTP_BASE_PLPMTU;
+ 	t->pl.probe_size = SCTP_BASE_PLPMTU;
++	sctp_transport_reset_probe_timer(t);
+ }
+ 
+ static inline bool sctp_transport_pl_enabled(struct sctp_transport *t)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index d28b9bb5ef5a0..95e0a290b648b 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1235,7 +1235,7 @@ struct proto {
+ 	unsigned int		useroffset;	/* Usercopy region offset */
+ 	unsigned int		usersize;	/* Usercopy region size */
+ 
+-	struct percpu_counter	*orphan_count;
++	unsigned int __percpu	*orphan_count;
+ 
+ 	struct request_sock_ops	*rsk_prot;
+ 	struct timewait_sock_ops *twsk_prot;
+diff --git a/include/net/strparser.h b/include/net/strparser.h
+index 1d20b98493a10..732b7097d78e4 100644
+--- a/include/net/strparser.h
++++ b/include/net/strparser.h
+@@ -54,10 +54,28 @@ struct strp_msg {
+ 	int offset;
+ };
+ 
++struct _strp_msg {
++	/* Internal cb structure. struct strp_msg must be first for passing
++	 * to upper layer.
++	 */
++	struct strp_msg strp;
++	int accum_len;
++};
++
++struct sk_skb_cb {
++#define SK_SKB_CB_PRIV_LEN 20
++	unsigned char data[SK_SKB_CB_PRIV_LEN];
++	struct _strp_msg strp;
++	/* temp_reg is a temporary register used for bpf_convert_data_end_access
++	 * when dst_reg == src_reg.
++	 */
++	u64 temp_reg;
++};
++
+ static inline struct strp_msg *strp_msg(struct sk_buff *skb)
+ {
+ 	return (struct strp_msg *)((void *)skb->cb +
+-		offsetof(struct qdisc_skb_cb, data));
++		offsetof(struct sk_skb_cb, strp));
+ }
+ 
+ /* Structure for an attached lower socket */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 784d5c3ef1c5b..c5cf900539209 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -48,7 +48,9 @@
+ 
+ extern struct inet_hashinfo tcp_hashinfo;
+ 
+-extern struct percpu_counter tcp_orphan_count;
++DECLARE_PER_CPU(unsigned int, tcp_orphan_count);
++int tcp_orphan_count_sum(void);
++
+ void tcp_time_wait(struct sock *sk, int state, int timeo);
+ 
+ #define MAX_TCP_HEADER	L1_CACHE_ALIGN(128 + MAX_HEADER)
+@@ -290,19 +292,6 @@ static inline bool tcp_out_of_memory(struct sock *sk)
+ 
+ void sk_forced_mem_schedule(struct sock *sk, int size);
+ 
+-static inline bool tcp_too_many_orphans(struct sock *sk, int shift)
+-{
+-	struct percpu_counter *ocp = sk->sk_prot->orphan_count;
+-	int orphans = percpu_counter_read_positive(ocp);
+-
+-	if (orphans << shift > sysctl_tcp_max_orphans) {
+-		orphans = percpu_counter_sum_positive(ocp);
+-		if (orphans << shift > sysctl_tcp_max_orphans)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ bool tcp_check_oom(struct sock *sk, int shift);
+ 
+ 
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 360df454356cb..909ecf447e0fb 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -494,8 +494,9 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ 	 * CHECKSUM_NONE in __udp_gso_segment. UDP GRO indeed builds partial
+ 	 * packets in udp_gro_complete_segment. As does UDP GSO, verified by
+ 	 * udp_send_skb. But when those packets are looped in dev_loopback_xmit
+-	 * their ip_summed is set to CHECKSUM_UNNECESSARY. Reset in this
+-	 * specific case, where PARTIAL is both correct and required.
++	 * their ip_summed CHECKSUM_NONE is changed to CHECKSUM_UNNECESSARY.
++	 * Reset in this specific case, where PARTIAL is both correct and
++	 * required.
+ 	 */
+ 	if (skb->pkt_type == PACKET_LOOPBACK)
+ 		skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 779a59fe8676d..05bf735d36d69 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -68,7 +68,7 @@ struct scsi_pointer {
+ struct scsi_cmnd {
+ 	struct scsi_request req;
+ 	struct scsi_device *device;
+-	struct list_head eh_entry; /* entry for the host eh_cmd_q */
++	struct list_head eh_entry; /* entry for the host eh_abort_list/eh_cmd_q */
+ 	struct delayed_work abort_work;
+ 
+ 	struct rcu_head rcu;
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 75363707b73f9..1a02e58eb4e44 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -556,6 +556,7 @@ struct Scsi_Host {
+ 
+ 	struct mutex		scan_mutex;/* serialize scanning activity */
+ 
++	struct list_head	eh_abort_list;
+ 	struct list_head	eh_cmd_q;
+ 	struct task_struct    * ehandler;  /* Error recovery thread. */
+ 	struct completion     * eh_action; /* Wait for specific actions on the
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 4984093882372..a02f0d7515f25 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -89,15 +89,6 @@
+ /* Source PGIDs, one per physical port */
+ #define PGID_SRC			80
+ 
+-#define IFH_TAG_TYPE_C			0
+-#define IFH_TAG_TYPE_S			1
+-
+-#define IFH_REW_OP_NOOP			0x0
+-#define IFH_REW_OP_DSCP			0x1
+-#define IFH_REW_OP_ONE_STEP_PTP		0x2
+-#define IFH_REW_OP_TWO_STEP_PTP		0x3
+-#define IFH_REW_OP_ORIGIN_PTP		0x5
+-
+ #define OCELOT_NUM_TC			8
+ 
+ #define OCELOT_SPEED_2500		0
+@@ -692,16 +683,6 @@ struct ocelot_policer {
+ 	u32 burst; /* bytes */
+ };
+ 
+-struct ocelot_skb_cb {
+-	struct sk_buff *clone;
+-	unsigned int ptp_class; /* valid only for clones */
+-	u8 ptp_cmd;
+-	u8 ts_id;
+-};
+-
+-#define OCELOT_SKB_CB(skb) \
+-	((struct ocelot_skb_cb *)((skb)->cb))
+-
+ #define ocelot_read_ix(ocelot, reg, gi, ri) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))
+ #define ocelot_read_gix(ocelot, reg, gi) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi))
+ #define ocelot_read_rix(ocelot, reg, ri) __ocelot_read_ix(ocelot, reg, reg##_RSZ * (ri))
+@@ -762,7 +743,6 @@ void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
+ int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
+ void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
+ 
+-u32 ocelot_ptp_rew_op(struct sk_buff *skb);
+ #else
+ 
+ static inline bool ocelot_can_inject(struct ocelot *ocelot, int grp)
+@@ -786,10 +766,6 @@ static inline void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
+ {
+ }
+ 
+-static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+-{
+-	return 0;
+-}
+ #endif
+ 
+ /* Hardware initialization */
+diff --git a/include/sound/soc-topology.h b/include/sound/soc-topology.h
+index 4afd667e124c2..3e8a85e1e8094 100644
+--- a/include/sound/soc-topology.h
++++ b/include/sound/soc-topology.h
+@@ -188,8 +188,7 @@ int snd_soc_tplg_widget_bind_event(struct snd_soc_dapm_widget *w,
+ 
+ #else
+ 
+-static inline int snd_soc_tplg_component_remove(struct snd_soc_component *comp,
+-						u32 index)
++static inline int snd_soc_tplg_component_remove(struct snd_soc_component *comp)
+ {
+ 	return 0;
+ }
+diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
+index b3b93710eff70..010edbb7382d4 100644
+--- a/include/uapi/linux/ethtool_netlink.h
++++ b/include/uapi/linux/ethtool_netlink.h
+@@ -405,7 +405,9 @@ enum {
+ 	ETHTOOL_A_PAUSE_STAT_TX_FRAMES,
+ 	ETHTOOL_A_PAUSE_STAT_RX_FRAMES,
+ 
+-	/* add new constants above here */
++	/* add new constants above here
++	 * adjust ETHTOOL_PAUSE_STAT_CNT if adding non-stats!
++	 */
+ 	__ETHTOOL_A_PAUSE_STAT_CNT,
+ 	ETHTOOL_A_PAUSE_STAT_MAX = (__ETHTOOL_A_PAUSE_STAT_CNT - 1)
+ };
+diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
+index e709ae8235e7f..ff6ccbc6efe96 100644
+--- a/include/uapi/linux/pci_regs.h
++++ b/include/uapi/linux/pci_regs.h
+@@ -504,6 +504,12 @@
+ #define  PCI_EXP_DEVCTL_URRE	0x0008	/* Unsupported Request Reporting En. */
+ #define  PCI_EXP_DEVCTL_RELAX_EN 0x0010 /* Enable relaxed ordering */
+ #define  PCI_EXP_DEVCTL_PAYLOAD	0x00e0	/* Max_Payload_Size */
++#define  PCI_EXP_DEVCTL_PAYLOAD_128B 0x0000 /* 128 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_256B 0x0020 /* 256 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_512B 0x0040 /* 512 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_1024B 0x0060 /* 1024 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_2048B 0x0080 /* 2048 Bytes */
++#define  PCI_EXP_DEVCTL_PAYLOAD_4096B 0x00a0 /* 4096 Bytes */
+ #define  PCI_EXP_DEVCTL_EXT_TAG	0x0100	/* Extended Tag Field Enable */
+ #define  PCI_EXP_DEVCTL_PHANTOM	0x0200	/* Phantom Functions Enable */
+ #define  PCI_EXP_DEVCTL_AUX_PME	0x0400	/* Auxiliary Power PM Enable */
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 4c0c0146f956c..2340d11737cca 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -524,6 +524,7 @@ int bpf_jit_enable   __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
+ int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
+ int bpf_jit_harden   __read_mostly;
+ long bpf_jit_limit   __read_mostly;
++long bpf_jit_limit_max __read_mostly;
+ 
+ static void
+ bpf_prog_ksym_set_addr(struct bpf_prog *prog)
+@@ -817,7 +818,8 @@ u64 __weak bpf_jit_alloc_exec_limit(void)
+ static int __init bpf_jit_charge_init(void)
+ {
+ 	/* Only used as heuristic here to derive limit. */
+-	bpf_jit_limit = min_t(u64, round_up(bpf_jit_alloc_exec_limit() >> 2,
++	bpf_jit_limit_max = bpf_jit_alloc_exec_limit();
++	bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 2,
+ 					    PAGE_SIZE), LONG_MAX);
+ 	return 0;
+ }
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 28a3630c48ee1..9587e5ebddaa3 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -579,11 +579,13 @@ static void notrace update_prog_stats(struct bpf_prog *prog,
+ 	     * Hence check that 'start' is valid.
+ 	     */
+ 	    start > NO_START_TIME) {
++		unsigned long flags;
++
+ 		stats = this_cpu_ptr(prog->stats);
+-		u64_stats_update_begin(&stats->syncp);
++		flags = u64_stats_update_begin_irqsave(&stats->syncp);
+ 		stats->cnt++;
+ 		stats->nsecs += sched_clock() - start;
+-		u64_stats_update_end(&stats->syncp);
++		u64_stats_update_end_irqrestore(&stats->syncp, flags);
+ 	}
+ }
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 592b9b68cbd93..4dd9cedfc453d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1397,12 +1397,12 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
+ 
+ static bool __reg64_bound_s32(s64 a)
+ {
+-	return a > S32_MIN && a < S32_MAX;
++	return a >= S32_MIN && a <= S32_MAX;
+ }
+ 
+ static bool __reg64_bound_u32(u64 a)
+ {
+-	return a > U32_MIN && a < U32_MAX;
++	return a >= U32_MIN && a <= U32_MAX;
+ }
+ 
+ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 38750c385dd2c..bfbed4c99f166 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1726,6 +1726,7 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 	struct cgroup *dcgrp = &dst_root->cgrp;
+ 	struct cgroup_subsys *ss;
+ 	int ssid, i, ret;
++	u16 dfl_disable_ss_mask = 0;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+ 
+@@ -1742,8 +1743,28 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 		/* can't move between two non-dummy roots either */
+ 		if (ss->root != &cgrp_dfl_root && dst_root != &cgrp_dfl_root)
+ 			return -EBUSY;
++
++		/*
++		 * Collect ssid's that need to be disabled from default
++		 * hierarchy.
++		 */
++		if (ss->root == &cgrp_dfl_root)
++			dfl_disable_ss_mask |= 1 << ssid;
++
+ 	} while_each_subsys_mask();
+ 
++	if (dfl_disable_ss_mask) {
++		struct cgroup *scgrp = &cgrp_dfl_root.cgrp;
++
++		/*
++		 * Controllers from default hierarchy that need to be rebound
++		 * are all disabled together in one go.
++		 */
++		cgrp_dfl_root.subsys_mask &= ~dfl_disable_ss_mask;
++		WARN_ON(cgroup_apply_control(scgrp));
++		cgroup_finalize_control(scgrp, 0);
++	}
++
+ 	do_each_subsys_mask(ss, ssid, ss_mask) {
+ 		struct cgroup_root *src_root = ss->root;
+ 		struct cgroup *scgrp = &src_root->cgrp;
+@@ -1752,10 +1773,12 @@ int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask)
+ 
+ 		WARN_ON(!css || cgroup_css(dcgrp, ss));
+ 
+-		/* disable from the source */
+-		src_root->subsys_mask &= ~(1 << ssid);
+-		WARN_ON(cgroup_apply_control(scgrp));
+-		cgroup_finalize_control(scgrp, 0);
++		if (src_root != &cgrp_dfl_root) {
++			/* disable from the source */
++			src_root->subsys_mask &= ~(1 << ssid);
++			WARN_ON(cgroup_apply_control(scgrp));
++			cgroup_finalize_control(scgrp, 0);
++		}
+ 
+ 		/* rebind */
+ 		RCU_INIT_POINTER(scgrp->subsys[ssid], NULL);
+@@ -6561,74 +6584,51 @@ int cgroup_parse_float(const char *input, unsigned dec_shift, s64 *v)
+  */
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ 
+-#if defined(CONFIG_CGROUP_NET_PRIO) || defined(CONFIG_CGROUP_NET_CLASSID)
+-
+-DEFINE_SPINLOCK(cgroup_sk_update_lock);
+-static bool cgroup_sk_alloc_disabled __read_mostly;
+-
+-void cgroup_sk_alloc_disable(void)
+-{
+-	if (cgroup_sk_alloc_disabled)
+-		return;
+-	pr_info("cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation\n");
+-	cgroup_sk_alloc_disabled = true;
+-}
+-
+-#else
+-
+-#define cgroup_sk_alloc_disabled	false
+-
+-#endif
+-
+ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
+ {
+-	if (cgroup_sk_alloc_disabled) {
+-		skcd->no_refcnt = 1;
+-		return;
+-	}
+-
+-	/* Don't associate the sock with unrelated interrupted task's cgroup. */
+-	if (in_interrupt())
+-		return;
++	struct cgroup *cgroup;
+ 
+ 	rcu_read_lock();
++	/* Don't associate the sock with unrelated interrupted task's cgroup. */
++	if (in_interrupt()) {
++		cgroup = &cgrp_dfl_root.cgrp;
++		cgroup_get(cgroup);
++		goto out;
++	}
+ 
+ 	while (true) {
+ 		struct css_set *cset;
+ 
+ 		cset = task_css_set(current);
+ 		if (likely(cgroup_tryget(cset->dfl_cgrp))) {
+-			skcd->val = (unsigned long)cset->dfl_cgrp;
+-			cgroup_bpf_get(cset->dfl_cgrp);
++			cgroup = cset->dfl_cgrp;
+ 			break;
+ 		}
+ 		cpu_relax();
+ 	}
+-
++out:
++	skcd->cgroup = cgroup;
++	cgroup_bpf_get(cgroup);
+ 	rcu_read_unlock();
+ }
+ 
+ void cgroup_sk_clone(struct sock_cgroup_data *skcd)
+ {
+-	if (skcd->val) {
+-		if (skcd->no_refcnt)
+-			return;
+-		/*
+-		 * We might be cloning a socket which is left in an empty
+-		 * cgroup and the cgroup might have already been rmdir'd.
+-		 * Don't use cgroup_get_live().
+-		 */
+-		cgroup_get(sock_cgroup_ptr(skcd));
+-		cgroup_bpf_get(sock_cgroup_ptr(skcd));
+-	}
++	struct cgroup *cgrp = sock_cgroup_ptr(skcd);
++
++	/*
++	 * We might be cloning a socket which is left in an empty
++	 * cgroup and the cgroup might have already been rmdir'd.
++	 * Don't use cgroup_get_live().
++	 */
++	cgroup_get(cgrp);
++	cgroup_bpf_get(cgrp);
+ }
+ 
+ void cgroup_sk_free(struct sock_cgroup_data *skcd)
+ {
+ 	struct cgroup *cgrp = sock_cgroup_ptr(skcd);
+ 
+-	if (skcd->no_refcnt)
+-		return;
+ 	cgroup_bpf_put(cgrp);
+ 	cgroup_put(cgrp);
+ }
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index b264ab5652ba9..1486768f23185 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -433,8 +433,6 @@ static void root_cgroup_cputime(struct task_cputime *cputime)
+ 		cputime->sum_exec_runtime += user;
+ 		cputime->sum_exec_runtime += sys;
+ 		cputime->sum_exec_runtime += cpustat[CPUTIME_STEAL];
+-		cputime->sum_exec_runtime += cpustat[CPUTIME_GUEST];
+-		cputime->sum_exec_runtime += cpustat[CPUTIME_GUEST_NICE];
+ 	}
+ }
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index cbba21e3a58df..32a545c86ad14 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2206,6 +2206,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 	p->pdeath_signal = 0;
+ 	INIT_LIST_HEAD(&p->thread_group);
+ 	p->task_works = NULL;
++	clear_posix_cputimers_work(p);
+ 
+ #ifdef CONFIG_KRETPROBES
+ 	p->kretprobe_instances.first = NULL;
+@@ -2331,7 +2332,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 	write_unlock_irq(&tasklist_lock);
+ 
+ 	proc_fork_connector(p);
+-	sched_post_fork(p);
++	sched_post_fork(p, args);
+ 	cgroup_post_fork(p, args);
+ 	perf_event_fork(p);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 790a573bbe00c..1cf8bca1ea861 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2809,13 +2809,12 @@ static const struct file_operations fops_kp = {
+ static int __init debugfs_kprobe_init(void)
+ {
+ 	struct dentry *dir;
+-	unsigned int value = 1;
+ 
+ 	dir = debugfs_create_dir("kprobes", NULL);
+ 
+ 	debugfs_create_file("list", 0400, dir, NULL, &kprobes_fops);
+ 
+-	debugfs_create_file("enabled", 0600, dir, &value, &fops_kp);
++	debugfs_create_file("enabled", 0600, dir, NULL, &fops_kp);
+ 
+ 	debugfs_create_file("blacklist", 0400, dir, NULL,
+ 			    &kprobe_blacklist_fops);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index bf1c00c881e48..d624231eab2bb 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -888,7 +888,7 @@ look_up_lock_class(const struct lockdep_map *lock, unsigned int subclass)
+ 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+ 		return NULL;
+ 
+-	hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
++	hlist_for_each_entry_rcu_notrace(class, hash_head, hash_entry) {
+ 		if (class->key == key) {
+ 			/*
+ 			 * Huh! same key, different name? Did someone trample
+@@ -5366,7 +5366,7 @@ int __lock_is_held(const struct lockdep_map *lock, int read)
+ 		struct held_lock *hlock = curr->held_locks + i;
+ 
+ 		if (match_held_lock(hlock, lock)) {
+-			if (read == -1 || hlock->read == read)
++			if (read == -1 || !!hlock->read == read)
+ 				return LOCK_STATE_HELD;
+ 
+ 			return LOCK_STATE_NOT_HELD;
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index 16bfbb10c74d7..1d42c18736380 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -576,6 +576,24 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
+ 	return true;
+ }
+ 
++/*
++ * The rwsem_spin_on_owner() function returns the following 4 values
++ * depending on the lock owner state.
++ *   OWNER_NULL  : owner is currently NULL
++ *   OWNER_WRITER: when owner changes and is a writer
++ *   OWNER_READER: when owner changes and the new owner may be a reader.
++ *   OWNER_NONSPINNABLE:
++ *		   when optimistic spinning has to stop because either the
++ *		   owner stops running, is unknown, or its timeslice has
++ *		   been used up.
++ */
++enum owner_state {
++	OWNER_NULL		= 1 << 0,
++	OWNER_WRITER		= 1 << 1,
++	OWNER_READER		= 1 << 2,
++	OWNER_NONSPINNABLE	= 1 << 3,
++};
++
+ #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
+ /*
+  * Try to acquire write lock before the writer has been put on wait queue.
+@@ -631,23 +649,6 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
+ 	return ret;
+ }
+ 
+-/*
+- * The rwsem_spin_on_owner() function returns the following 4 values
+- * depending on the lock owner state.
+- *   OWNER_NULL  : owner is currently NULL
+- *   OWNER_WRITER: when owner changes and is a writer
+- *   OWNER_READER: when owner changes and the new owner may be a reader.
+- *   OWNER_NONSPINNABLE:
+- *		   when optimistic spinning has to stop because either the
+- *		   owner stops running, is unknown, or its timeslice has
+- *		   been used up.
+- */
+-enum owner_state {
+-	OWNER_NULL		= 1 << 0,
+-	OWNER_WRITER		= 1 << 1,
+-	OWNER_READER		= 1 << 2,
+-	OWNER_NONSPINNABLE	= 1 << 3,
+-};
+ #define OWNER_SPINNABLE		(OWNER_NULL | OWNER_WRITER | OWNER_READER)
+ 
+ static inline enum owner_state
+@@ -877,12 +878,11 @@ static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem)
+ 
+ static inline void clear_nonspinnable(struct rw_semaphore *sem) { }
+ 
+-static inline int
++static inline enum owner_state
+ rwsem_spin_on_owner(struct rw_semaphore *sem)
+ {
+-	return 0;
++	return OWNER_NONSPINNABLE;
+ }
+-#define OWNER_NULL	1
+ #endif
+ 
+ /*
+@@ -1094,9 +1094,16 @@ wait:
+ 		 * In this case, we attempt to acquire the lock again
+ 		 * without sleeping.
+ 		 */
+-		if (wstate == WRITER_HANDOFF &&
+-		    rwsem_spin_on_owner(sem) == OWNER_NULL)
+-			goto trylock_again;
++		if (wstate == WRITER_HANDOFF) {
++			enum owner_state owner_state;
++
++			preempt_disable();
++			owner_state = rwsem_spin_on_owner(sem);
++			preempt_enable();
++
++			if (owner_state == OWNER_NULL)
++				goto trylock_again;
++		}
+ 
+ 		/* Block until there are no active lockers. */
+ 		for (;;) {
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index a332ccd829e24..97e62469a6b32 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -107,8 +107,7 @@ static void em_debug_remove_pd(struct device *dev) {}
+ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 				int nr_states, struct em_data_callback *cb)
+ {
+-	unsigned long opp_eff, prev_opp_eff = ULONG_MAX;
+-	unsigned long power, freq, prev_freq = 0;
++	unsigned long power, freq, prev_freq = 0, prev_cost = ULONG_MAX;
+ 	struct em_perf_state *table;
+ 	int i, ret;
+ 	u64 fmax;
+@@ -153,27 +152,21 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 
+ 		table[i].power = power;
+ 		table[i].frequency = prev_freq = freq;
+-
+-		/*
+-		 * The hertz/watts efficiency ratio should decrease as the
+-		 * frequency grows on sane platforms. But this isn't always
+-		 * true in practice so warn the user if a higher OPP is more
+-		 * power efficient than a lower one.
+-		 */
+-		opp_eff = freq / power;
+-		if (opp_eff >= prev_opp_eff)
+-			dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n",
+-					i, i - 1);
+-		prev_opp_eff = opp_eff;
+ 	}
+ 
+ 	/* Compute the cost of each performance state. */
+ 	fmax = (u64) table[nr_states - 1].frequency;
+-	for (i = 0; i < nr_states; i++) {
++	for (i = nr_states - 1; i >= 0; i--) {
+ 		unsigned long power_res = em_scale_power(table[i].power);
+ 
+ 		table[i].cost = div64_u64(fmax * power_res,
+ 					  table[i].frequency);
++		if (table[i].cost >= prev_cost) {
++			dev_dbg(dev, "EM: OPP:%lu is inefficient\n",
++				table[i].frequency);
++		} else {
++			prev_cost = table[i].cost;
++		}
+ 	}
+ 
+ 	pd->table = table;
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index 3cb89baebc796..f3a1086f7cdb2 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -299,7 +299,7 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
+ 	return error;
+ }
+ 
+-static blk_status_t hib_wait_io(struct hib_bio_batch *hb)
++static int hib_wait_io(struct hib_bio_batch *hb)
+ {
+ 	/*
+ 	 * We are relying on the behavior of blk_plug that a thread with
+@@ -1521,9 +1521,10 @@ end:
+ int swsusp_check(void)
+ {
+ 	int error;
++	void *holder;
+ 
+ 	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
+-					    FMODE_READ, NULL);
++					    FMODE_READ | FMODE_EXCL, &holder);
+ 	if (!IS_ERR(hib_resume_bdev)) {
+ 		set_blocksize(hib_resume_bdev, PAGE_SIZE);
+ 		clear_page(swsusp_header);
+@@ -1545,7 +1546,7 @@ int swsusp_check(void)
+ 
+ put:
+ 		if (error)
+-			blkdev_put(hib_resume_bdev, FMODE_READ);
++			blkdev_put(hib_resume_bdev, FMODE_READ | FMODE_EXCL);
+ 		else
+ 			pr_debug("Image signature found, resuming\n");
+ 	} else {
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 40ef5417d9545..d2ef535530b10 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1432,28 +1432,34 @@ static void rcutorture_one_extend(int *readstate, int newstate,
+ 	/* First, put new protection in place to avoid critical-section gap. */
+ 	if (statesnew & RCUTORTURE_RDR_BH)
+ 		local_bh_disable();
++	if (statesnew & RCUTORTURE_RDR_RBH)
++		rcu_read_lock_bh();
+ 	if (statesnew & RCUTORTURE_RDR_IRQ)
+ 		local_irq_disable();
+ 	if (statesnew & RCUTORTURE_RDR_PREEMPT)
+ 		preempt_disable();
+-	if (statesnew & RCUTORTURE_RDR_RBH)
+-		rcu_read_lock_bh();
+ 	if (statesnew & RCUTORTURE_RDR_SCHED)
+ 		rcu_read_lock_sched();
+ 	if (statesnew & RCUTORTURE_RDR_RCU)
+ 		idxnew = cur_ops->readlock() << RCUTORTURE_RDR_SHIFT;
+ 
+-	/* Next, remove old protection, irq first due to bh conflict. */
++	/*
++	 * Next, remove old protection, in decreasing order of strength
++	 * to avoid unlock paths that aren't safe in the stronger
++	 * context. Namely: BH can not be enabled with disabled interrupts.
++	 * Additionally PREEMPT_RT requires that BH is enabled in preemptible
++	 * context.
++	 */
+ 	if (statesold & RCUTORTURE_RDR_IRQ)
+ 		local_irq_enable();
+-	if (statesold & RCUTORTURE_RDR_BH)
+-		local_bh_enable();
+ 	if (statesold & RCUTORTURE_RDR_PREEMPT)
+ 		preempt_enable();
+-	if (statesold & RCUTORTURE_RDR_RBH)
+-		rcu_read_unlock_bh();
+ 	if (statesold & RCUTORTURE_RDR_SCHED)
+ 		rcu_read_unlock_sched();
++	if (statesold & RCUTORTURE_RDR_BH)
++		local_bh_enable();
++	if (statesold & RCUTORTURE_RDR_RBH)
++		rcu_read_unlock_bh();
+ 	if (statesold & RCUTORTURE_RDR_RCU) {
+ 		bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
+ 
+@@ -1496,6 +1502,9 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
+ 	int mask = rcutorture_extend_mask_max();
+ 	unsigned long randmask1 = torture_random(trsp) >> 8;
+ 	unsigned long randmask2 = randmask1 >> 3;
++	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
++	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
++	unsigned long bhs = RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
+ 
+ 	WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
+ 	/* Mostly only one bit (need preemption!), sometimes lots of bits. */
+@@ -1503,11 +1512,26 @@ rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
+ 		mask = mask & randmask2;
+ 	else
+ 		mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
+-	/* Can't enable bh w/irq disabled. */
+-	if ((mask & RCUTORTURE_RDR_IRQ) &&
+-	    ((!(mask & RCUTORTURE_RDR_BH) && (oldmask & RCUTORTURE_RDR_BH)) ||
+-	     (!(mask & RCUTORTURE_RDR_RBH) && (oldmask & RCUTORTURE_RDR_RBH))))
+-		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
++
++	/*
++	 * Can't enable bh w/irq disabled.
++	 */
++	if (mask & RCUTORTURE_RDR_IRQ)
++		mask |= oldmask & bhs;
++
++	/*
++	 * Ideally these sequences would be detected in debug builds
++	 * (regardless of RT), but until then don't stop testing
++	 * them on non-RT.
++	 */
++	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++		/* Can't modify BH in atomic context */
++		if (oldmask & preempts_irq)
++			mask &= ~bhs;
++		if ((oldmask | mask) & preempts_irq)
++			mask |= oldmask & bhs;
++	}
++
+ 	return mask ?: RCUTORTURE_RDR_RCU;
+ }
+ 
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 8536c55df5142..fd3909c59b6a4 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -197,6 +197,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ 	 * This loop is terminated by the system going down.  ;-)
+ 	 */
+ 	for (;;) {
++		set_tasks_gp_state(rtp, RTGS_WAIT_CBS);
+ 
+ 		/* Pick up any new callbacks. */
+ 		raw_spin_lock_irqsave(&rtp->cbs_lock, flags);
+@@ -236,8 +237,6 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ 		}
+ 		/* Paranoid sleep to keep this from entering a tight loop */
+ 		schedule_timeout_idle(rtp->gp_sleep);
+-
+-		set_tasks_gp_state(rtp, RTGS_WAIT_CBS);
+ 	}
+ }
+ 
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 2796084ef85a5..454b516ea566e 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -760,7 +760,7 @@ static void sync_sched_exp_online_cleanup(int cpu)
+ 	my_cpu = get_cpu();
+ 	/* Quiescent state either not needed or already requested, leave. */
+ 	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask) ||
+-	    __this_cpu_read(rcu_data.cpu_no_qs.b.exp)) {
++	    rdp->cpu_no_qs.b.exp) {
+ 		put_cpu();
+ 		return;
+ 	}
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 6ce104242b23d..591e038e7670d 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2964,7 +2964,7 @@ static void rcu_bind_gp_kthread(void)
+ }
+ 
+ /* Record the current task on dyntick-idle entry. */
+-static void noinstr rcu_dynticks_task_enter(void)
++static __always_inline void rcu_dynticks_task_enter(void)
+ {
+ #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
+ 	WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id());
+@@ -2972,7 +2972,7 @@ static void noinstr rcu_dynticks_task_enter(void)
+ }
+ 
+ /* Record no current task on dyntick-idle exit. */
+-static void noinstr rcu_dynticks_task_exit(void)
++static __always_inline void rcu_dynticks_task_exit(void)
+ {
+ #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
+ 	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
+@@ -2980,7 +2980,7 @@ static void noinstr rcu_dynticks_task_exit(void)
+ }
+ 
+ /* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+-static void rcu_dynticks_task_trace_enter(void)
++static __always_inline void rcu_dynticks_task_trace_enter(void)
+ {
+ #ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+@@ -2989,7 +2989,7 @@ static void rcu_dynticks_task_trace_enter(void)
+ }
+ 
+ /* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+-static void rcu_dynticks_task_trace_exit(void)
++static __always_inline void rcu_dynticks_task_trace_exit(void)
+ {
+ #ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e165d28cf73be..5ea5b6d8b2a94 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1960,6 +1960,25 @@ bool sched_task_on_rq(struct task_struct *p)
+ 	return task_on_rq_queued(p);
+ }
+ 
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
+ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
+ {
+ 	if (!(flags & ENQUEUE_NOCLOCK))
+@@ -4096,8 +4115,6 @@ int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
+  */
+ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+-	unsigned long flags;
+-
+ 	__sched_fork(clone_flags, p);
+ 	/*
+ 	 * We mark the process as NEW here. This guarantees that
+@@ -4143,24 +4160,6 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 
+ 	init_entity_runnable_average(&p->se);
+ 
+-	/*
+-	 * The child is not yet in the pid-hash so no cgroup attach races,
+-	 * and the cgroup is pinned to this child due to cgroup_fork()
+-	 * is ran before sched_fork().
+-	 *
+-	 * Silence PROVE_RCU.
+-	 */
+-	raw_spin_lock_irqsave(&p->pi_lock, flags);
+-	rseq_migrate(p);
+-	/*
+-	 * We're setting the CPU for the first time, we don't migrate,
+-	 * so use __set_task_cpu().
+-	 */
+-	__set_task_cpu(p, smp_processor_id());
+-	if (p->sched_class->task_fork)
+-		p->sched_class->task_fork(p);
+-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+-
+ #ifdef CONFIG_SCHED_INFO
+ 	if (likely(sched_info_on()))
+ 		memset(&p->sched_info, 0, sizeof(p->sched_info));
+@@ -4176,8 +4175,29 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 	return 0;
+ }
+ 
+-void sched_post_fork(struct task_struct *p)
++void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+ {
++	unsigned long flags;
++#ifdef CONFIG_CGROUP_SCHED
++	struct task_group *tg;
++#endif
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++#ifdef CONFIG_CGROUP_SCHED
++	tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
++			  struct task_group, css);
++	p->sched_task_group = autogroup_task_group(p, tg);
++#endif
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	if (p->sched_class->task_fork)
++		p->sched_class->task_fork(p);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
+ 	uclamp_post_fork(p);
+ }
+ 
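The new core get_wchan() above is a lock-then-recheck pattern: p->pi_lock prevents a wakeup, so once the state is confirmed blocked it stays blocked while __get_wchan() unwinds the stack. The same shape in a user-space pthread sketch (names are illustrative, not kernel API):

#include <pthread.h>
#include <stdio.h>

struct task {
	pthread_mutex_t lock;    /* stands in for p->pi_lock */
	int blocked;             /* 1 = blocked, 0 = runnable */
	unsigned long wchan;     /* only stable while blocked */
};

static unsigned long get_wchan_sketch(struct task *t)
{
	unsigned long ip = 0;

	/* Only sample wchan if the task is blocked and the lock
	 * guarantees it stays that way for the duration. */
	pthread_mutex_lock(&t->lock);
	if (t->blocked)
		ip = t->wchan;
	pthread_mutex_unlock(&t->lock);

	return ip;
}

int main(void)
{
	struct task t = { PTHREAD_MUTEX_INITIALIZER, 1, 0xdeadbeefUL };

	printf("wchan: %#lx\n", get_wchan_sketch(&t));
	return 0;
}
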
+diff --git a/kernel/scs.c b/kernel/scs.c
+index e2a71fc82fa06..579841be88646 100644
+--- a/kernel/scs.c
++++ b/kernel/scs.c
+@@ -78,6 +78,7 @@ void scs_free(void *s)
+ 		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+ 			return;
+ 
++	kasan_unpoison_vmalloc(s, SCS_SIZE);
+ 	vfree_atomic(s);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 13d2505a14a0e..59af8e2f40081 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2109,15 +2109,6 @@ static inline bool may_ptrace_stop(void)
+ 	return true;
+ }
+ 
+-/*
+- * Return non-zero if there is a SIGKILL that should be waking us up.
+- * Called with the siglock held.
+- */
+-static bool sigkill_pending(struct task_struct *tsk)
+-{
+-	return sigismember(&tsk->pending.signal, SIGKILL) ||
+-	       sigismember(&tsk->signal->shared_pending.signal, SIGKILL);
+-}
+ 
+ /*
+  * This must be called with current->sighand->siglock held.
+@@ -2144,17 +2135,16 @@ static void ptrace_stop(int exit_code, int why, int clear_code, kernel_siginfo_t
+ 		 * calling arch_ptrace_stop, so we must release it now.
+ 		 * To preserve proper semantics, we must do this before
+ 		 * any signal bookkeeping like checking group_stop_count.
+-		 * Meanwhile, a SIGKILL could come in before we retake the
+-		 * siglock.  That must prevent us from sleeping in TASK_TRACED.
+-		 * So after regaining the lock, we must check for SIGKILL.
+ 		 */
+ 		spin_unlock_irq(&current->sighand->siglock);
+ 		arch_ptrace_stop(exit_code, info);
+ 		spin_lock_irq(&current->sighand->siglock);
+-		if (sigkill_pending(current))
+-			return;
+ 	}
+ 
++	/*
++	 * schedule() will not sleep if there is a pending signal that
++	 * can awaken the task.
++	 */
+ 	set_special_state(TASK_TRACED);
+ 
+ 	/*
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 517be7fd175ef..58f7b5fe0f0da 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1100,14 +1100,29 @@ static void posix_cpu_timers_work(struct callback_head *work)
+ 	handle_posix_cpu_timers(current);
+ }
+ 
++/*
++ * Clear existing posix CPU timers task work.
++ */
++void clear_posix_cputimers_work(struct task_struct *p)
++{
++	/*
++	 * A copied work entry from the old task is not meaningful, clear it.
++	 * N.B. init_task_work will not do this.
++	 */
++	memset(&p->posix_cputimers_work.work, 0,
++	       sizeof(p->posix_cputimers_work.work));
++	init_task_work(&p->posix_cputimers_work.work,
++		       posix_cpu_timers_work);
++	p->posix_cputimers_work.scheduled = false;
++}
++
+ /*
+  * Initialize posix CPU timers task work in init task. Out of line to
+  * keep the callback static and to avoid header recursion hell.
+  */
+ void __init posix_cputimers_init_work(void)
+ {
+-	init_task_work(&current->posix_cputimers_work.work,
+-		       posix_cpu_timers_work);
++	clear_posix_cputimers_work(current);
+ }
+ 
+ /*
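clear_posix_cputimers_work() exists because fork duplicates the whole task_struct, including a possibly-queued task_work entry, and init_task_work() only sets the function pointer, leaving the copied linkage behind. A minimal sketch of why the memset matters, with an illustrative struct rather than the kernel's:

#include <stdio.h>
#include <string.h>

struct work { void (*func)(void); void *next; };  /* illustrative */
struct task { struct work w; int scheduled; };

static void cb(void) { }

/* Mirrors init_task_work(): sets the callback, nothing else. */
static void init_work(struct work *w, void (*f)(void)) { w->func = f; }

static void clear_work(struct task *t)
{
	/* A work entry copied from the parent is meaningless: wipe it
	 * entirely, then re-initialize. init_work() alone would leave
	 * the stale ->next link in place. */
	memset(&t->w, 0, sizeof(t->w));
	init_work(&t->w, cb);
	t->scheduled = 0;
}

int main(void)
{
	struct task parent = { { cb, (void *)0x1 }, 1 };  /* stale state */
	struct task child = parent;                       /* "fork" copy */

	clear_work(&child);
	printf("next=%p scheduled=%d\n", child.w.next, child.scheduled);
	return 0;
}
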
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 06700d5b11717..bcc872f3300b0 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -988,8 +988,9 @@ static __init void ftrace_profile_tracefs(struct dentry *d_tracer)
+ 		}
+ 	}
+ 
+-	entry = tracefs_create_file("function_profile_enabled", 0644,
+-				    d_tracer, NULL, &ftrace_profile_fops);
++	entry = tracefs_create_file("function_profile_enabled",
++				    TRACE_MODE_WRITE, d_tracer, NULL,
++				    &ftrace_profile_fops);
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'function_profile_enabled' entry\n");
+ }
+@@ -6109,10 +6110,10 @@ void ftrace_create_filter_files(struct ftrace_ops *ops,
+ 				struct dentry *parent)
+ {
+ 
+-	trace_create_file("set_ftrace_filter", 0644, parent,
++	trace_create_file("set_ftrace_filter", TRACE_MODE_WRITE, parent,
+ 			  ops, &ftrace_filter_fops);
+ 
+-	trace_create_file("set_ftrace_notrace", 0644, parent,
++	trace_create_file("set_ftrace_notrace", TRACE_MODE_WRITE, parent,
+ 			  ops, &ftrace_notrace_fops);
+ }
+ 
+@@ -6139,19 +6140,19 @@ void ftrace_destroy_filter_files(struct ftrace_ops *ops)
+ static __init int ftrace_init_dyn_tracefs(struct dentry *d_tracer)
+ {
+ 
+-	trace_create_file("available_filter_functions", 0444,
++	trace_create_file("available_filter_functions", TRACE_MODE_READ,
+ 			d_tracer, NULL, &ftrace_avail_fops);
+ 
+-	trace_create_file("enabled_functions", 0444,
++	trace_create_file("enabled_functions", TRACE_MODE_READ,
+ 			d_tracer, NULL, &ftrace_enabled_fops);
+ 
+ 	ftrace_create_filter_files(&global_ops, d_tracer);
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+-	trace_create_file("set_graph_function", 0644, d_tracer,
++	trace_create_file("set_graph_function", TRACE_MODE_WRITE, d_tracer,
+ 				    NULL,
+ 				    &ftrace_graph_fops);
+-	trace_create_file("set_graph_notrace", 0644, d_tracer,
++	trace_create_file("set_graph_notrace", TRACE_MODE_WRITE, d_tracer,
+ 				    NULL,
+ 				    &ftrace_graph_notrace_fops);
+ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+@@ -7494,10 +7495,10 @@ static const struct file_operations ftrace_no_pid_fops = {
+ 
+ void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ {
+-	trace_create_file("set_ftrace_pid", 0644, d_tracer,
++	trace_create_file("set_ftrace_pid", TRACE_MODE_WRITE, d_tracer,
+ 			    tr, &ftrace_pid_fops);
+-	trace_create_file("set_ftrace_notrace_pid", 0644, d_tracer,
+-			    tr, &ftrace_no_pid_fops);
++	trace_create_file("set_ftrace_notrace_pid", TRACE_MODE_WRITE,
++			  d_tracer, tr, &ftrace_no_pid_fops);
+ }
+ 
+ void __init ftrace_init_tracefs_toplevel(struct trace_array *tr,
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index e592d1df6f888..f4928c7f4a185 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5233,6 +5233,9 @@ void ring_buffer_reset(struct trace_buffer *buffer)
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	int cpu;
+ 
++	/* prevent another thread from changing buffer sizes */
++	mutex_lock(&buffer->mutex);
++
+ 	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+ 
+@@ -5251,6 +5254,8 @@ void ring_buffer_reset(struct trace_buffer *buffer)
+ 		atomic_dec(&cpu_buffer->record_disabled);
+ 		atomic_dec(&cpu_buffer->resize_disabled);
+ 	}
++
++	mutex_unlock(&buffer->mutex);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset);
+ 
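ring_buffer_reset() walks every per-CPU buffer in several steps, so the hunk above holds buffer->mutex across the whole walk; otherwise a concurrent resize could change the geometry between iterations. The shape in a stand-alone pthread sketch (illustrative names):

#include <pthread.h>

#define NR_CPUS 4

struct buffer {
	pthread_mutex_t mutex;   /* stands in for buffer->mutex */
	int size[NR_CPUS];
};

/* The lock spans the whole loop: reset must see one consistent
 * geometry, not a per-iteration snapshot. */
static void buffer_reset(struct buffer *b)
{
	pthread_mutex_lock(&b->mutex);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		b->size[cpu] = 0;
	pthread_mutex_unlock(&b->mutex);
}

static void buffer_resize(struct buffer *b, int size)
{
	pthread_mutex_lock(&b->mutex);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		b->size[cpu] = size;
	pthread_mutex_unlock(&b->mutex);
}

int main(void)
{
	struct buffer b = { PTHREAD_MUTEX_INITIALIZER, { 0 } };

	buffer_resize(&b, 16);
	buffer_reset(&b);
	return 0;
}
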
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index f4aa00a58f3c6..1b5946a3a8236 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1714,7 +1714,8 @@ static void trace_create_maxlat_file(struct trace_array *tr,
+ {
+ 	INIT_WORK(&tr->fsnotify_work, latency_fsnotify_workfn);
+ 	init_irq_work(&tr->fsnotify_irqwork, latency_fsnotify_workfn_irq);
+-	tr->d_max_latency = trace_create_file("tracing_max_latency", 0644,
++	tr->d_max_latency = trace_create_file("tracing_max_latency",
++					      TRACE_MODE_WRITE,
+ 					      d_tracer, &tr->max_latency,
+ 					      &tracing_max_lat_fops);
+ }
+@@ -1748,8 +1749,8 @@ void latency_fsnotify(struct trace_array *tr)
+ 	|| defined(CONFIG_OSNOISE_TRACER)
+ 
+ #define trace_create_maxlat_file(tr, d_tracer)				\
+-	trace_create_file("tracing_max_latency", 0644, d_tracer,	\
+-			  &tr->max_latency, &tracing_max_lat_fops)
++	trace_create_file("tracing_max_latency", TRACE_MODE_WRITE,	\
++			  d_tracer, &tr->max_latency, &tracing_max_lat_fops)
+ 
+ #else
+ #define trace_create_maxlat_file(tr, d_tracer)	 do { } while (0)
+@@ -6061,7 +6062,7 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
+ 
+ static void trace_create_eval_file(struct dentry *d_tracer)
+ {
+-	trace_create_file("eval_map", 0444, d_tracer,
++	trace_create_file("eval_map", TRACE_MODE_READ, d_tracer,
+ 			  NULL, &tracing_eval_map_fops);
+ }
+ 
+@@ -8574,27 +8575,27 @@ tracing_init_tracefs_percpu(struct trace_array *tr, long cpu)
+ 	}
+ 
+ 	/* per cpu trace_pipe */
+-	trace_create_cpu_file("trace_pipe", 0444, d_cpu,
++	trace_create_cpu_file("trace_pipe", TRACE_MODE_READ, d_cpu,
+ 				tr, cpu, &tracing_pipe_fops);
+ 
+ 	/* per cpu trace */
+-	trace_create_cpu_file("trace", 0644, d_cpu,
++	trace_create_cpu_file("trace", TRACE_MODE_WRITE, d_cpu,
+ 				tr, cpu, &tracing_fops);
+ 
+-	trace_create_cpu_file("trace_pipe_raw", 0444, d_cpu,
++	trace_create_cpu_file("trace_pipe_raw", TRACE_MODE_READ, d_cpu,
+ 				tr, cpu, &tracing_buffers_fops);
+ 
+-	trace_create_cpu_file("stats", 0444, d_cpu,
++	trace_create_cpu_file("stats", TRACE_MODE_READ, d_cpu,
+ 				tr, cpu, &tracing_stats_fops);
+ 
+-	trace_create_cpu_file("buffer_size_kb", 0444, d_cpu,
++	trace_create_cpu_file("buffer_size_kb", TRACE_MODE_READ, d_cpu,
+ 				tr, cpu, &tracing_entries_fops);
+ 
+ #ifdef CONFIG_TRACER_SNAPSHOT
+-	trace_create_cpu_file("snapshot", 0644, d_cpu,
++	trace_create_cpu_file("snapshot", TRACE_MODE_WRITE, d_cpu,
+ 				tr, cpu, &snapshot_fops);
+ 
+-	trace_create_cpu_file("snapshot_raw", 0444, d_cpu,
++	trace_create_cpu_file("snapshot_raw", TRACE_MODE_READ, d_cpu,
+ 				tr, cpu, &snapshot_raw_fops);
+ #endif
+ }
+@@ -8800,8 +8801,8 @@ create_trace_option_file(struct trace_array *tr,
+ 	topt->opt = opt;
+ 	topt->tr = tr;
+ 
+-	topt->entry = trace_create_file(opt->name, 0644, t_options, topt,
+-				    &trace_options_fops);
++	topt->entry = trace_create_file(opt->name, TRACE_MODE_WRITE,
++					t_options, topt, &trace_options_fops);
+ 
+ }
+ 
+@@ -8876,7 +8877,7 @@ create_trace_option_core_file(struct trace_array *tr,
+ 	if (!t_options)
+ 		return NULL;
+ 
+-	return trace_create_file(option, 0644, t_options,
++	return trace_create_file(option, TRACE_MODE_WRITE, t_options,
+ 				 (void *)&tr->trace_flags_index[index],
+ 				 &trace_options_core_fops);
+ }
+@@ -9401,28 +9402,28 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ 	struct trace_event_file *file;
+ 	int cpu;
+ 
+-	trace_create_file("available_tracers", 0444, d_tracer,
++	trace_create_file("available_tracers", TRACE_MODE_READ, d_tracer,
+ 			tr, &show_traces_fops);
+ 
+-	trace_create_file("current_tracer", 0644, d_tracer,
++	trace_create_file("current_tracer", TRACE_MODE_WRITE, d_tracer,
+ 			tr, &set_tracer_fops);
+ 
+-	trace_create_file("tracing_cpumask", 0644, d_tracer,
++	trace_create_file("tracing_cpumask", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &tracing_cpumask_fops);
+ 
+-	trace_create_file("trace_options", 0644, d_tracer,
++	trace_create_file("trace_options", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &tracing_iter_fops);
+ 
+-	trace_create_file("trace", 0644, d_tracer,
++	trace_create_file("trace", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &tracing_fops);
+ 
+-	trace_create_file("trace_pipe", 0444, d_tracer,
++	trace_create_file("trace_pipe", TRACE_MODE_READ, d_tracer,
+ 			  tr, &tracing_pipe_fops);
+ 
+-	trace_create_file("buffer_size_kb", 0644, d_tracer,
++	trace_create_file("buffer_size_kb", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &tracing_entries_fops);
+ 
+-	trace_create_file("buffer_total_size_kb", 0444, d_tracer,
++	trace_create_file("buffer_total_size_kb", TRACE_MODE_READ, d_tracer,
+ 			  tr, &tracing_total_entries_fops);
+ 
+ 	trace_create_file("free_buffer", 0200, d_tracer,
+@@ -9433,25 +9434,25 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ 
+ 	file = __find_event_file(tr, "ftrace", "print");
+ 	if (file && file->dir)
+-		trace_create_file("trigger", 0644, file->dir, file,
+-				  &event_trigger_fops);
++		trace_create_file("trigger", TRACE_MODE_WRITE, file->dir,
++				  file, &event_trigger_fops);
+ 	tr->trace_marker_file = file;
+ 
+ 	trace_create_file("trace_marker_raw", 0220, d_tracer,
+ 			  tr, &tracing_mark_raw_fops);
+ 
+-	trace_create_file("trace_clock", 0644, d_tracer, tr,
++	trace_create_file("trace_clock", TRACE_MODE_WRITE, d_tracer, tr,
+ 			  &trace_clock_fops);
+ 
+-	trace_create_file("tracing_on", 0644, d_tracer,
++	trace_create_file("tracing_on", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &rb_simple_fops);
+ 
+-	trace_create_file("timestamp_mode", 0444, d_tracer, tr,
++	trace_create_file("timestamp_mode", TRACE_MODE_READ, d_tracer, tr,
+ 			  &trace_time_stamp_mode_fops);
+ 
+ 	tr->buffer_percent = 50;
+ 
+-	trace_create_file("buffer_percent", 0444, d_tracer,
++	trace_create_file("buffer_percent", TRACE_MODE_READ, d_tracer,
+ 			tr, &buffer_percent_fops);
+ 
+ 	create_trace_options_dir(tr);
+@@ -9462,11 +9463,11 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ 		MEM_FAIL(1, "Could not allocate function filter files");
+ 
+ #ifdef CONFIG_TRACER_SNAPSHOT
+-	trace_create_file("snapshot", 0644, d_tracer,
++	trace_create_file("snapshot", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &snapshot_fops);
+ #endif
+ 
+-	trace_create_file("error_log", 0644, d_tracer,
++	trace_create_file("error_log", TRACE_MODE_WRITE, d_tracer,
+ 			  tr, &tracing_err_log_fops);
+ 
+ 	for_each_tracing_cpu(cpu)
+@@ -9659,19 +9660,19 @@ static __init int tracer_init_tracefs(void)
+ 	init_tracer_tracefs(&global_trace, NULL);
+ 	ftrace_init_tracefs_toplevel(&global_trace, NULL);
+ 
+-	trace_create_file("tracing_thresh", 0644, NULL,
++	trace_create_file("tracing_thresh", TRACE_MODE_WRITE, NULL,
+ 			&global_trace, &tracing_thresh_fops);
+ 
+-	trace_create_file("README", 0444, NULL,
++	trace_create_file("README", TRACE_MODE_READ, NULL,
+ 			NULL, &tracing_readme_fops);
+ 
+-	trace_create_file("saved_cmdlines", 0444, NULL,
++	trace_create_file("saved_cmdlines", TRACE_MODE_READ, NULL,
+ 			NULL, &tracing_saved_cmdlines_fops);
+ 
+-	trace_create_file("saved_cmdlines_size", 0644, NULL,
++	trace_create_file("saved_cmdlines_size", TRACE_MODE_WRITE, NULL,
+ 			  NULL, &tracing_saved_cmdlines_size_fops);
+ 
+-	trace_create_file("saved_tgids", 0444, NULL,
++	trace_create_file("saved_tgids", TRACE_MODE_READ, NULL,
+ 			NULL, &tracing_saved_tgids_fops);
+ 
+ 	trace_eval_init();
+@@ -9683,7 +9684,7 @@ static __init int tracer_init_tracefs(void)
+ #endif
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE
+-	trace_create_file("dyn_ftrace_total_info", 0444, NULL,
++	trace_create_file("dyn_ftrace_total_info", TRACE_MODE_READ, NULL,
+ 			NULL, &tracing_dyn_info_fops);
+ #endif
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 4a0e693000c6c..ed28a9aac98f3 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -27,6 +27,9 @@
+ #include <asm/syscall.h>	/* some archs define it here */
+ #endif
+ 
++#define TRACE_MODE_WRITE	0640
++#define TRACE_MODE_READ		0440
++
+ enum trace_type {
+ 	__TRACE_FIRST_TYPE = 0,
+ 
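These two constants back all of the 0644→TRACE_MODE_WRITE and 0444→TRACE_MODE_READ substitutions in this patch; the practical change is that the "other" permission bits drop to zero, so tracefs files stop being world-accessible. The octal arithmetic, spelled out:

#include <stdio.h>

#define TRACE_MODE_WRITE 0640   /* rw-r----- : owner rw, group r */
#define TRACE_MODE_READ  0440   /* r--r----- : owner r,  group r */

static void show(const char *name, unsigned int mode)
{
	/* the low three bits are the "other" (world) permissions */
	printf("%-18s %04o  other=%o\n", name, mode, mode & 07);
}

int main(void)
{
	show("old write mode", 0644);                /* other=4: world-readable */
	show("TRACE_MODE_WRITE", TRACE_MODE_WRITE);  /* other=0 */
	show("old read mode", 0444);
	show("TRACE_MODE_READ", TRACE_MODE_READ);
	return 0;
}
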
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index e57cc0870892c..d804206d1f052 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -224,7 +224,7 @@ static __init int init_dynamic_event(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	entry = tracefs_create_file("dynamic_events", 0644, NULL,
++	entry = tracefs_create_file("dynamic_events", TRACE_MODE_WRITE, NULL,
+ 				    NULL, &dynamic_events_ops);
+ 
+ 	/* Event list interface */
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 03be4435d103f..50cd5a1a7ab4a 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -441,13 +441,13 @@ perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
+ 	if (!rcu_is_watching())
+ 		return;
+ 
+-	if ((unsigned long)ops->private != smp_processor_id())
+-		return;
+-
+ 	bit = ftrace_test_recursion_trylock(ip, parent_ip);
+ 	if (bit < 0)
+ 		return;
+ 
++	if ((unsigned long)ops->private != smp_processor_id())
++		goto out;
++
+ 	event = container_of(ops, struct perf_event, ftrace_ops);
+ 
+ 	/*
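The reorder above matters because ftrace_test_recursion_trylock() also disables preemption, and smp_processor_id() is only stable once the task can no longer migrate; in turn, every exit taken after a successful trylock must release it. A sketch of that balanced goto-out shape (plain C, illustrative guard functions):

#include <stdio.h>

static int guard_trylock(void) { return 0; }  /* "pins" the CPU */
static void guard_unlock(void) { }
static int current_cpu(void) { return 0; }    /* stable only under guard */

static void handler(int want_cpu)
{
	if (guard_trylock() < 0)
		return;            /* nothing held, plain return is fine */

	if (want_cpu != current_cpu())
		goto out;          /* guard held: must leave via label */

	printf("event on cpu %d\n", want_cpu);
out:
	guard_unlock();            /* every post-lock path lands here */
}

int main(void)
{
	handler(0);   /* prints */
	handler(1);   /* filtered, but still unlocks */
	return 0;
}
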
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 80e96989770ed..478f328c30d15 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2311,7 +2311,8 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
+ 	/* the ftrace system is special, do not create enable or filter files */
+ 	if (strcmp(name, "ftrace") != 0) {
+ 
+-		entry = tracefs_create_file("filter", 0644, dir->entry, dir,
++		entry = tracefs_create_file("filter", TRACE_MODE_WRITE,
++					    dir->entry, dir,
+ 					    &ftrace_subsystem_filter_fops);
+ 		if (!entry) {
+ 			kfree(system->filter);
+@@ -2319,7 +2320,7 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
+ 			pr_warn("Could not create tracefs '%s/filter' entry\n", name);
+ 		}
+ 
+-		trace_create_file("enable", 0644, dir->entry, dir,
++		trace_create_file("enable", TRACE_MODE_WRITE, dir->entry, dir,
+ 				  &ftrace_system_enable_fops);
+ 	}
+ 
+@@ -2401,12 +2402,12 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
+ 	}
+ 
+ 	if (call->class->reg && !(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE))
+-		trace_create_file("enable", 0644, file->dir, file,
++		trace_create_file("enable", TRACE_MODE_WRITE, file->dir, file,
+ 				  &ftrace_enable_fops);
+ 
+ #ifdef CONFIG_PERF_EVENTS
+ 	if (call->event.type && call->class->reg)
+-		trace_create_file("id", 0444, file->dir,
++		trace_create_file("id", TRACE_MODE_READ, file->dir,
+ 				  (void *)(long)call->event.type,
+ 				  &ftrace_event_id_fops);
+ #endif
+@@ -2422,22 +2423,22 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
+ 	 * triggers or filters.
+ 	 */
+ 	if (!(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)) {
+-		trace_create_file("filter", 0644, file->dir, file,
+-				  &ftrace_event_filter_fops);
++		trace_create_file("filter", TRACE_MODE_WRITE, file->dir,
++				  file, &ftrace_event_filter_fops);
+ 
+-		trace_create_file("trigger", 0644, file->dir, file,
+-				  &event_trigger_fops);
++		trace_create_file("trigger", TRACE_MODE_WRITE, file->dir,
++				  file, &event_trigger_fops);
+ 	}
+ 
+ #ifdef CONFIG_HIST_TRIGGERS
+-	trace_create_file("hist", 0444, file->dir, file,
++	trace_create_file("hist", TRACE_MODE_READ, file->dir, file,
+ 			  &event_hist_fops);
+ #endif
+ #ifdef CONFIG_HIST_TRIGGERS_DEBUG
+-	trace_create_file("hist_debug", 0444, file->dir, file,
++	trace_create_file("hist_debug", TRACE_MODE_READ, file->dir, file,
+ 			  &event_hist_debug_fops);
+ #endif
+-	trace_create_file("format", 0444, file->dir, call,
++	trace_create_file("format", TRACE_MODE_READ, file->dir, call,
+ 			  &ftrace_event_format_fops);
+ 
+ #ifdef CONFIG_TRACE_EVENT_INJECT
+@@ -3426,7 +3427,7 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
+ 	struct dentry *d_events;
+ 	struct dentry *entry;
+ 
+-	entry = tracefs_create_file("set_event", 0644, parent,
++	entry = tracefs_create_file("set_event", TRACE_MODE_WRITE, parent,
+ 				    tr, &ftrace_set_event_fops);
+ 	if (!entry) {
+ 		pr_warn("Could not create tracefs 'set_event' entry\n");
+@@ -3439,7 +3440,7 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
+ 		return -ENOMEM;
+ 	}
+ 
+-	entry = trace_create_file("enable", 0644, d_events,
++	entry = trace_create_file("enable", TRACE_MODE_WRITE, d_events,
+ 				  tr, &ftrace_tr_enable_fops);
+ 	if (!entry) {
+ 		pr_warn("Could not create tracefs 'enable' entry\n");
+@@ -3448,24 +3449,25 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
+ 
+ 	/* These are not as crucial, just warn if they are not created */
+ 
+-	entry = tracefs_create_file("set_event_pid", 0644, parent,
++	entry = tracefs_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
+ 				    tr, &ftrace_set_event_pid_fops);
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'set_event_pid' entry\n");
+ 
+-	entry = tracefs_create_file("set_event_notrace_pid", 0644, parent,
+-				    tr, &ftrace_set_event_notrace_pid_fops);
++	entry = tracefs_create_file("set_event_notrace_pid",
++				    TRACE_MODE_WRITE, parent, tr,
++				    &ftrace_set_event_notrace_pid_fops);
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'set_event_notrace_pid' entry\n");
+ 
+ 	/* ring buffer internal formats */
+-	entry = trace_create_file("header_page", 0444, d_events,
++	entry = trace_create_file("header_page", TRACE_MODE_READ, d_events,
+ 				  ring_buffer_print_page_header,
+ 				  &ftrace_show_header_fops);
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'header_page' entry\n");
+ 
+-	entry = trace_create_file("header_event", 0444, d_events,
++	entry = trace_create_file("header_event", TRACE_MODE_READ, d_events,
+ 				  ring_buffer_print_entry_header,
+ 				  &ftrace_show_header_fops);
+ 	if (!entry)
+@@ -3682,8 +3684,8 @@ __init int event_trace_init(void)
+ 	if (!tr)
+ 		return -ENODEV;
+ 
+-	entry = tracefs_create_file("available_events", 0444, NULL,
+-				    tr, &ftrace_avail_fops);
++	entry = tracefs_create_file("available_events", TRACE_MODE_READ,
++				    NULL, tr, &ftrace_avail_fops);
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'available_events' entry\n");
+ 
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 9315fc03e3030..4b633095dc907 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -2222,8 +2222,8 @@ static __init int trace_events_synth_init(void)
+ 	if (err)
+ 		goto err;
+ 
+-	entry = tracefs_create_file("synthetic_events", 0644, NULL,
+-				    NULL, &synth_events_fops);
++	entry = tracefs_create_file("synthetic_events", TRACE_MODE_WRITE,
++				    NULL, NULL, &synth_events_fops);
+ 	if (!entry) {
+ 		err = -ENODEV;
+ 		goto err;
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 0de6837722da5..6b5ff3ba4251f 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1340,7 +1340,7 @@ static __init int init_graph_tracefs(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	trace_create_file("max_graph_depth", 0644, NULL,
++	trace_create_file("max_graph_depth", TRACE_MODE_WRITE, NULL,
+ 			  NULL, &graph_depth_fops);
+ 
+ 	return 0;
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index 14f46aae1981f..31bd7ec5e6026 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -782,21 +782,21 @@ static int init_tracefs(void)
+ 	if (!top_dir)
+ 		return -ENOMEM;
+ 
+-	hwlat_sample_window = tracefs_create_file("window", 0640,
++	hwlat_sample_window = tracefs_create_file("window", TRACE_MODE_WRITE,
+ 						  top_dir,
+ 						  &hwlat_window,
+ 						  &trace_min_max_fops);
+ 	if (!hwlat_sample_window)
+ 		goto err;
+ 
+-	hwlat_sample_width = tracefs_create_file("width", 0644,
++	hwlat_sample_width = tracefs_create_file("width", TRACE_MODE_WRITE,
+ 						 top_dir,
+ 						 &hwlat_width,
+ 						 &trace_min_max_fops);
+ 	if (!hwlat_sample_width)
+ 		goto err;
+ 
+-	hwlat_thread_mode = trace_create_file("mode", 0644,
++	hwlat_thread_mode = trace_create_file("mode", TRACE_MODE_WRITE,
+ 					      top_dir,
+ 					      NULL,
+ 					      &thread_mode_fops);
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 032191977e34c..cf3269d8f8ca5 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1934,16 +1934,16 @@ static __init int init_kprobe_trace(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	entry = tracefs_create_file("kprobe_events", 0644, NULL,
+-				    NULL, &kprobe_events_ops);
++	entry = tracefs_create_file("kprobe_events", TRACE_MODE_WRITE,
++				    NULL, NULL, &kprobe_events_ops);
+ 
+ 	/* Event list interface */
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'kprobe_events' entry\n");
+ 
+ 	/* Profile interface */
+-	entry = tracefs_create_file("kprobe_profile", 0444, NULL,
+-				    NULL, &kprobe_profile_ops);
++	entry = tracefs_create_file("kprobe_profile", TRACE_MODE_READ,
++				    NULL, NULL, &kprobe_profile_ops);
+ 
+ 	if (!entry)
+ 		pr_warn("Could not create tracefs 'kprobe_profile' entry\n");
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 7b3c754821e55..c7dad5237c0c2 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1856,38 +1856,38 @@ static int init_tracefs(void)
+ 	if (!top_dir)
+ 		return 0;
+ 
+-	tmp = tracefs_create_file("period_us", 0640, top_dir,
++	tmp = tracefs_create_file("period_us", TRACE_MODE_WRITE, top_dir,
+ 				  &osnoise_period, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+ 
+-	tmp = tracefs_create_file("runtime_us", 0644, top_dir,
++	tmp = tracefs_create_file("runtime_us", TRACE_MODE_WRITE, top_dir,
+ 				  &osnoise_runtime, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+ 
+-	tmp = tracefs_create_file("stop_tracing_us", 0640, top_dir,
++	tmp = tracefs_create_file("stop_tracing_us", TRACE_MODE_WRITE, top_dir,
+ 				  &osnoise_stop_tracing_in, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+ 
+-	tmp = tracefs_create_file("stop_tracing_total_us", 0640, top_dir,
++	tmp = tracefs_create_file("stop_tracing_total_us", TRACE_MODE_WRITE, top_dir,
+ 				  &osnoise_stop_tracing_total, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+ 
+-	tmp = trace_create_file("cpus", 0644, top_dir, NULL, &cpus_fops);
++	tmp = trace_create_file("cpus", TRACE_MODE_WRITE, top_dir, NULL, &cpus_fops);
+ 	if (!tmp)
+ 		goto err;
+ #ifdef CONFIG_TIMERLAT_TRACER
+ #ifdef CONFIG_STACKTRACE
+-	tmp = tracefs_create_file("print_stack", 0640, top_dir,
++	tmp = tracefs_create_file("print_stack", TRACE_MODE_WRITE, top_dir,
+ 				  &osnoise_print_stack, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+ #endif
+ 
+-	tmp = tracefs_create_file("timerlat_period_us", 0640, top_dir,
++	tmp = tracefs_create_file("timerlat_period_us", TRACE_MODE_WRITE, top_dir,
+ 				  &timerlat_period, &trace_min_max_fops);
+ 	if (!tmp)
+ 		goto err;
+diff --git a/kernel/trace/trace_printk.c b/kernel/trace/trace_printk.c
+index 4b320fe7df704..29f6e95439b67 100644
+--- a/kernel/trace/trace_printk.c
++++ b/kernel/trace/trace_printk.c
+@@ -384,7 +384,7 @@ static __init int init_trace_printk_function_export(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	trace_create_file("printk_formats", 0444, NULL,
++	trace_create_file("printk_formats", TRACE_MODE_READ, NULL,
+ 				    NULL, &ftrace_formats_fops);
+ 
+ 	return 0;
+diff --git a/kernel/trace/trace_recursion_record.c b/kernel/trace/trace_recursion_record.c
+index b2edac1fe156e..4d4b78c8ca257 100644
+--- a/kernel/trace/trace_recursion_record.c
++++ b/kernel/trace/trace_recursion_record.c
+@@ -226,8 +226,8 @@ __init static int create_recursed_functions(void)
+ {
+ 	struct dentry *dentry;
+ 
+-	dentry = trace_create_file("recursed_functions", 0644, NULL, NULL,
+-				   &recursed_functions_fops);
++	dentry = trace_create_file("recursed_functions", TRACE_MODE_WRITE,
++				   NULL, NULL, &recursed_functions_fops);
+ 	if (!dentry)
+ 		pr_warn("WARNING: Failed to create recursed_functions\n");
+ 	return 0;
+diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
+index 63c2850420516..5a48dba912eae 100644
+--- a/kernel/trace/trace_stack.c
++++ b/kernel/trace/trace_stack.c
+@@ -559,14 +559,14 @@ static __init int stack_trace_init(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	trace_create_file("stack_max_size", 0644, NULL,
++	trace_create_file("stack_max_size", TRACE_MODE_WRITE, NULL,
+ 			&stack_trace_max_size, &stack_max_size_fops);
+ 
+-	trace_create_file("stack_trace", 0444, NULL,
++	trace_create_file("stack_trace", TRACE_MODE_READ, NULL,
+ 			NULL, &stack_trace_fops);
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE
+-	trace_create_file("stack_trace_filter", 0644, NULL,
++	trace_create_file("stack_trace_filter", TRACE_MODE_WRITE, NULL,
+ 			  &trace_ops, &stack_trace_filter_fops);
+ #endif
+ 
+diff --git a/kernel/trace/trace_stat.c b/kernel/trace/trace_stat.c
+index 8d141c3825a94..bb247beec4470 100644
+--- a/kernel/trace/trace_stat.c
++++ b/kernel/trace/trace_stat.c
+@@ -297,9 +297,9 @@ static int init_stat_file(struct stat_session *session)
+ 	if (!stat_dir && (ret = tracing_stat_init()))
+ 		return ret;
+ 
+-	session->file = tracefs_create_file(session->ts->name, 0644,
+-					    stat_dir,
+-					    session, &tracing_stat_fops);
++	session->file = tracefs_create_file(session->ts->name, TRACE_MODE_WRITE,
++					    stat_dir, session,
++					    &tracing_stat_fops);
+ 	if (!session->file)
+ 		return -ENOMEM;
+ 	return 0;
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 957244ee07c8d..54de2eb384684 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1657,10 +1657,10 @@ static __init int init_uprobe_trace(void)
+ 	if (ret)
+ 		return 0;
+ 
+-	trace_create_file("uprobe_events", 0644, NULL,
++	trace_create_file("uprobe_events", TRACE_MODE_WRITE, NULL,
+ 				    NULL, &uprobe_events_ops);
+ 	/* Profile interface */
+-	trace_create_file("uprobe_profile", 0444, NULL,
++	trace_create_file("uprobe_profile", TRACE_MODE_READ, NULL,
+ 				    NULL, &uprobe_profile_ops);
+ 	return 0;
+ }
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index d6bddb157ef20..39bb56d2dcbef 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -834,29 +834,35 @@ int tracing_map_init(struct tracing_map *map)
+ 	return err;
+ }
+ 
+-static int cmp_entries_dup(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_dup(const void *A, const void *B)
+ {
++	const struct tracing_map_sort_entry *a, *b;
+ 	int ret = 0;
+ 
+-	if (memcmp((*a)->key, (*b)->key, (*a)->elt->map->key_size))
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	if (memcmp(a->key, b->key, a->elt->map->key_size))
+ 		ret = 1;
+ 
+ 	return ret;
+ }
+ 
+-static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_sum(const void *A, const void *B)
+ {
+ 	const struct tracing_map_elt *elt_a, *elt_b;
++	const struct tracing_map_sort_entry *a, *b;
+ 	struct tracing_map_sort_key *sort_key;
+ 	struct tracing_map_field *field;
+ 	tracing_map_cmp_fn_t cmp_fn;
+ 	void *val_a, *val_b;
+ 	int ret = 0;
+ 
+-	elt_a = (*a)->elt;
+-	elt_b = (*b)->elt;
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	elt_a = a->elt;
++	elt_b = b->elt;
+ 
+ 	sort_key = &elt_a->map->sort_key;
+ 
+@@ -873,18 +879,21 @@ static int cmp_entries_sum(const struct tracing_map_sort_entry **a,
+ 	return ret;
+ }
+ 
+-static int cmp_entries_key(const struct tracing_map_sort_entry **a,
+-			   const struct tracing_map_sort_entry **b)
++static int cmp_entries_key(const void *A, const void *B)
+ {
+ 	const struct tracing_map_elt *elt_a, *elt_b;
++	const struct tracing_map_sort_entry *a, *b;
+ 	struct tracing_map_sort_key *sort_key;
+ 	struct tracing_map_field *field;
+ 	tracing_map_cmp_fn_t cmp_fn;
+ 	void *val_a, *val_b;
+ 	int ret = 0;
+ 
+-	elt_a = (*a)->elt;
+-	elt_b = (*b)->elt;
++	a = *(const struct tracing_map_sort_entry **)A;
++	b = *(const struct tracing_map_sort_entry **)B;
++
++	elt_a = a->elt;
++	elt_b = b->elt;
+ 
+ 	sort_key = &elt_a->map->sort_key;
+ 
+@@ -989,10 +998,8 @@ static void sort_secondary(struct tracing_map *map,
+ 			   struct tracing_map_sort_key *primary_key,
+ 			   struct tracing_map_sort_key *secondary_key)
+ {
+-	int (*primary_fn)(const struct tracing_map_sort_entry **,
+-			  const struct tracing_map_sort_entry **);
+-	int (*secondary_fn)(const struct tracing_map_sort_entry **,
+-			    const struct tracing_map_sort_entry **);
++	int (*primary_fn)(const void *, const void *);
++	int (*secondary_fn)(const void *, const void *);
+ 	unsigned i, start = 0, n_sub = 1;
+ 
+ 	if (is_key(map, primary_key->field_idx))
+@@ -1061,8 +1068,7 @@ int tracing_map_sort_entries(struct tracing_map *map,
+ 			     unsigned int n_sort_keys,
+ 			     struct tracing_map_sort_entry ***sort_entries)
+ {
+-	int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
+-			      const struct tracing_map_sort_entry **);
++	int (*cmp_entries_fn)(const void *, const void *);
+ 	struct tracing_map_sort_entry *sort_entry, **entries;
+ 	int i, n_entries, ret;
+ 
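The comparator rewrites above exist because sort() expects int (*)(const void *, const void *); calling through a function pointer cast to a different prototype is undefined behavior and, concretely, trips kernels built with Control Flow Integrity. The correct shape, demonstrated with qsort() over an array of entry pointers:

#include <stdio.h>
#include <stdlib.h>

struct entry { int key; };

static int cmp_entries(const void *A, const void *B)
{
	/* The array elements are struct entry *, so A and B are
	 * pointers to pointers: cast and dereference inside. */
	const struct entry *a = *(const struct entry * const *)A;
	const struct entry *b = *(const struct entry * const *)B;

	return (a->key > b->key) - (a->key < b->key);
}

int main(void)
{
	struct entry e1 = { 3 }, e2 = { 1 }, e3 = { 2 };
	struct entry *arr[] = { &e1, &e2, &e3 };

	qsort(arr, 3, sizeof(arr[0]), cmp_entries);
	for (int i = 0; i < 3; i++)
		printf("%d ", arr[i]->key);   /* 1 2 3 */
	printf("\n");
	return 0;
}
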
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 542c2d03dab65..ccad28bf21e26 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5340,9 +5340,6 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 	int ret = -EINVAL;
+ 	cpumask_var_t saved_cpumask;
+ 
+-	if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL))
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * Not excluding isolated cpus on purpose.
+ 	 * If the user wishes to include them, we allow that.
+@@ -5350,6 +5347,15 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 	cpumask_and(cpumask, cpumask, cpu_possible_mask);
+ 	if (!cpumask_empty(cpumask)) {
+ 		apply_wqattrs_lock();
++		if (cpumask_equal(cpumask, wq_unbound_cpumask)) {
++			ret = 0;
++			goto out_unlock;
++		}
++
++		if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL)) {
++			ret = -ENOMEM;
++			goto out_unlock;
++		}
+ 
+ 		/* save the old wq_unbound_cpumask. */
+ 		cpumask_copy(saved_cpumask, wq_unbound_cpumask);
+@@ -5362,10 +5368,11 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ 		if (ret < 0)
+ 			cpumask_copy(wq_unbound_cpumask, saved_cpumask);
+ 
++		free_cpumask_var(saved_cpumask);
++out_unlock:
+ 		apply_wqattrs_unlock();
+ 	}
+ 
+-	free_cpumask_var(saved_cpumask);
+ 	return ret;
+ }
+ 
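The workqueue change above does two things: returns early when the requested mask already matches, and allocates saved_cpumask only once it is actually needed, freeing it on the same path. The allocate-late, single-unlock-label shape in miniature (user-space C, illustrative error values):

#include <stdio.h>
#include <stdlib.h>

static unsigned long current_mask = 0x3;

static int set_mask(unsigned long mask)
{
	unsigned long *saved;
	int ret;

	/* lock();  -- elided in this sketch */
	if (mask == current_mask) {
		ret = 0;                  /* no-op request: done */
		goto out_unlock;
	}

	saved = malloc(sizeof(*saved));   /* allocated only when needed */
	if (!saved) {
		ret = -12;                /* -ENOMEM, illustrative */
		goto out_unlock;
	}

	*saved = current_mask;            /* keep the old mask for rollback */
	current_mask = mask;              /* "apply" */
	ret = 0;

	free(saved);
out_unlock:
	/* unlock(); */
	return ret;
}

int main(void)
{
	printf("%d\n", set_mask(0x3));    /* 0: early exit, no allocation */
	printf("%d\n", set_mask(0x1));    /* 0: applied */
	return 0;
}
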
+diff --git a/lib/decompress_unxz.c b/lib/decompress_unxz.c
+index a2f38e23004aa..f7a3dc13316a3 100644
+--- a/lib/decompress_unxz.c
++++ b/lib/decompress_unxz.c
+@@ -167,7 +167,7 @@
+  * memeq and memzero are not used much and any remotely sane implementation
+  * is fast enough. memcpy/memmove speed matters in multi-call mode, but
+  * the kernel image is decompressed in single-call mode, in which only
+- * memcpy speed can matter and only if there is a lot of uncompressible data
++ * memmove speed can matter and only if there is a lot of uncompressible data
+  * (LZMA2 stores uncompressible chunks in uncompressed form). Thus, the
+  * functions below should just be kept small; it's probably not worth
+  * optimizing for speed.
+diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
+index cb5abb42c16a2..84c16309cc637 100644
+--- a/lib/dynamic_debug.c
++++ b/lib/dynamic_debug.c
+@@ -761,6 +761,18 @@ static __init int ddebug_setup_query(char *str)
+ 
+ __setup("ddebug_query=", ddebug_setup_query);
+ 
++/*
++ * Install a noop handler to make dyndbg look like a normal kernel cli param.
++ * This avoids warnings about dyndbg being an unknown cli param when supplied
++ * by a user.
++ */
++static __init int dyndbg_setup(char *str)
++{
++	return 1;
++}
++
++__setup("dyndbg=", dyndbg_setup);
++
+ /*
+  * File_ops->write method for <debugfs>/dynamic_debug/control.  Gathers the
+  * command text from userspace, parses and executes it.
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index e23123ae3a137..25dfc48536d72 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1484,7 +1484,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i,
+ 		res = get_user_pages_fast(addr, n,
+ 				iov_iter_rw(i) != WRITE ?  FOLL_WRITE : 0,
+ 				pages);
+-		if (unlikely(res < 0))
++		if (unlikely(res <= 0))
+ 			return res;
+ 		return (res == n ? len : res * PAGE_SIZE) - *start;
+ 	}
+@@ -1608,8 +1608,9 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
+ 			return -ENOMEM;
+ 		res = get_user_pages_fast(addr, n,
+ 				iov_iter_rw(i) != WRITE ?  FOLL_WRITE : 0, p);
+-		if (unlikely(res < 0)) {
++		if (unlikely(res <= 0)) {
+ 			kvfree(p);
++			*pages = NULL;
+ 			return res;
+ 		}
+ 		*pages = p;
+diff --git a/lib/xz/xz_dec_lzma2.c b/lib/xz/xz_dec_lzma2.c
+index 7a6781e3f47b6..d548cf0e59fe6 100644
+--- a/lib/xz/xz_dec_lzma2.c
++++ b/lib/xz/xz_dec_lzma2.c
+@@ -387,7 +387,14 @@ static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
+ 
+ 		*left -= copy_size;
+ 
+-		memcpy(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
++		/*
++		 * If doing in-place decompression in single-call mode and the
++		 * uncompressed size of the file is larger than the caller
++		 * thought (i.e. it is invalid input!), the buffers below may
++		 * overlap and cause undefined behavior with memcpy().
++		 * With valid inputs memcpy() would be fine here.
++		 */
++		memmove(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
+ 		dict->pos += copy_size;
+ 
+ 		if (dict->full < dict->pos)
+@@ -397,7 +404,11 @@ static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
+ 			if (dict->pos == dict->end)
+ 				dict->pos = 0;
+ 
+-			memcpy(b->out + b->out_pos, b->in + b->in_pos,
++			/*
++			 * Like above but for multi-call mode: use memmove()
++			 * to avoid undefined behavior with invalid input.
++			 */
++			memmove(b->out + b->out_pos, b->in + b->in_pos,
+ 					copy_size);
+ 		}
+ 
+@@ -421,6 +432,12 @@ static uint32_t dict_flush(struct dictionary *dict, struct xz_buf *b)
+ 		if (dict->pos == dict->end)
+ 			dict->pos = 0;
+ 
++		/*
++		 * These buffers cannot overlap even if doing in-place
++		 * decompression because in multi-call mode dict->buf
++		 * has been allocated by us in this file; it's not
++		 * provided by the caller like in single-call mode.
++		 */
+ 		memcpy(b->out + b->out_pos, dict->buf + dict->start,
+ 				copy_size);
+ 	}
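Both replaced calls above switch to memmove() because crafted input can make the source and destination windows overlap during in-place decompression, and memcpy() over overlapping regions is undefined behavior; memmove() is specified for exactly that case. A stand-alone illustration:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[] = "abcdef";

	/* Shift "abcde" one byte right inside the same buffer.
	 * Source (buf) and destination (buf + 1) overlap, so
	 * memmove() is the only correct choice; memcpy() here
	 * would be undefined behavior. */
	memmove(buf + 1, buf, 5);
	buf[0] = '_';
	printf("%s\n", buf);   /* "_abcde" */
	return 0;
}
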
+diff --git a/lib/xz/xz_dec_stream.c b/lib/xz/xz_dec_stream.c
+index fea86deaaa01d..683570b93a8c4 100644
+--- a/lib/xz/xz_dec_stream.c
++++ b/lib/xz/xz_dec_stream.c
+@@ -402,12 +402,12 @@ static enum xz_ret dec_stream_header(struct xz_dec *s)
+ 	 * we will accept other check types too, but then the check won't
+ 	 * be verified and a warning (XZ_UNSUPPORTED_CHECK) will be given.
+ 	 */
++	if (s->temp.buf[HEADER_MAGIC_SIZE + 1] > XZ_CHECK_MAX)
++		return XZ_OPTIONS_ERROR;
++
+ 	s->check_type = s->temp.buf[HEADER_MAGIC_SIZE + 1];
+ 
+ #ifdef XZ_DEC_ANY_CHECK
+-	if (s->check_type > XZ_CHECK_MAX)
+-		return XZ_OPTIONS_ERROR;
+-
+ 	if (s->check_type > XZ_CHECK_CRC32)
+ 		return XZ_UNSUPPORTED_CHECK;
+ #else
+diff --git a/mm/filemap.c b/mm/filemap.c
+index d1458ecf2f51e..34de0b14aaa90 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2038,7 +2038,6 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
+ 		if (!xa_is_value(page)) {
+ 			if (page->index < start)
+ 				goto put;
+-			VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
+ 			if (page->index + thp_nr_pages(page) - 1 > end)
+ 				goto put;
+ 			if (!trylock_page(page))
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 702a81dfe72dc..768ba69a82ecf 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -234,7 +234,7 @@ enum res_type {
+ 	     iter != NULL;				\
+ 	     iter = mem_cgroup_iter(NULL, iter, NULL))
+ 
+-static inline bool should_force_charge(void)
++static inline bool task_is_dying(void)
+ {
+ 	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
+ 		(current->flags & PF_EXITING);
+@@ -1607,7 +1607,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	 * A few threads which were not waiting at mutex_lock_killable() can
+ 	 * fail to bail out. Therefore, check again after holding oom_lock.
+ 	 */
+-	ret = should_force_charge() || out_of_memory(&oc);
++	ret = task_is_dying() || out_of_memory(&oc);
+ 
+ unlock:
+ 	mutex_unlock(&oom_lock);
+@@ -2588,6 +2588,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	struct page_counter *counter;
+ 	enum oom_status oom_status;
+ 	unsigned long nr_reclaimed;
++	bool passed_oom = false;
+ 	bool may_swap = true;
+ 	bool drained = false;
+ 	unsigned long pflags;
+@@ -2622,15 +2623,6 @@ retry:
+ 	if (gfp_mask & __GFP_ATOMIC)
+ 		goto force;
+ 
+-	/*
+-	 * Unlike in global OOM situations, memcg is not in a physical
+-	 * memory shortage.  Allow dying and OOM-killed tasks to
+-	 * bypass the last charges so that they can exit quickly and
+-	 * free their memory.
+-	 */
+-	if (unlikely(should_force_charge()))
+-		goto force;
+-
+ 	/*
+ 	 * Prevent unbounded recursion when reclaim operations need to
+ 	 * allocate memory. This might exceed the limits temporarily,
+@@ -2688,8 +2680,9 @@ retry:
+ 	if (gfp_mask & __GFP_RETRY_MAYFAIL)
+ 		goto nomem;
+ 
+-	if (fatal_signal_pending(current))
+-		goto force;
++	/* Avoid endless loop for tasks bypassed by the oom killer */
++	if (passed_oom && task_is_dying())
++		goto nomem;
+ 
+ 	/*
+ 	 * keep retrying as long as the memcg oom killer is able to make
+@@ -2698,14 +2691,10 @@ retry:
+ 	 */
+ 	oom_status = mem_cgroup_oom(mem_over_limit, gfp_mask,
+ 		       get_order(nr_pages * PAGE_SIZE));
+-	switch (oom_status) {
+-	case OOM_SUCCESS:
++	if (oom_status == OOM_SUCCESS) {
++		passed_oom = true;
+ 		nr_retries = MAX_RECLAIM_RETRIES;
+ 		goto retry;
+-	case OOM_FAILED:
+-		goto force;
+-	default:
+-		goto nomem;
+ 	}
+ nomem:
+ 	if (!(gfp_mask & __GFP_NOFAIL))
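After the rewrite above, the charge path no longer lets dying tasks bypass the limit up front; it records that the OOM killer already ran (passed_oom) and, if the task is itself dying by then, fails with -ENOMEM instead of retrying forever. The loop shape reduced to plain C (all predicates illustrative):

#include <stdio.h>
#include <stdbool.h>

static bool charge_fits(void)      { return false; } /* still over limit */
static bool task_is_dying(void)    { return true; }  /* e.g. OOM victim */
static bool oom_kill_success(void) { return true; }

static int try_charge(void)
{
	bool passed_oom = false;
	int nr_retries = 5;

	while (nr_retries--) {
		if (charge_fits())
			return 0;

		/* Avoid an endless loop for tasks bypassed by the
		 * OOM killer: one trip through OOM is enough. */
		if (passed_oom && task_is_dying())
			return -12;            /* -ENOMEM */

		if (oom_kill_success()) {
			passed_oom = true;
			continue;              /* retry the charge once */
		}
		break;
	}
	return -12;
}

int main(void)
{
	printf("%d\n", try_charge());          /* -12, no livelock */
	return 0;
}
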
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index c729a4c4a1ace..daf05007829f3 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -1119,25 +1119,22 @@ bool out_of_memory(struct oom_control *oc)
+ }
+ 
+ /*
+- * The pagefault handler calls here because it is out of memory, so kill a
+- * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
+- * killing is already in progress so do nothing.
++ * The pagefault handler calls here because some allocation has failed. We have
++ * to take care of the memcg OOM here because this is the only safe context
++ * without any locks held, but we let the OOM killer triggered from the
++ * allocation context care about the global OOM.
+  */
+ void pagefault_out_of_memory(void)
+ {
+-	struct oom_control oc = {
+-		.zonelist = NULL,
+-		.nodemask = NULL,
+-		.memcg = NULL,
+-		.gfp_mask = 0,
+-		.order = 0,
+-	};
++	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
++				      DEFAULT_RATELIMIT_BURST);
+ 
+ 	if (mem_cgroup_oom_synchronize(true))
+ 		return;
+ 
+-	if (!mutex_trylock(&oom_lock))
++	if (fatal_signal_pending(current))
+ 		return;
+-	out_of_memory(&oc);
+-	mutex_unlock(&oom_lock);
++
++	if (__ratelimit(&pfoom_rs))
++		pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
+ }
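The rewritten pagefault_out_of_memory() above no longer invokes the global OOM killer itself; it only logs, and the log is ratelimited so a storm of leaked VM_FAULT_OOMs cannot flood the console. A user-space sketch of the same interval/burst ratelimit idea (illustrative numbers, not the kernel's DEFINE_RATELIMIT_STATE()):

#include <stdio.h>
#include <time.h>

/* Allow up to `burst` messages per `interval` seconds, drop the rest. */
struct ratelimit { time_t begin; int interval; int burst; int printed; };

static int ratelimit_ok(struct ratelimit *rs)
{
	time_t now = time(NULL);

	if (now - rs->begin >= rs->interval) {  /* new window */
		rs->begin = now;
		rs->printed = 0;
	}
	return rs->printed++ < rs->burst;
}

int main(void)
{
	struct ratelimit rs = { 0, 5, 2, 0 };   /* 2 messages per 5 s */

	for (int i = 0; i < 10; i++)
		if (ratelimit_ok(&rs))
			printf("warning %d\n", i); /* only 0 and 1 print */
	return 0;
}
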
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 68e8831068f4b..b897ce3b399a1 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1830,10 +1830,11 @@ static inline void zs_pool_dec_isolated(struct zs_pool *pool)
+ 	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
+ 	atomic_long_dec(&pool->isolated_pages);
+ 	/*
+-	 * There's no possibility of racing, since wait_for_isolated_drain()
+-	 * checks the isolated count under &class->lock after enqueuing
+-	 * on migration_wait.
++	 * Checking pool->destroying must happen after atomic_long_dec()
++	 * for pool->isolated_pages above. Paired with the smp_mb() in
++	 * zs_unregister_migration().
+ 	 */
++	smp_mb__after_atomic();
+ 	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
+ 		wake_up_all(&pool->migration_wait);
+ }
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index 4cdf8416869d1..e31f6be98b75b 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -123,9 +123,6 @@ void unregister_vlan_dev(struct net_device *dev, struct list_head *head)
+ 	}
+ 
+ 	vlan_vid_del(real_dev, vlan->vlan_proto, vlan_id);
+-
+-	/* Get rid of the vlan's reference to real_dev */
+-	dev_put(real_dev);
+ }
+ 
+ int vlan_check_real_dev(struct net_device *real_dev,
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index a0367b37512d8..c34685774b573 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -843,6 +843,9 @@ static void vlan_dev_free(struct net_device *dev)
+ 
+ 	free_percpu(vlan->vlan_pcpu_stats);
+ 	vlan->vlan_pcpu_stats = NULL;
++
++	/* Get rid of the vlan's reference to real_dev */
++	dev_put(vlan->real_dev);
+ }
+ 
+ void vlan_setup(struct net_device *dev)
+diff --git a/net/9p/client.c b/net/9p/client.c
+index b7b958f61fafe..8a24ac2ea239b 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -539,6 +539,8 @@ static int p9_check_errors(struct p9_client *c, struct p9_req_t *req)
+ 		kfree(ename);
+ 	} else {
+ 		err = p9pdu_readf(&req->rc, c->proto_version, "d", &ecode);
++		if (err)
++			goto out_err;
+ 		err = -ecode;
+ 
+ 		p9_debug(P9_DEBUG_9P, "<<< RLERROR (%d)\n", -ecode);
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index c99d65ef13b1e..160c016a5dfb9 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1508,6 +1508,9 @@ static void l2cap_sock_close_cb(struct l2cap_chan *chan)
+ {
+ 	struct sock *sk = chan->data;
+ 
++	if (!sk)
++		return;
++
+ 	l2cap_sock_kill(sk);
+ }
+ 
+@@ -1516,6 +1519,9 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
+ 	struct sock *sk = chan->data;
+ 	struct sock *parent;
+ 
++	if (!sk)
++		return;
++
+ 	BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
+ 
+ 	/* This callback can be called both for server (BT_LISTEN)
+@@ -1707,8 +1713,10 @@ static void l2cap_sock_destruct(struct sock *sk)
+ {
+ 	BT_DBG("sk %p", sk);
+ 
+-	if (l2cap_pi(sk)->chan)
++	if (l2cap_pi(sk)->chan) {
++		l2cap_pi(sk)->chan->data = NULL;
+ 		l2cap_chan_put(l2cap_pi(sk)->chan);
++	}
+ 
+ 	if (l2cap_pi(sk)->rx_busy_skb) {
+ 		kfree_skb(l2cap_pi(sk)->rx_busy_skb);
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 110cfd6aa2b77..2e251ac89da58 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -134,6 +134,7 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
+ 		return NULL;
+ 
+ 	spin_lock_init(&conn->lock);
++	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
+ 
+ 	hcon->sco_data = conn;
+ 	conn->hcon = hcon;
+@@ -197,11 +198,11 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+ 		sock_put(sk);
+-
+-		/* Ensure no more work items will run before freeing conn. */
+-		cancel_delayed_work_sync(&conn->timeout_work);
+ 	}
+ 
++	/* Ensure no more work items will run before freeing conn. */
++	cancel_delayed_work_sync(&conn->timeout_work);
++
+ 	hcon->sco_data = NULL;
+ 	kfree(conn);
+ }
+@@ -214,8 +215,6 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	sco_pi(sk)->conn = conn;
+ 	conn->sk = sk;
+ 
+-	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
+-
+ 	if (parent)
+ 		bt_accept_enqueue(parent, sk, true);
+ }
+@@ -281,7 +280,8 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ 	return err;
+ }
+ 
+-static int sco_send_frame(struct sock *sk, struct msghdr *msg, int len)
++static int sco_send_frame(struct sock *sk, void *buf, int len,
++			  unsigned int msg_flags)
+ {
+ 	struct sco_conn *conn = sco_pi(sk)->conn;
+ 	struct sk_buff *skb;
+@@ -293,15 +293,11 @@ static int sco_send_frame(struct sock *sk, struct msghdr *msg, int len)
+ 
+ 	BT_DBG("sk %p len %d", sk, len);
+ 
+-	skb = bt_skb_send_alloc(sk, len, msg->msg_flags & MSG_DONTWAIT, &err);
++	skb = bt_skb_send_alloc(sk, len, msg_flags & MSG_DONTWAIT, &err);
+ 	if (!skb)
+ 		return err;
+ 
+-	if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+-		kfree_skb(skb);
+-		return -EFAULT;
+-	}
+-
++	memcpy(skb_put(skb, len), buf, len);
+ 	hci_send_sco(conn->hcon, skb);
+ 
+ 	return len;
+@@ -726,6 +722,7 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			    size_t len)
+ {
+ 	struct sock *sk = sock->sk;
++	void *buf;
+ 	int err;
+ 
+ 	BT_DBG("sock %p, sk %p", sock, sk);
+@@ -737,14 +734,24 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	if (msg->msg_flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
++	buf = kmalloc(len, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	if (memcpy_from_msg(buf, msg, len)) {
++		kfree(buf);
++		return -EFAULT;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	if (sk->sk_state == BT_CONNECTED)
+-		err = sco_send_frame(sk, msg, len);
++		err = sco_send_frame(sk, buf, len, msg->msg_flags);
+ 	else
+ 		err = -ENOTCONN;
+ 
+ 	release_sock(sk);
++	kfree(buf);
+ 	return err;
+ }
+ 
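The sendmsg() rework above pulls memcpy_from_msg() out from under lock_sock(): copying from user space can fault and sleep, so the data is staged into a kernel buffer first and only the non-sleeping send runs under the lock. The ordering, sketched with pthreads (illustrative names and error values):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for memcpy_from_msg(): may "sleep", so it must not be
 * called with sk_lock held. */
static int copy_from_user_buf(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;
}

static int send_frame(const void *buf, size_t len)
{
	printf("sent %zu bytes: %s\n", len, (const char *)buf);
	return (int)len;
}

static int sendmsg_sketch(const char *umsg, size_t len)
{
	void *buf = malloc(len + 1);
	int err;

	if (!buf)
		return -12;                      /* -ENOMEM */

	if (copy_from_user_buf(buf, umsg, len + 1)) {
		free(buf);
		return -14;                      /* -EFAULT */
	}

	pthread_mutex_lock(&sk_lock);            /* lock_sock() */
	err = send_frame(buf, len);              /* no sleeping here */
	pthread_mutex_unlock(&sk_lock);

	free(buf);
	return err;
}

int main(void)
{
	return sendmsg_sketch("hello", 5) < 0;
}
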
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 7d3155283af93..f538fe3902da9 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -1594,11 +1594,13 @@ static inline int br_cfm_status_fill_info(struct sk_buff *skb,
+ 
+ static inline int br_cfm_mep_count(struct net_bridge *br, u32 *count)
+ {
++	*count = 0;
+ 	return -EOPNOTSUPP;
+ }
+ 
+ static inline int br_cfm_peer_mep_count(struct net_bridge *br, u32 *count)
+ {
++	*count = 0;
+ 	return -EOPNOTSUPP;
+ }
+ #endif
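Both stubs above gain *count = 0 so a caller built without CONFIG_BRIDGE_CFM never reads an uninitialized out-parameter, even if it neglects to check the -EOPNOTSUPP return. The pattern in isolation:

#include <stdio.h>

/* A stub should still give its out-parameter a defined value. */
static int mep_count_stub(unsigned int *count)
{
	*count = 0;           /* defined even though we report an error */
	return -95;           /* -EOPNOTSUPP, illustrative */
}

int main(void)
{
	unsigned int n;

	(void)mep_count_stub(&n);
	printf("%u\n", n);    /* 0, never garbage */
	return 0;
}
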
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 9bc55ecb37f9f..8452b0fbb78c9 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -75,6 +75,13 @@ static void j1939_can_recv(struct sk_buff *iskb, void *data)
+ 	skcb->addr.pgn = (cf->can_id >> 8) & J1939_PGN_MAX;
+ 	/* set default message type */
+ 	skcb->addr.type = J1939_TP;
++
++	if (!j1939_address_is_valid(skcb->addr.sa)) {
++		netdev_err_once(priv->ndev, "%s: sa is broadcast address, ignoring!\n",
++				__func__);
++		goto done;
++	}
++
+ 	if (j1939_pgn_is_pdu1(skcb->addr.pgn)) {
+ 		/* Type 1: with destination address */
+ 		skcb->addr.da = skcb->addr.pgn;
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index e59fbbffa31ce..fe35fdad35c9b 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -2065,6 +2065,12 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ 		break;
+ 
+ 	case J1939_ETP_CMD_ABORT: /* && J1939_TP_CMD_ABORT */
++		if (j1939_cb_is_broadcast(skcb)) {
++			netdev_err_once(priv->ndev, "%s: abort to broadcast (%02x), ignoring!\n",
++					__func__, skcb->addr.sa);
++			return;
++		}
++
+ 		if (j1939_tp_im_transmitter(skcb))
+ 			j1939_xtp_rx_abort(priv, skb, true);
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 9cb47618d4869..1d66548b6fc01 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3048,6 +3048,8 @@ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ 		if (dev->num_tc)
+ 			netif_setup_tc(dev, txq);
+ 
++		dev_qdisc_change_real_num_tx(dev, txq);
++
+ 		dev->real_num_tx_queues = txq;
+ 
+ 		if (disabling) {
+@@ -3995,7 +3997,8 @@ int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 	skb_reset_mac_header(skb);
+ 	__skb_pull(skb, skb_network_offset(skb));
+ 	skb->pkt_type = PACKET_LOOPBACK;
+-	skb->ip_summed = CHECKSUM_UNNECESSARY;
++	if (skb->ip_summed == CHECKSUM_NONE)
++		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	WARN_ON(!skb_dst(skb));
+ 	skb_dst_force(skb);
+ 	netif_rx_ni(skb);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d70187ce851bc..58ec74f012930 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -9655,22 +9655,46 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
+ static struct bpf_insn *bpf_convert_data_end_access(const struct bpf_insn *si,
+ 						    struct bpf_insn *insn)
+ {
+-	/* si->dst_reg = skb->data */
++	int reg;
++	int temp_reg_off = offsetof(struct sk_buff, cb) +
++			   offsetof(struct sk_skb_cb, temp_reg);
++
++	if (si->src_reg == si->dst_reg) {
++		/* We need an extra register, choose and save a register. */
++		reg = BPF_REG_9;
++		if (si->src_reg == reg || si->dst_reg == reg)
++			reg--;
++		if (si->src_reg == reg || si->dst_reg == reg)
++			reg--;
++		*insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, temp_reg_off);
++	} else {
++		reg = si->dst_reg;
++	}
++
++	/* reg = skb->data */
+ 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data),
+-			      si->dst_reg, si->src_reg,
++			      reg, si->src_reg,
+ 			      offsetof(struct sk_buff, data));
+ 	/* AX = skb->len */
+ 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, len),
+ 			      BPF_REG_AX, si->src_reg,
+ 			      offsetof(struct sk_buff, len));
+-	/* si->dst_reg = skb->data + skb->len */
+-	*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
++	/* reg = skb->data + skb->len */
++	*insn++ = BPF_ALU64_REG(BPF_ADD, reg, BPF_REG_AX);
+ 	/* AX = skb->data_len */
+ 	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data_len),
+ 			      BPF_REG_AX, si->src_reg,
+ 			      offsetof(struct sk_buff, data_len));
+-	/* si->dst_reg = skb->data + skb->len - skb->data_len */
+-	*insn++ = BPF_ALU64_REG(BPF_SUB, si->dst_reg, BPF_REG_AX);
++
++	/* reg = skb->data + skb->len - skb->data_len */
++	*insn++ = BPF_ALU64_REG(BPF_SUB, reg, BPF_REG_AX);
++
++	if (si->src_reg == si->dst_reg) {
++		/* Restore the saved register */
++		*insn++ = BPF_MOV64_REG(BPF_REG_AX, si->src_reg);
++		*insn++ = BPF_MOV64_REG(si->dst_reg, reg);
++		*insn++ = BPF_LDX_MEM(BPF_DW, reg, BPF_REG_AX, temp_reg_off);
++	}
+ 
+ 	return insn;
+ }
+@@ -9681,11 +9705,33 @@ static u32 sk_skb_convert_ctx_access(enum bpf_access_type type,
+ 				     struct bpf_prog *prog, u32 *target_size)
+ {
+ 	struct bpf_insn *insn = insn_buf;
++	int off;
+ 
+ 	switch (si->off) {
+ 	case offsetof(struct __sk_buff, data_end):
+ 		insn = bpf_convert_data_end_access(si, insn);
+ 		break;
++	case offsetof(struct __sk_buff, cb[0]) ...
++	     offsetofend(struct __sk_buff, cb[4]) - 1:
++		BUILD_BUG_ON(sizeof_field(struct sk_skb_cb, data) < 20);
++		BUILD_BUG_ON((offsetof(struct sk_buff, cb) +
++			      offsetof(struct sk_skb_cb, data)) %
++			     sizeof(__u64));
++
++		prog->cb_access = 1;
++		off  = si->off;
++		off -= offsetof(struct __sk_buff, cb[0]);
++		off += offsetof(struct sk_buff, cb);
++		off += offsetof(struct sk_skb_cb, data);
++		if (type == BPF_WRITE)
++			*insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
++					      si->src_reg, off);
++		else
++			*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
++					      si->src_reg, off);
++		break;
++
++
+ 	default:
+ 		return bpf_convert_ctx_access(type, si, insn_buf, prog,
+ 					      target_size);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 53e85c70c6e58..704832723ab87 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -379,7 +379,7 @@ EXPORT_SYMBOL(neigh_ifdown);
+ 
+ static struct neighbour *neigh_alloc(struct neigh_table *tbl,
+ 				     struct net_device *dev,
+-				     bool exempt_from_gc)
++				     u8 flags, bool exempt_from_gc)
+ {
+ 	struct neighbour *n = NULL;
+ 	unsigned long now = jiffies;
+@@ -412,6 +412,7 @@ do_alloc:
+ 	n->updated	  = n->used = now;
+ 	n->nud_state	  = NUD_NONE;
+ 	n->output	  = neigh_blackhole;
++	n->flags	  = flags;
+ 	seqlock_init(&n->hh.hh_lock);
+ 	n->parms	  = neigh_parms_clone(&tbl->parms);
+ 	timer_setup(&n->timer, neigh_timer_handler, 0);
+@@ -575,19 +576,18 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net,
+ }
+ EXPORT_SYMBOL(neigh_lookup_nodev);
+ 
+-static struct neighbour *___neigh_create(struct neigh_table *tbl,
+-					 const void *pkey,
+-					 struct net_device *dev,
+-					 bool exempt_from_gc, bool want_ref)
++static struct neighbour *
++___neigh_create(struct neigh_table *tbl, const void *pkey,
++		struct net_device *dev, u8 flags,
++		bool exempt_from_gc, bool want_ref)
+ {
+-	struct neighbour *n1, *rc, *n = neigh_alloc(tbl, dev, exempt_from_gc);
+-	u32 hash_val;
+-	unsigned int key_len = tbl->key_len;
+-	int error;
++	u32 hash_val, key_len = tbl->key_len;
++	struct neighbour *n1, *rc, *n;
+ 	struct neigh_hash_table *nht;
++	int error;
+ 
++	n = neigh_alloc(tbl, dev, flags, exempt_from_gc);
+ 	trace_neigh_create(tbl, dev, pkey, n, exempt_from_gc);
+-
+ 	if (!n) {
+ 		rc = ERR_PTR(-ENOBUFS);
+ 		goto out;
+@@ -674,7 +674,7 @@ out_neigh_release:
+ struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey,
+ 				 struct net_device *dev, bool want_ref)
+ {
+-	return ___neigh_create(tbl, pkey, dev, false, want_ref);
++	return ___neigh_create(tbl, pkey, dev, 0, false, want_ref);
+ }
+ EXPORT_SYMBOL(__neigh_create);
+ 
+@@ -1221,7 +1221,7 @@ static void neigh_update_hhs(struct neighbour *neigh)
+ 				lladdr instead of overriding it
+ 				if it is different.
+ 	NEIGH_UPDATE_F_ADMIN	means that the change is administrative.
+-
++	NEIGH_UPDATE_F_USE	means that the entry is user triggered.
+ 	NEIGH_UPDATE_F_OVERRIDE_ISROUTER allows to override existing
+ 				NTF_ROUTER flag.
+ 	NEIGH_UPDATE_F_ISROUTER	indicates if the neighbour is known as
+@@ -1259,6 +1259,12 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
+ 		goto out;
+ 
+ 	ext_learn_change = neigh_update_ext_learned(neigh, flags, &notify);
++	if (flags & NEIGH_UPDATE_F_USE) {
++		new = old & ~NUD_PERMANENT;
++		neigh->nud_state = new;
++		err = 0;
++		goto out;
++	}
+ 
+ 	if (!(new & NUD_VALID)) {
+ 		neigh_del_timer(neigh);
+@@ -1947,7 +1953,9 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 		exempt_from_gc = ndm->ndm_state & NUD_PERMANENT ||
+ 				 ndm->ndm_flags & NTF_EXT_LEARNED;
+-		neigh = ___neigh_create(tbl, dst, dev, exempt_from_gc, true);
++		neigh = ___neigh_create(tbl, dst, dev,
++					ndm->ndm_flags & NTF_EXT_LEARNED,
++					exempt_from_gc, true);
+ 		if (IS_ERR(neigh)) {
+ 			err = PTR_ERR(neigh);
+ 			goto out;
+@@ -1966,22 +1974,20 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	if (protocol)
+ 		neigh->protocol = protocol;
+-
+ 	if (ndm->ndm_flags & NTF_EXT_LEARNED)
+ 		flags |= NEIGH_UPDATE_F_EXT_LEARNED;
+-
+ 	if (ndm->ndm_flags & NTF_ROUTER)
+ 		flags |= NEIGH_UPDATE_F_ISROUTER;
++	if (ndm->ndm_flags & NTF_USE)
++		flags |= NEIGH_UPDATE_F_USE;
+ 
+-	if (ndm->ndm_flags & NTF_USE) {
++	err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
++			     NETLINK_CB(skb).portid, extack);
++	if (!err && ndm->ndm_flags & NTF_USE) {
+ 		neigh_event_send(neigh, NULL);
+ 		err = 0;
+-	} else
+-		err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
+-				     NETLINK_CB(skb).portid, extack);
+-
++	}
+ 	neigh_release(neigh);
+-
+ out:
+ 	return err;
+ }
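
The neigh_add() rework above stops treating NTF_USE as a shortcut around
__neigh_update(): the flag is now translated to the new NEIGH_UPDATE_F_USE,
the update runs first (clearing NUD_PERMANENT and returning early for such
entries), and only then does neigh_event_send() trigger the probe. A sketch
of the flag translation; the constants mirror the uapi headers of this era
but should be treated as illustrative values:

#include <stdint.h>
#include <stdio.h>

/* NTF_* come from <linux/neighbour.h>; NEIGH_UPDATE_F_USE is the new
 * internal flag added by this patch. Values shown for illustration. */
#define NTF_USE				0x01
#define NTF_EXT_LEARNED			0x10
#define NTF_ROUTER			0x80

#define NEIGH_UPDATE_F_USE		0x10000000
#define NEIGH_UPDATE_F_EXT_LEARNED	0x20000000
#define NEIGH_UPDATE_F_ISROUTER		0x40000000

static uint32_t ndm_to_update_flags(uint8_t ndm_flags)
{
	uint32_t flags = 0;

	if (ndm_flags & NTF_EXT_LEARNED)
		flags |= NEIGH_UPDATE_F_EXT_LEARNED;
	if (ndm_flags & NTF_ROUTER)
		flags |= NEIGH_UPDATE_F_ISROUTER;
	if (ndm_flags & NTF_USE)	/* no longer skips __neigh_update() */
		flags |= NEIGH_UPDATE_F_USE;
	return flags;
}

int main(void)
{
	printf("flags = %#x\n", ndm_to_update_flags(NTF_USE | NTF_ROUTER));
	return 0;
}
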
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index b2e49eb7001d6..dfa5ecff7f738 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -175,6 +175,14 @@ static int change_carrier(struct net_device *dev, unsigned long new_carrier)
+ static ssize_t carrier_store(struct device *dev, struct device_attribute *attr,
+ 			     const char *buf, size_t len)
+ {
++	struct net_device *netdev = to_net_dev(dev);
++
++	/* The check is also done in change_carrier; this helps returning early
++	 * without hitting the trylock/restart in netdev_store.
++	 */
++	if (!netdev->netdev_ops->ndo_change_carrier)
++		return -EOPNOTSUPP;
++
+ 	return netdev_store(dev, attr, buf, len, change_carrier);
+ }
+ 
+@@ -196,6 +204,12 @@ static ssize_t speed_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	int ret = -EINVAL;
+ 
++	/* The check is also done in __ethtool_get_link_ksettings; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->ethtool_ops->get_link_ksettings)
++		return ret;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -216,6 +230,12 @@ static ssize_t duplex_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	int ret = -EINVAL;
+ 
++	/* The check is also done in __ethtool_get_link_ksettings; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->ethtool_ops->get_link_ksettings)
++		return ret;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -468,6 +488,14 @@ static ssize_t proto_down_store(struct device *dev,
+ 				struct device_attribute *attr,
+ 				const char *buf, size_t len)
+ {
++	struct net_device *netdev = to_net_dev(dev);
++
++	/* The check is also done in change_proto_down; this helps returning
++	 * early without hitting the trylock/restart in netdev_store.
++	 */
++	if (!netdev->netdev_ops->ndo_change_proto_down)
++		return -EOPNOTSUPP;
++
+ 	return netdev_store(dev, attr, buf, len, change_proto_down);
+ }
+ NETDEVICE_SHOW_RW(proto_down, fmt_dec);
+@@ -478,6 +506,12 @@ static ssize_t phys_port_id_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The check is also done in dev_get_phys_port_id; this helps returning
++	 * early without hitting the trylock/restart below.
++	 */
++	if (!netdev->netdev_ops->ndo_get_phys_port_id)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -500,6 +534,13 @@ static ssize_t phys_port_name_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The checks are also done in dev_get_phys_port_name; this helps
++	 * returning early without hitting the trylock/restart below.
++	 */
++	if (!netdev->netdev_ops->ndo_get_phys_port_name &&
++	    !netdev->netdev_ops->ndo_get_devlink_port)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -522,6 +563,14 @@ static ssize_t phys_switch_id_show(struct device *dev,
+ 	struct net_device *netdev = to_net_dev(dev);
+ 	ssize_t ret = -EINVAL;
+ 
++	/* The checks are also done in dev_get_phys_port_name; this helps
++	 * returning early without hitting the trylock/restart below. This works
++	 * because recurse is false when calling dev_get_port_parent_id.
++	 */
++	if (!netdev->netdev_ops->ndo_get_port_parent_id &&
++	    !netdev->netdev_ops->ndo_get_devlink_port)
++		return -EOPNOTSUPP;
++
+ 	if (!rtnl_trylock())
+ 		return restart_syscall();
+ 
+@@ -1226,6 +1275,12 @@ static ssize_t tx_maxrate_store(struct netdev_queue *queue,
+ 	if (!capable(CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
++	/* The check is also done later; this helps returning early without
++	 * hitting the trylock/restart below.
++	 */
++	if (!dev->netdev_ops->ndo_set_tx_maxrate)
++		return -EOPNOTSUPP;
++
+ 	err = kstrtou32(buf, 10, &rate);
+ 	if (err < 0)
+ 		return err;
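
All of the net-sysfs hunks above apply one pattern: probe the capability
that would make the request fail *before* taking the rtnl lock via trylock,
so a sysfs read on unsupported hardware returns -EOPNOTSUPP immediately
instead of looping through restart_syscall() under contention. A userspace
sketch of the pattern with a pthread mutex standing in for the rtnl lock;
the ops structure and names are made up for illustration:

#include <errno.h>
#include <pthread.h>

struct dev_ops { int (*get_speed)(void); };

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

static int show_speed(const struct dev_ops *ops, int *speed)
{
	/* Cheap capability check first: no lock taken, no restart. */
	if (!ops->get_speed)
		return -EOPNOTSUPP;

	if (pthread_mutex_trylock(&cfg_lock))
		return -EAGAIN;		/* kernel: restart_syscall() */

	*speed = ops->get_speed();	/* callee re-checks the op anyway */
	pthread_mutex_unlock(&cfg_lock);
	return 0;
}

int main(void)
{
	struct dev_ops ops = { 0 };	/* device without a speed callback */
	int speed;

	return show_speed(&ops, &speed) == -EOPNOTSUPP ? 0 : 1;
}
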
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index 9b5a767eddd5f..073db7d0fa790 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -477,7 +477,9 @@ struct net *copy_net_ns(unsigned long flags,
+ 
+ 	if (rv < 0) {
+ put_userns:
++#ifdef CONFIG_KEYS
+ 		key_remove_domain(net->key_domain);
++#endif
+ 		put_user_ns(user_ns);
+ 		net_drop_ns(net);
+ dec_ucounts:
+@@ -609,7 +611,9 @@ static void cleanup_net(struct work_struct *work)
+ 	list_for_each_entry_safe(net, tmp, &net_exit_list, exit_list) {
+ 		list_del_init(&net->exit_list);
+ 		dec_net_namespaces(net->ucounts);
++#ifdef CONFIG_KEYS
+ 		key_remove_domain(net->key_domain);
++#endif
+ 		put_user_ns(net->user_ns);
+ 		net_drop_ns(net);
+ 	}
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index b49c57d35a88e..1a6a86693b745 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -71,11 +71,8 @@ static int update_classid_sock(const void *v, struct file *file, unsigned n)
+ 	struct update_classid_context *ctx = (void *)v;
+ 	struct socket *sock = sock_from_file(file);
+ 
+-	if (sock) {
+-		spin_lock(&cgroup_sk_update_lock);
++	if (sock)
+ 		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
+-		spin_unlock(&cgroup_sk_update_lock);
+-	}
+ 	if (--ctx->batch == 0) {
+ 		ctx->batch = UPDATE_CLASSID_BATCH;
+ 		return n + 1;
+@@ -121,8 +118,6 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 	struct css_task_iter it;
+ 	struct task_struct *p;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	cs->classid = (u32)value;
+ 
+ 	css_task_iter_start(css, 0, &it);
+diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
+index 99a431c56f230..8456dfbe2eb40 100644
+--- a/net/core/netprio_cgroup.c
++++ b/net/core/netprio_cgroup.c
+@@ -207,8 +207,6 @@ static ssize_t write_priomap(struct kernfs_open_file *of,
+ 	if (!dev)
+ 		return -ENODEV;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	rtnl_lock();
+ 
+ 	ret = netprio_set_prio(of_css(of), dev, prio);
+@@ -221,12 +219,10 @@ static ssize_t write_priomap(struct kernfs_open_file *of,
+ static int update_netprio(const void *v, struct file *file, unsigned n)
+ {
+ 	struct socket *sock = sock_from_file(file);
+-	if (sock) {
+-		spin_lock(&cgroup_sk_update_lock);
++
++	if (sock)
+ 		sock_cgroup_set_prioidx(&sock->sk->sk_cgrp_data,
+ 					(unsigned long)v);
+-		spin_unlock(&cgroup_sk_update_lock);
+-	}
+ 	return 0;
+ }
+ 
+@@ -235,8 +231,6 @@ static void net_prio_attach(struct cgroup_taskset *tset)
+ 	struct task_struct *p;
+ 	struct cgroup_subsys_state *css;
+ 
+-	cgroup_sk_alloc_disable();
+-
+ 	cgroup_taskset_for_each(p, css, tset) {
+ 		void *v = (void *)(unsigned long)css->id;
+ 
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 2d6249b289284..9701a1404ccb2 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -494,6 +494,7 @@ static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
+ }
+ 
+ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
++					u32 off, u32 len,
+ 					struct sk_psock *psock,
+ 					struct sock *sk,
+ 					struct sk_msg *msg)
+@@ -507,11 +508,11 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	 */
+ 	if (skb_linearize(skb))
+ 		return -EAGAIN;
+-	num_sge = skb_to_sgvec(skb, msg->sg.data, 0, skb->len);
++	num_sge = skb_to_sgvec(skb, msg->sg.data, off, len);
+ 	if (unlikely(num_sge < 0))
+ 		return num_sge;
+ 
+-	copied = skb->len;
++	copied = len;
+ 	msg->sg.start = 0;
+ 	msg->sg.size = copied;
+ 	msg->sg.end = num_sge;
+@@ -522,9 +523,11 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	return copied;
+ }
+ 
+-static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb);
++static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
++				     u32 off, u32 len);
+ 
+-static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
++static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
++				u32 off, u32 len)
+ {
+ 	struct sock *sk = psock->sk;
+ 	struct sk_msg *msg;
+@@ -535,7 +538,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ 	 * correctly.
+ 	 */
+ 	if (unlikely(skb->sk == sk))
+-		return sk_psock_skb_ingress_self(psock, skb);
++		return sk_psock_skb_ingress_self(psock, skb, off, len);
+ 	msg = sk_psock_create_ingress_msg(sk, skb);
+ 	if (!msg)
+ 		return -EAGAIN;
+@@ -547,7 +550,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ 	 * into user buffers.
+ 	 */
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -557,7 +560,8 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+  * skb. In this case we do not need to check memory limits or skb_set_owner_r
+  * because the skb is already accounted for here.
+  */
+-static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb)
++static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
++				     u32 off, u32 len)
+ {
+ 	struct sk_msg *msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_ATOMIC);
+ 	struct sock *sk = psock->sk;
+@@ -567,7 +571,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ 		return -EAGAIN;
+ 	sk_msg_init(msg);
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -581,7 +585,7 @@ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+ 			return -EAGAIN;
+ 		return skb_send_sock(psock->sk, skb, off, len);
+ 	}
+-	return sk_psock_skb_ingress(psock, skb);
++	return sk_psock_skb_ingress(psock, skb, off, len);
+ }
+ 
+ static void sk_psock_skb_state(struct sk_psock *psock,
+@@ -624,6 +628,12 @@ static void sk_psock_backlog(struct work_struct *work)
+ 	while ((skb = skb_dequeue(&psock->ingress_skb))) {
+ 		len = skb->len;
+ 		off = 0;
++		if (skb_bpf_strparser(skb)) {
++			struct strp_msg *stm = strp_msg(skb);
++
++			off = stm->offset;
++			len = stm->full_len;
++		}
+ start:
+ 		ingress = skb_bpf_ingress(skb);
+ 		skb_bpf_redirect_clear(skb);
+@@ -863,6 +873,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
+ 	 * return code, but then didn't set a redirect interface.
+ 	 */
+ 	if (unlikely(!sk_other)) {
++		skb_bpf_redirect_clear(skb);
+ 		sock_drop(from->sk, skb);
+ 		return -EIO;
+ 	}
+@@ -930,6 +941,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ {
+ 	struct sock *sk_other;
+ 	int err = 0;
++	u32 len, off;
+ 
+ 	switch (verdict) {
+ 	case __SK_PASS:
+@@ -937,6 +949,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 		sk_other = psock->sk;
+ 		if (sock_flag(sk_other, SOCK_DEAD) ||
+ 		    !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
++			skb_bpf_redirect_clear(skb);
+ 			goto out_free;
+ 		}
+ 
+@@ -949,7 +962,15 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 		 * retrying later from workqueue.
+ 		 */
+ 		if (skb_queue_empty(&psock->ingress_skb)) {
+-			err = sk_psock_skb_ingress_self(psock, skb);
++			len = skb->len;
++			off = 0;
++			if (skb_bpf_strparser(skb)) {
++				struct strp_msg *stm = strp_msg(skb);
++
++				off = stm->offset;
++				len = stm->full_len;
++			}
++			err = sk_psock_skb_ingress_self(psock, skb, off, len);
+ 		}
+ 		if (err < 0) {
+ 			spin_lock_bh(&psock->ingress_lock);
+@@ -1015,6 +1036,8 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ 		skb_dst_drop(skb);
+ 		skb_bpf_redirect_clear(skb);
+ 		ret = bpf_prog_run_pin_on_cpu(prog, skb);
++		if (ret == SK_PASS)
++			skb_bpf_set_strparser(skb);
+ 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
+ 		skb->sk = NULL;
+ 	}
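
The skmsg changes above thread an (offset, length) pair through the ingress
path because an skb queued by the stream parser carries its real message
bounds in strp_msg, and copying [0, skb->len) would re-deliver bytes the
parser already consumed. A self-contained sketch of the bounds selection;
the types are simplified stand-ins for the kernel's skb and control block:

#include <stdbool.h>
#include <stdio.h>

struct strp_msg { unsigned int offset, full_len; };
struct fake_skb { unsigned int len; bool from_strparser; struct strp_msg stm; };

static void ingress_bounds(const struct fake_skb *skb,
			   unsigned int *off, unsigned int *len)
{
	*off = 0;
	*len = skb->len;
	if (skb->from_strparser) {	/* skb_bpf_strparser() in the kernel */
		*off = skb->stm.offset;
		*len = skb->stm.full_len;
	}
}

int main(void)
{
	struct fake_skb skb = { .len = 1500, .from_strparser = true,
				.stm = { .offset = 66, .full_len = 1200 } };
	unsigned int off, len;

	ingress_bounds(&skb, &off, &len);
	printf("copy %u bytes from offset %u\n", len, off);
	return 0;
}
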
+diff --git a/net/core/stream.c b/net/core/stream.c
+index 4f1d4aa5fb38d..a166a32b411fa 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -195,9 +195,6 @@ void sk_stream_kill_queues(struct sock *sk)
+ 	/* First the read buffer. */
+ 	__skb_queue_purge(&sk->sk_receive_queue);
+ 
+-	/* Next, the error queue. */
+-	__skb_queue_purge(&sk->sk_error_queue);
+-
+ 	/* Next, the write queue. */
+ 	WARN_ON(!skb_queue_empty(&sk->sk_write_queue));
+ 
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index c8496c1142c9d..5f88526ad61cc 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -419,7 +419,7 @@ static struct ctl_table net_core_table[] = {
+ 		.mode		= 0600,
+ 		.proc_handler	= proc_dolongvec_minmax_bpf_restricted,
+ 		.extra1		= &long_one,
+-		.extra2		= &long_max,
++		.extra2		= &bpf_jit_limit_max,
+ 	},
+ #endif
+ 	{
+diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
+index c5c1d2b8045e8..5183e627468d8 100644
+--- a/net/dccp/dccp.h
++++ b/net/dccp/dccp.h
+@@ -48,7 +48,7 @@ extern bool dccp_debug;
+ 
+ extern struct inet_hashinfo dccp_hashinfo;
+ 
+-extern struct percpu_counter dccp_orphan_count;
++DECLARE_PER_CPU(unsigned int, dccp_orphan_count);
+ 
+ void dccp_time_wait(struct sock *sk, int state, int timeo);
+ 
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 7eb0fb2319407..40e9c61bd14c2 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -42,8 +42,8 @@ DEFINE_SNMP_STAT(struct dccp_mib, dccp_statistics) __read_mostly;
+ 
+ EXPORT_SYMBOL_GPL(dccp_statistics);
+ 
+-struct percpu_counter dccp_orphan_count;
+-EXPORT_SYMBOL_GPL(dccp_orphan_count);
++DEFINE_PER_CPU(unsigned int, dccp_orphan_count);
++EXPORT_PER_CPU_SYMBOL_GPL(dccp_orphan_count);
+ 
+ struct inet_hashinfo dccp_hashinfo;
+ EXPORT_SYMBOL_GPL(dccp_hashinfo);
+@@ -1055,7 +1055,7 @@ adjudge_to_death:
+ 	bh_lock_sock(sk);
+ 	WARN_ON(sock_owned_by_user(sk));
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(dccp_orphan_count);
+ 
+ 	/* Have we already been destroyed by a softirq or backlog? */
+ 	if (state != DCCP_CLOSED && sk->sk_state == DCCP_CLOSED)
+@@ -1115,13 +1115,10 @@ static int __init dccp_init(void)
+ 
+ 	BUILD_BUG_ON(sizeof(struct dccp_skb_cb) >
+ 		     sizeof_field(struct sk_buff, cb));
+-	rc = percpu_counter_init(&dccp_orphan_count, 0, GFP_KERNEL);
+-	if (rc)
+-		goto out_fail;
+ 	inet_hashinfo_init(&dccp_hashinfo);
+ 	rc = inet_hashinfo2_init_mod(&dccp_hashinfo);
+ 	if (rc)
+-		goto out_free_percpu;
++		goto out_fail;
+ 	rc = -ENOBUFS;
+ 	dccp_hashinfo.bind_bucket_cachep =
+ 		kmem_cache_create("dccp_bind_bucket",
+@@ -1226,8 +1223,6 @@ out_free_bind_bucket_cachep:
+ 	kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
+ out_free_hashinfo2:
+ 	inet_hashinfo2_free_mod(&dccp_hashinfo);
+-out_free_percpu:
+-	percpu_counter_destroy(&dccp_orphan_count);
+ out_fail:
+ 	dccp_hashinfo.bhash = NULL;
+ 	dccp_hashinfo.ehash = NULL;
+@@ -1250,7 +1245,6 @@ static void __exit dccp_fini(void)
+ 	dccp_ackvec_exit();
+ 	dccp_sysctl_exit();
+ 	inet_hashinfo2_free_mod(&dccp_hashinfo);
+-	percpu_counter_destroy(&dccp_orphan_count);
+ }
+ 
+ module_init(dccp_init);
+diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
+index 970906eb5b2cd..3d3015146f24f 100644
+--- a/net/dsa/Kconfig
++++ b/net/dsa/Kconfig
+@@ -101,8 +101,6 @@ config NET_DSA_TAG_RTL4_A
+ 
+ config NET_DSA_TAG_OCELOT
+ 	tristate "Tag driver for Ocelot family of switches, using NPI port"
+-	depends on MSCC_OCELOT_SWITCH_LIB || \
+-		   (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
+ 	select PACKING
+ 	help
+ 	  Say Y or M if you want to enable NPI tagging for the Ocelot switches
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index 9ef9125713321..41f62c3ab9671 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -243,7 +243,7 @@ static int dsa_switch_do_mdb_del(struct dsa_switch *ds, int port,
+ 
+ 	err = ds->ops->port_mdb_del(ds, port, mdb);
+ 	if (err) {
+-		refcount_inc(&a->refcount);
++		refcount_set(&a->refcount, 1);
+ 		return err;
+ 	}
+ 
+@@ -308,7 +308,7 @@ static int dsa_switch_do_fdb_del(struct dsa_switch *ds, int port,
+ 
+ 	err = ds->ops->port_fdb_del(ds, port, addr, vid);
+ 	if (err) {
+-		refcount_inc(&a->refcount);
++		refcount_set(&a->refcount, 1);
+ 		return err;
+ 	}
+ 
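
The two DSA hunks above fix an error-rollback detail: by the time
->port_mdb_del/->port_fdb_del fails, the entry's refcount has already been
dropped to zero, and refcount_inc() on a zero refcount_t deliberately warns
and saturates (it looks like use-after-free), so resurrecting the entry
needs refcount_set(&a->refcount, 1). A userspace sketch with atomic_uint
standing in for refcount_t:

#include <stdatomic.h>
#include <stdio.h>

static void refcount_resurrect(atomic_uint *r)
{
	/* refcount_inc(r) would be wrong here: the count is 0. */
	atomic_store(r, 1);		/* refcount_set(r, 1) */
}

int main(void)
{
	atomic_uint refcount = 1;

	if (atomic_fetch_sub(&refcount, 1) == 1) {	/* dec_and_test */
		/* hardware delete failed: keep the entry alive */
		refcount_resurrect(&refcount);
	}
	printf("refcount = %u\n", atomic_load(&refcount));	/* 1 */
	return 0;
}
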
+diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c
+index 190f4bfd3bef6..028c1350ce530 100644
+--- a/net/dsa/tag_ocelot.c
++++ b/net/dsa/tag_ocelot.c
+@@ -2,7 +2,6 @@
+ /* Copyright 2019 NXP Semiconductors
+  */
+ #include <linux/dsa/ocelot.h>
+-#include <soc/mscc/ocelot.h>
+ #include "dsa_priv.h"
+ 
+ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
+@@ -64,6 +63,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
+ 	struct dsa_port *dp;
+ 	u8 *extraction;
+ 	u16 vlan_tpid;
++	u64 rew_val;
+ 
+ 	/* Revert skb->data by the amount consumed by the DSA master,
+ 	 * so it points to the beginning of the frame.
+@@ -93,6 +93,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
+ 	ocelot_xfh_get_qos_class(extraction, &qos_class);
+ 	ocelot_xfh_get_tag_type(extraction, &tag_type);
+ 	ocelot_xfh_get_vlan_tci(extraction, &vlan_tci);
++	ocelot_xfh_get_rew_val(extraction, &rew_val);
+ 
+ 	skb->dev = dsa_master_find_slave(netdev, 0, src_port);
+ 	if (!skb->dev)
+@@ -106,6 +107,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
+ 
+ 	skb->offload_fwd_mark = 1;
+ 	skb->priority = qos_class;
++	OCELOT_SKB_CB(skb)->tstamp_lo = rew_val;
+ 
+ 	/* Ocelot switches copy frames unmodified to the CPU. However, it is
+ 	 * possible for the user to request a VLAN modification through
+diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c
+index 85ac85c3af8c0..b4cd6842b69a2 100644
+--- a/net/dsa/tag_ocelot_8021q.c
++++ b/net/dsa/tag_ocelot_8021q.c
+@@ -9,6 +9,7 @@
+  *   that on egress
+  */
+ #include <linux/dsa/8021q.h>
++#include <linux/dsa/ocelot.h>
+ #include <soc/mscc/ocelot.h>
+ #include <soc/mscc/ocelot_ptp.h>
+ #include "dsa_priv.h"
+diff --git a/net/ethtool/pause.c b/net/ethtool/pause.c
+index 9009f412151e7..ee1e5806bc93a 100644
+--- a/net/ethtool/pause.c
++++ b/net/ethtool/pause.c
+@@ -56,8 +56,7 @@ static int pause_reply_size(const struct ethnl_req_info *req_base,
+ 
+ 	if (req_base->flags & ETHTOOL_FLAG_STATS)
+ 		n += nla_total_size(0) +	/* _PAUSE_STATS */
+-			nla_total_size_64bit(sizeof(u64)) *
+-				(ETHTOOL_A_PAUSE_STAT_MAX - 2);
++		     nla_total_size_64bit(sizeof(u64)) * ETHTOOL_PAUSE_STAT_CNT;
+ 	return n;
+ }
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 754013fa393bb..e0f9ff4807bbb 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1014,7 +1014,7 @@ void inet_csk_destroy_sock(struct sock *sk)
+ 
+ 	sk_refcnt_debug_release(sk);
+ 
+-	percpu_counter_dec(sk->sk_prot->orphan_count);
++	this_cpu_dec(*sk->sk_prot->orphan_count);
+ 
+ 	sock_put(sk);
+ }
+@@ -1073,7 +1073,7 @@ static void inet_child_forget(struct sock *sk, struct request_sock *req,
+ 
+ 	sock_orphan(child);
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(*sk->sk_prot->orphan_count);
+ 
+ 	if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(req)->tfo_listener) {
+ 		BUG_ON(rcu_access_pointer(tcp_sk(child)->fastopen_rsk) != req);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index bfb522e513461..75737267746f8 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -598,7 +598,7 @@ bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, bool *found_dup_sk)
+ 	if (ok) {
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ 	} else {
+-		percpu_counter_inc(sk->sk_prot->orphan_count);
++		this_cpu_inc(*sk->sk_prot->orphan_count);
+ 		inet_sk_set_state(sk, TCP_CLOSE);
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 		inet_csk_destroy_sock(sk);
+diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
+index b0d3a09dc84e7..f30273afb5399 100644
+--- a/net/ipv4/proc.c
++++ b/net/ipv4/proc.c
+@@ -53,7 +53,7 @@ static int sockstat_seq_show(struct seq_file *seq, void *v)
+ 	struct net *net = seq->private;
+ 	int orphans, sockets;
+ 
+-	orphans = percpu_counter_sum_positive(&tcp_orphan_count);
++	orphans = tcp_orphan_count_sum();
+ 	sockets = proto_sockets_allocated_sum_positive(&tcp_prot);
+ 
+ 	socket_seq_show(seq);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 8cb44040ec68b..d681404c8bfb0 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -287,8 +287,8 @@ enum {
+ 	TCP_CMSG_TS = 2
+ };
+ 
+-struct percpu_counter tcp_orphan_count;
+-EXPORT_SYMBOL_GPL(tcp_orphan_count);
++DEFINE_PER_CPU(unsigned int, tcp_orphan_count);
++EXPORT_PER_CPU_SYMBOL_GPL(tcp_orphan_count);
+ 
+ long sysctl_tcp_mem[3] __read_mostly;
+ EXPORT_SYMBOL(sysctl_tcp_mem);
+@@ -955,7 +955,7 @@ int tcp_send_mss(struct sock *sk, int *size_goal, int flags)
+  */
+ void tcp_remove_empty_skb(struct sock *sk, struct sk_buff *skb)
+ {
+-	if (skb && !skb->len) {
++	if (skb && TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
+ 		tcp_unlink_write_queue(skb, sk);
+ 		if (tcp_write_queue_empty(sk))
+ 			tcp_chrono_stop(sk, TCP_CHRONO_BUSY);
+@@ -2690,11 +2690,36 @@ void tcp_shutdown(struct sock *sk, int how)
+ }
+ EXPORT_SYMBOL(tcp_shutdown);
+ 
++int tcp_orphan_count_sum(void)
++{
++	int i, total = 0;
++
++	for_each_possible_cpu(i)
++		total += per_cpu(tcp_orphan_count, i);
++
++	return max(total, 0);
++}
++
++static int tcp_orphan_cache;
++static struct timer_list tcp_orphan_timer;
++#define TCP_ORPHAN_TIMER_PERIOD msecs_to_jiffies(100)
++
++static void tcp_orphan_update(struct timer_list *unused)
++{
++	WRITE_ONCE(tcp_orphan_cache, tcp_orphan_count_sum());
++	mod_timer(&tcp_orphan_timer, jiffies + TCP_ORPHAN_TIMER_PERIOD);
++}
++
++static bool tcp_too_many_orphans(int shift)
++{
++	return READ_ONCE(tcp_orphan_cache) << shift > sysctl_tcp_max_orphans;
++}
++
+ bool tcp_check_oom(struct sock *sk, int shift)
+ {
+ 	bool too_many_orphans, out_of_socket_memory;
+ 
+-	too_many_orphans = tcp_too_many_orphans(sk, shift);
++	too_many_orphans = tcp_too_many_orphans(shift);
+ 	out_of_socket_memory = tcp_out_of_memory(sk);
+ 
+ 	if (too_many_orphans)
+@@ -2803,7 +2828,7 @@ adjudge_to_death:
+ 	/* remove backlog if any, without releasing ownership. */
+ 	__release_sock(sk);
+ 
+-	percpu_counter_inc(sk->sk_prot->orphan_count);
++	this_cpu_inc(tcp_orphan_count);
+ 
+ 	/* Have we already been destroyed by a softirq or backlog? */
+ 	if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
+@@ -4504,7 +4529,10 @@ void __init tcp_init(void)
+ 		     sizeof_field(struct sk_buff, cb));
+ 
+ 	percpu_counter_init(&tcp_sockets_allocated, 0, GFP_KERNEL);
+-	percpu_counter_init(&tcp_orphan_count, 0, GFP_KERNEL);
++
++	timer_setup(&tcp_orphan_timer, tcp_orphan_update, TIMER_DEFERRABLE);
++	mod_timer(&tcp_orphan_timer, jiffies + TCP_ORPHAN_TIMER_PERIOD);
++
+ 	inet_hashinfo_init(&tcp_hashinfo);
+ 	inet_hashinfo2_init(&tcp_hashinfo, "tcp_listen_portaddr_hash",
+ 			    thash_entries, 21,  /* one slot per 2 MB*/
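
The tcp_orphan_count conversion above trades a percpu_counter for plain
per-CPU integers plus a deferrable timer (tcp_orphan_update) that refreshes
a cached sum every 100ms: the hot close path does a cheap this_cpu_inc()
and the OOM check reads the cached value. A userspace sketch of the summing
rule; NR_CPUS and the array stand in for DEFINE_PER_CPU state:

#include <stdio.h>

#define NR_CPUS 4
static int orphan_count[NR_CPUS];

/* Individual slots may go transiently negative when a socket is
 * orphaned on one CPU and destroyed on another, so the total is
 * clamped at zero, as in the kernel's max(total, 0). */
static int orphan_count_sum(void)
{
	int i, total = 0;

	for (i = 0; i < NR_CPUS; i++)
		total += orphan_count[i];

	return total > 0 ? total : 0;
}

int main(void)
{
	orphan_count[0] = 3;	/* three sockets orphaned on CPU 0 */
	orphan_count[2] = -1;	/* one of them destroyed on CPU 2 */
	printf("orphans: %d\n", orphan_count_sum());	/* prints 2 */
	return 0;
}
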
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 9d068153c3168..04886df62b775 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -185,6 +185,41 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
+ 	return ret;
+ }
+ 
++static int tcp_bpf_recvmsg_parser(struct sock *sk,
++				  struct msghdr *msg,
++				  size_t len,
++				  int nonblock,
++				  int flags,
++				  int *addr_len)
++{
++	struct sk_psock *psock;
++	int copied;
++
++	if (unlikely(flags & MSG_ERRQUEUE))
++		return inet_recv_error(sk, msg, len, addr_len);
++
++	psock = sk_psock_get(sk);
++	if (unlikely(!psock))
++		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
++
++	lock_sock(sk);
++msg_bytes_ready:
++	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
++	if (!copied) {
++		long timeo;
++		int data;
++
++		timeo = sock_rcvtimeo(sk, nonblock);
++		data = tcp_msg_wait_data(sk, psock, timeo);
++		if (data && !sk_psock_queue_empty(psock))
++			goto msg_bytes_ready;
++		copied = -EAGAIN;
++	}
++	release_sock(sk);
++	sk_psock_put(sk, psock);
++	return copied;
++}
++
+ static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		    int nonblock, int flags, int *addr_len)
+ {
+@@ -477,6 +512,8 @@ enum {
+ enum {
+ 	TCP_BPF_BASE,
+ 	TCP_BPF_TX,
++	TCP_BPF_RX,
++	TCP_BPF_TXRX,
+ 	TCP_BPF_NUM_CFGS,
+ };
+ 
+@@ -488,7 +525,6 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
+ 				   struct proto *base)
+ {
+ 	prot[TCP_BPF_BASE]			= *base;
+-	prot[TCP_BPF_BASE].unhash		= sock_map_unhash;
+ 	prot[TCP_BPF_BASE].close		= sock_map_close;
+ 	prot[TCP_BPF_BASE].recvmsg		= tcp_bpf_recvmsg;
+ 	prot[TCP_BPF_BASE].stream_memory_read	= tcp_bpf_stream_read;
+@@ -496,6 +532,12 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
+ 	prot[TCP_BPF_TX]			= prot[TCP_BPF_BASE];
+ 	prot[TCP_BPF_TX].sendmsg		= tcp_bpf_sendmsg;
+ 	prot[TCP_BPF_TX].sendpage		= tcp_bpf_sendpage;
++
++	prot[TCP_BPF_RX]			= prot[TCP_BPF_BASE];
++	prot[TCP_BPF_RX].recvmsg		= tcp_bpf_recvmsg_parser;
++
++	prot[TCP_BPF_TXRX]			= prot[TCP_BPF_TX];
++	prot[TCP_BPF_TXRX].recvmsg		= tcp_bpf_recvmsg_parser;
+ }
+ 
+ static void tcp_bpf_check_v6_needs_rebuild(struct proto *ops)
+@@ -533,6 +575,10 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+ 	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+ 	int config = psock->progs.msg_parser   ? TCP_BPF_TX   : TCP_BPF_BASE;
+ 
++	if (psock->progs.stream_verdict || psock->progs.skb_verdict) {
++		config = (config == TCP_BPF_TX) ? TCP_BPF_TXRX : TCP_BPF_RX;
++	}
++
+ 	if (restore) {
+ 		if (inet_csk_has_ulp(sk)) {
+ 			/* TLS does not have an unhash proto in SW cases,
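
The tcp_bpf table above grows RX and TXRX variants so that any socket with
a stream or skb verdict program gets the parser-aware
tcp_bpf_recvmsg_parser(). The selection rule is small enough to restate
standalone; struct progs is a simplified stand-in for psock->progs:

#include <stdio.h>

enum { TCP_BPF_BASE, TCP_BPF_TX, TCP_BPF_RX, TCP_BPF_TXRX };

struct progs { int msg_parser, stream_verdict, skb_verdict; };

static int tcp_bpf_config(const struct progs *p)
{
	int config = p->msg_parser ? TCP_BPF_TX : TCP_BPF_BASE;

	/* Any verdict program upgrades the receive side. */
	if (p->stream_verdict || p->skb_verdict)
		config = (config == TCP_BPF_TX) ? TCP_BPF_TXRX : TCP_BPF_RX;
	return config;
}

int main(void)
{
	struct progs p = { .msg_parser = 1, .stream_verdict = 1 };

	printf("config = %d\n", tcp_bpf_config(&p));	/* 3 == TCP_BPF_TXRX */
	return 0;
}
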
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3bf685fe64b96..eb745213561c7 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3100,6 +3100,9 @@ static void sit_add_v4_addrs(struct inet6_dev *idev)
+ 	memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
+ 
+ 	if (idev->dev->flags&IFF_POINTOPOINT) {
++		if (idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_NONE)
++			return;
++
+ 		addr.s6_addr32[0] = htonl(0xfe800000);
+ 		scope = IFA_LINK;
+ 		plen = 64;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index ba77955d75fbd..6a990afc2eee7 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1435,7 +1435,6 @@ do_udp_sendmsg:
+ 	if (!fl6.flowi6_oif)
+ 		fl6.flowi6_oif = np->sticky_pktinfo.ipi6_ifindex;
+ 
+-	fl6.flowi6_mark = ipc6.sockc.mark;
+ 	fl6.flowi6_uid = sk->sk_uid;
+ 
+ 	if (msg->msg_controllen) {
+@@ -1471,6 +1470,7 @@ do_udp_sendmsg:
+ 	ipc6.opt = opt;
+ 
+ 	fl6.flowi6_proto = sk->sk_protocol;
++	fl6.flowi6_mark = ipc6.sockc.mark;
+ 	fl6.daddr = *daddr;
+ 	if (ipv6_addr_any(&fl6.saddr) && !ipv6_addr_any(&np->saddr))
+ 		fl6.saddr = np->saddr;
+diff --git a/net/netfilter/nf_conntrack_proto_udp.c b/net/netfilter/nf_conntrack_proto_udp.c
+index f8e3c0d2602f6..3b516cffc779b 100644
+--- a/net/netfilter/nf_conntrack_proto_udp.c
++++ b/net/netfilter/nf_conntrack_proto_udp.c
+@@ -104,10 +104,13 @@ int nf_conntrack_udp_packet(struct nf_conn *ct,
+ 	 */
+ 	if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status)) {
+ 		unsigned long extra = timeouts[UDP_CT_UNREPLIED];
++		bool stream = false;
+ 
+ 		/* Still active after two seconds? Extend timeout. */
+-		if (time_after(jiffies, ct->proto.udp.stream_ts))
++		if (time_after(jiffies, ct->proto.udp.stream_ts)) {
+ 			extra = timeouts[UDP_CT_REPLIED];
++			stream = true;
++		}
+ 
+ 		nf_ct_refresh_acct(ct, ctinfo, skb, extra);
+ 
+@@ -116,7 +119,7 @@ int nf_conntrack_udp_packet(struct nf_conn *ct,
+ 			return NF_ACCEPT;
+ 
+ 		/* Also, more likely to be important, and not a probe */
+-		if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
++		if (stream && !test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
+ 			nf_conntrack_event_cache(IPCT_ASSURED, ct);
+ 	} else {
+ 		nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[UDP_CT_UNREPLIED]);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index f774de0fc24f8..cd8da91fa3fe4 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -560,7 +560,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 		goto nla_put_failure;
+ 
+ 	if (indev && entskb->dev &&
+-	    entskb->mac_header != entskb->network_header) {
++	    skb_mac_header_was_set(entskb)) {
+ 		struct nfqnl_msg_packet_hw phw;
+ 		int len;
+ 
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 6ba3256fa8449..87f3af4645d9c 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -198,17 +198,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx,
+ 		return -EBUSY;
+ 
+ 	priv->op = ntohl(nla_get_be32(tb[NFTA_DYNSET_OP]));
+-	switch (priv->op) {
+-	case NFT_DYNSET_OP_ADD:
+-	case NFT_DYNSET_OP_DELETE:
+-		break;
+-	case NFT_DYNSET_OP_UPDATE:
+-		if (!(set->flags & NFT_SET_TIMEOUT))
+-			return -EOPNOTSUPP;
+-		break;
+-	default:
++	if (priv->op > NFT_DYNSET_OP_DELETE)
+ 		return -EOPNOTSUPP;
+-	}
+ 
+ 	timeout = 0;
+ 	if (tb[NFTA_DYNSET_TIMEOUT] != NULL) {
+diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c
+index 4e565eeab4260..be61d6f5be8d1 100644
+--- a/net/rxrpc/rtt.c
++++ b/net/rxrpc/rtt.c
+@@ -22,7 +22,7 @@ static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer)
+ 
+ static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer)
+ {
+-	return _usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
++	return usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
+ }
+ 
+ static u32 rxrpc_bound_rto(u32 rto)
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index a8dd06c74e318..66d2fbe9ef501 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -1330,6 +1330,15 @@ static int qdisc_change_tx_queue_len(struct net_device *dev,
+ 	return 0;
+ }
+ 
++void dev_qdisc_change_real_num_tx(struct net_device *dev,
++				  unsigned int new_real_tx)
++{
++	struct Qdisc *qdisc = dev->qdisc;
++
++	if (qdisc->ops->change_real_num_tx)
++		qdisc->ops->change_real_num_tx(qdisc, new_real_tx);
++}
++
+ int dev_qdisc_change_tx_queue_len(struct net_device *dev)
+ {
+ 	bool up = dev->flags & IFF_UP;
+diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
+index e79f1afe0cfd6..db18d8a860f9c 100644
+--- a/net/sched/sch_mq.c
++++ b/net/sched/sch_mq.c
+@@ -125,6 +125,29 @@ static void mq_attach(struct Qdisc *sch)
+ 	priv->qdiscs = NULL;
+ }
+ 
++static void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx)
++{
++#ifdef CONFIG_NET_SCHED
++	struct net_device *dev = qdisc_dev(sch);
++	struct Qdisc *qdisc;
++	unsigned int i;
++
++	for (i = new_real_tx; i < dev->real_num_tx_queues; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		/* Only update the default qdiscs we created,
++		 * qdiscs with handles are always hashed.
++		 */
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_del(qdisc);
++	}
++	for (i = dev->real_num_tx_queues; i < new_real_tx; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_add(qdisc, false);
++	}
++#endif
++}
++
+ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
+ {
+ 	struct net_device *dev = qdisc_dev(sch);
+@@ -288,6 +311,7 @@ struct Qdisc_ops mq_qdisc_ops __read_mostly = {
+ 	.init		= mq_init,
+ 	.destroy	= mq_destroy,
+ 	.attach		= mq_attach,
++	.change_real_num_tx = mq_change_real_num_tx,
+ 	.dump		= mq_dump,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 5eb3b1b7ae5e7..50e15add6068f 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -306,6 +306,28 @@ static void mqprio_attach(struct Qdisc *sch)
+ 	priv->qdiscs = NULL;
+ }
+ 
++static void mqprio_change_real_num_tx(struct Qdisc *sch,
++				      unsigned int new_real_tx)
++{
++	struct net_device *dev = qdisc_dev(sch);
++	struct Qdisc *qdisc;
++	unsigned int i;
++
++	for (i = new_real_tx; i < dev->real_num_tx_queues; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		/* Only update the default qdiscs we created,
++		 * qdiscs with handles are always hashed.
++		 */
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_del(qdisc);
++	}
++	for (i = dev->real_num_tx_queues; i < new_real_tx; i++) {
++		qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping;
++		if (qdisc != &noop_qdisc && !qdisc->handle)
++			qdisc_hash_add(qdisc, false);
++	}
++}
++
+ static struct netdev_queue *mqprio_queue_get(struct Qdisc *sch,
+ 					     unsigned long cl)
+ {
+@@ -629,6 +651,7 @@ static struct Qdisc_ops mqprio_qdisc_ops __read_mostly = {
+ 	.init		= mqprio_init,
+ 	.destroy	= mqprio_destroy,
+ 	.attach		= mqprio_attach,
++	.change_real_num_tx = mqprio_change_real_num_tx,
+ 	.dump		= mqprio_dump,
+ 	.owner		= THIS_MODULE,
+ };
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index b9fd18d986464..a66398fb2d6d0 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -95,18 +95,22 @@ static ktime_t sched_base_time(const struct sched_gate_list *sched)
+ 	return ns_to_ktime(sched->base_time);
+ }
+ 
+-static ktime_t taprio_get_time(struct taprio_sched *q)
++static ktime_t taprio_mono_to_any(const struct taprio_sched *q, ktime_t mono)
+ {
+-	ktime_t mono = ktime_get();
++	/* This pairs with WRITE_ONCE() in taprio_parse_clockid() */
++	enum tk_offsets tk_offset = READ_ONCE(q->tk_offset);
+ 
+-	switch (q->tk_offset) {
++	switch (tk_offset) {
+ 	case TK_OFFS_MAX:
+ 		return mono;
+ 	default:
+-		return ktime_mono_to_any(mono, q->tk_offset);
++		return ktime_mono_to_any(mono, tk_offset);
+ 	}
++}
+ 
+-	return KTIME_MAX;
++static ktime_t taprio_get_time(const struct taprio_sched *q)
++{
++	return taprio_mono_to_any(q, ktime_get());
+ }
+ 
+ static void taprio_free_sched_cb(struct rcu_head *head)
+@@ -319,7 +323,7 @@ static ktime_t get_tcp_tstamp(struct taprio_sched *q, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	return ktime_mono_to_any(skb->skb_mstamp_ns, q->tk_offset);
++	return taprio_mono_to_any(q, skb->skb_mstamp_ns);
+ }
+ 
+ /* There are a few scenarios where we will have to modify the txtime from
+@@ -1352,6 +1356,7 @@ static int taprio_parse_clockid(struct Qdisc *sch, struct nlattr **tb,
+ 		}
+ 	} else if (tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]) {
+ 		int clockid = nla_get_s32(tb[TCA_TAPRIO_ATTR_SCHED_CLOCKID]);
++		enum tk_offsets tk_offset;
+ 
+ 		/* We only support static clockids and we don't allow
+ 		 * for it to be modified after the first init.
+@@ -1366,22 +1371,24 @@ static int taprio_parse_clockid(struct Qdisc *sch, struct nlattr **tb,
+ 
+ 		switch (clockid) {
+ 		case CLOCK_REALTIME:
+-			q->tk_offset = TK_OFFS_REAL;
++			tk_offset = TK_OFFS_REAL;
+ 			break;
+ 		case CLOCK_MONOTONIC:
+-			q->tk_offset = TK_OFFS_MAX;
++			tk_offset = TK_OFFS_MAX;
+ 			break;
+ 		case CLOCK_BOOTTIME:
+-			q->tk_offset = TK_OFFS_BOOT;
++			tk_offset = TK_OFFS_BOOT;
+ 			break;
+ 		case CLOCK_TAI:
+-			q->tk_offset = TK_OFFS_TAI;
++			tk_offset = TK_OFFS_TAI;
+ 			break;
+ 		default:
+ 			NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
++		/* This pairs with READ_ONCE() in taprio_mono_to_any */
++		WRITE_ONCE(q->tk_offset, tk_offset);
+ 
+ 		q->clockid = clockid;
+ 	} else {
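
The taprio hunks above make q->tk_offset safe to read without the qdisc
lock: the control path publishes it once with WRITE_ONCE() and
taprio_mono_to_any() snapshots it once per call with READ_ONCE(), so a
concurrent clockid change cannot make a single invocation observe two
different offsets. A userspace rendering with C11 relaxed atomics standing
in for READ_ONCE/WRITE_ONCE; the arithmetic is only a stand-in for
ktime_mono_to_any():

#include <stdatomic.h>
#include <stdio.h>

enum tk_offsets { TK_OFFS_REAL, TK_OFFS_BOOT, TK_OFFS_TAI, TK_OFFS_MAX };

static _Atomic int tk_offset = TK_OFFS_MAX;

static long mono_to_any(long mono)
{
	/* Read exactly once; every branch below sees the same value. */
	int off = atomic_load_explicit(&tk_offset, memory_order_relaxed);

	switch (off) {
	case TK_OFFS_MAX:
		return mono;		/* CLOCK_MONOTONIC: no conversion */
	default:
		return mono + off;	/* stand-in for ktime_mono_to_any() */
	}
}

int main(void)
{
	atomic_store_explicit(&tk_offset, TK_OFFS_TAI, memory_order_relaxed);
	printf("%ld\n", mono_to_any(1000));
	return 0;
}
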
+diff --git a/net/sctp/output.c b/net/sctp/output.c
+index 4dfb5ea82b05b..cdfdbd353c678 100644
+--- a/net/sctp/output.c
++++ b/net/sctp/output.c
+@@ -581,13 +581,16 @@ int sctp_packet_transmit(struct sctp_packet *packet, gfp_t gfp)
+ 	chunk = list_entry(packet->chunk_list.next, struct sctp_chunk, list);
+ 	sk = chunk->skb->sk;
+ 
+-	/* check gso */
+ 	if (packet->size > tp->pathmtu && !packet->ipfragok && !chunk->pmtu_probe) {
+-		if (!sk_can_gso(sk)) {
+-			pr_err_once("Trying to GSO but underlying device doesn't support it.");
+-			goto out;
++		if (tp->pl.state == SCTP_PL_ERROR) { /* do IP fragmentation if in Error state */
++			packet->ipfragok = 1;
++		} else {
++			if (!sk_can_gso(sk)) { /* check gso */
++				pr_err_once("Trying to GSO but underlying device doesn't support it.");
++				goto out;
++			}
++			gso = 1;
+ 		}
+-		gso = 1;
+ 	}
+ 
+ 	/* alloc head skb */
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index a3d3ca6dd63dd..133f1719bf1b7 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -269,7 +269,7 @@ bool sctp_transport_pl_send(struct sctp_transport *t)
+ 		if (t->pl.probe_size == SCTP_BASE_PLPMTU) { /* BASE_PLPMTU Confirmation Failed */
+ 			t->pl.state = SCTP_PL_ERROR; /* Base -> Error */
+ 
+-			t->pl.pmtu = SCTP_MIN_PLPMTU;
++			t->pl.pmtu = SCTP_BASE_PLPMTU;
+ 			t->pathmtu = t->pl.pmtu + sctp_transport_pl_hlen(t);
+ 			sctp_assoc_sync_pmtu(t->asoc);
+ 		}
+@@ -366,8 +366,9 @@ static bool sctp_transport_pl_toobig(struct sctp_transport *t, u32 pmtu)
+ 		if (pmtu >= SCTP_MIN_PLPMTU && pmtu < SCTP_BASE_PLPMTU) {
+ 			t->pl.state = SCTP_PL_ERROR; /* Base -> Error */
+ 
+-			t->pl.pmtu = SCTP_MIN_PLPMTU;
++			t->pl.pmtu = SCTP_BASE_PLPMTU;
+ 			t->pathmtu = t->pl.pmtu + sctp_transport_pl_hlen(t);
++			return true;
+ 		}
+ 	} else if (t->pl.state == SCTP_PL_SEARCH) {
+ 		if (pmtu >= SCTP_BASE_PLPMTU && pmtu < t->pl.pmtu) {
+@@ -378,11 +379,10 @@ static bool sctp_transport_pl_toobig(struct sctp_transport *t, u32 pmtu)
+ 			t->pl.probe_high = 0;
+ 			t->pl.pmtu = SCTP_BASE_PLPMTU;
+ 			t->pathmtu = t->pl.pmtu + sctp_transport_pl_hlen(t);
++			return true;
+ 		} else if (pmtu > t->pl.pmtu && pmtu < t->pl.probe_size) {
+ 			t->pl.probe_size = pmtu;
+ 			t->pl.probe_count = 0;
+-
+-			return false;
+ 		}
+ 	} else if (t->pl.state == SCTP_PL_COMPLETE) {
+ 		if (pmtu >= SCTP_BASE_PLPMTU && pmtu < t->pl.pmtu) {
+@@ -393,10 +393,11 @@ static bool sctp_transport_pl_toobig(struct sctp_transport *t, u32 pmtu)
+ 			t->pl.probe_high = 0;
+ 			t->pl.pmtu = SCTP_BASE_PLPMTU;
+ 			t->pathmtu = t->pl.pmtu + sctp_transport_pl_hlen(t);
++			return true;
+ 		}
+ 	}
+ 
+-	return true;
++	return false;
+ }
+ 
+ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index c038efc23ce38..32c1c7ce856d3 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -148,14 +148,18 @@ static int __smc_release(struct smc_sock *smc)
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 		sk->sk_shutdown |= SHUTDOWN_MASK;
+ 	} else {
+-		if (sk->sk_state != SMC_LISTEN && sk->sk_state != SMC_INIT)
+-			sock_put(sk); /* passive closing */
+-		if (sk->sk_state == SMC_LISTEN) {
+-			/* wake up clcsock accept */
+-			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
++		if (sk->sk_state != SMC_CLOSED) {
++			if (sk->sk_state != SMC_LISTEN &&
++			    sk->sk_state != SMC_INIT)
++				sock_put(sk); /* passive closing */
++			if (sk->sk_state == SMC_LISTEN) {
++				/* wake up clcsock accept */
++				rc = kernel_sock_shutdown(smc->clcsock,
++							  SHUT_RDWR);
++			}
++			sk->sk_state = SMC_CLOSED;
++			sk->sk_state_change(sk);
+ 		}
+-		sk->sk_state = SMC_CLOSED;
+-		sk->sk_state_change(sk);
+ 		smc_restore_fallback_changes(smc);
+ 	}
+ 
+@@ -1057,7 +1061,7 @@ static void smc_connect_work(struct work_struct *work)
+ 	if (smc->clcsock->sk->sk_err) {
+ 		smc->sk.sk_err = smc->clcsock->sk->sk_err;
+ 	} else if ((1 << smc->clcsock->sk->sk_state) &
+-					(TCPF_SYN_SENT | TCP_SYN_RECV)) {
++					(TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+ 		rc = sk_stream_wait_connect(smc->clcsock->sk, &timeo);
+ 		if ((rc == -EPIPE) &&
+ 		    ((1 << smc->clcsock->sk->sk_state) &
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 72f4b72eb1753..f1d323439a2af 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -1822,7 +1822,7 @@ void smc_llc_link_active(struct smc_link *link)
+ 			    link->smcibdev->ibdev->name, link->ibport);
+ 	link->state = SMC_LNK_ACTIVE;
+ 	if (link->lgr->llc_testlink_time) {
+-		link->llc_testlink_time = link->lgr->llc_testlink_time * HZ;
++		link->llc_testlink_time = link->lgr->llc_testlink_time;
+ 		schedule_delayed_work(&link->llc_testlink_wrk,
+ 				      link->llc_testlink_time);
+ 	}
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 9c0343568d2a0..1a72c67afed5e 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -27,18 +27,10 @@
+ 
+ static struct workqueue_struct *strp_wq;
+ 
+-struct _strp_msg {
+-	/* Internal cb structure. struct strp_msg must be first for passing
+-	 * to upper layer.
+-	 */
+-	struct strp_msg strp;
+-	int accum_len;
+-};
+-
+ static inline struct _strp_msg *_strp_msg(struct sk_buff *skb)
+ {
+ 	return (struct _strp_msg *)((void *)skb->cb +
+-		offsetof(struct qdisc_skb_cb, data));
++		offsetof(struct sk_skb_cb, strp));
+ }
+ 
+ /* Lower lock held */
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index 6e4dbd577a39f..d435bffc61999 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -162,8 +162,10 @@ static int rpc_parse_scope_id(struct net *net, const char *buf,
+ 			      const size_t buflen, const char *delim,
+ 			      struct sockaddr_in6 *sin6)
+ {
+-	char *p;
++	char p[IPV6_SCOPE_ID_LEN + 1];
+ 	size_t len;
++	u32 scope_id = 0;
++	struct net_device *dev;
+ 
+ 	if ((buf + buflen) == delim)
+ 		return 1;
+@@ -175,29 +177,23 @@ static int rpc_parse_scope_id(struct net *net, const char *buf,
+ 		return 0;
+ 
+ 	len = (buf + buflen) - delim - 1;
+-	p = kmemdup_nul(delim + 1, len, GFP_KERNEL);
+-	if (p) {
+-		u32 scope_id = 0;
+-		struct net_device *dev;
+-
+-		dev = dev_get_by_name(net, p);
+-		if (dev != NULL) {
+-			scope_id = dev->ifindex;
+-			dev_put(dev);
+-		} else {
+-			if (kstrtou32(p, 10, &scope_id) != 0) {
+-				kfree(p);
+-				return 0;
+-			}
+-		}
+-
+-		kfree(p);
+-
+-		sin6->sin6_scope_id = scope_id;
+-		return 1;
++	if (len > IPV6_SCOPE_ID_LEN)
++		return 0;
++
++	memcpy(p, delim + 1, len);
++	p[len] = 0;
++
++	dev = dev_get_by_name(net, p);
++	if (dev != NULL) {
++		scope_id = dev->ifindex;
++		dev_put(dev);
++	} else {
++		if (kstrtou32(p, 10, &scope_id) != 0)
++			return 0;
+ 	}
+ 
+-	return 0;
++	sin6->sin6_scope_id = scope_id;
++	return 1;
+ }
+ 
+ static size_t rpc_pton6(struct net *net, const char *buf, const size_t buflen,
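
rpc_parse_scope_id() above replaces a kmemdup_nul() heap copy with a bounded
copy into a fixed stack buffer, rejecting oversized tokens up front and
deleting the allocation-failure paths. A userspace sketch of the numeric
branch only (the dev_get_by_name() interface lookup is omitted);
IPV6_SCOPE_ID_LEN here is a stand-in sized for a decimal u32, and strtoul
replaces kstrtou32:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define IPV6_SCOPE_ID_LEN 10	/* stand-in: digits in UINT32_MAX */

static int parse_scope_id(const char *delim, size_t len, unsigned long *out)
{
	char p[IPV6_SCOPE_ID_LEN + 1];
	char *end;

	if (len > IPV6_SCOPE_ID_LEN)
		return 0;		/* oversized token: reject early */

	memcpy(p, delim + 1, len);	/* token starts after '%' */
	p[len] = '\0';

	*out = strtoul(p, &end, 10);
	return len > 0 && *end == '\0';
}

int main(void)
{
	unsigned long id;

	if (parse_scope_id("%42", 2, &id))
		printf("scope id %lu\n", id);	/* scope id 42 */
	return 0;
}
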
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index d55e980521da8..565dc9e477fc7 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1585,15 +1585,14 @@ xprt_transmit(struct rpc_task *task)
+ {
+ 	struct rpc_rqst *next, *req = task->tk_rqstp;
+ 	struct rpc_xprt	*xprt = req->rq_xprt;
+-	int counter, status;
++	int status;
+ 
+ 	spin_lock(&xprt->queue_lock);
+-	counter = 0;
+-	while (!list_empty(&xprt->xmit_queue)) {
+-		if (++counter == 20)
++	for (;;) {
++		next = list_first_entry_or_null(&xprt->xmit_queue,
++						struct rpc_rqst, rq_xmit);
++		if (!next)
+ 			break;
+-		next = list_first_entry(&xprt->xmit_queue,
+-				struct rpc_rqst, rq_xmit);
+ 		xprt_pin_rqst(next);
+ 		spin_unlock(&xprt->queue_lock);
+ 		status = xprt_request_transmit(next, task);
+@@ -1601,13 +1600,16 @@ xprt_transmit(struct rpc_task *task)
+ 			status = 0;
+ 		spin_lock(&xprt->queue_lock);
+ 		xprt_unpin_rqst(next);
+-		if (status == 0) {
+-			if (!xprt_request_data_received(task) ||
+-			    test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-				continue;
+-		} else if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-			task->tk_status = status;
+-		break;
++		if (status < 0) {
++			if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
++				task->tk_status = status;
++			break;
++		}
++		/* Was @task transmitted, and has it received a reply? */
++		if (xprt_request_data_received(task) &&
++		    !test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
++			break;
++		cond_resched_lock(&xprt->queue_lock);
+ 	}
+ 	spin_unlock(&xprt->queue_lock);
+ }
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 3e02cc3b24f8a..bcc42a901c752 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1322,6 +1322,8 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ 		 * non-blocking call.
+ 		 */
+ 		err = -EALREADY;
++		if (flags & O_NONBLOCK)
++			goto out;
+ 		break;
+ 	default:
+ 		if ((sk->sk_state == TCP_LISTEN) ||
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index aaba847d79eb2..eb297e1015e05 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1081,6 +1081,16 @@ void cfg80211_dev_free(struct cfg80211_registered_device *rdev)
+ 	list_for_each_entry_safe(scan, tmp, &rdev->bss_list, list)
+ 		cfg80211_put_bss(&rdev->wiphy, &scan->pub);
+ 	mutex_destroy(&rdev->wiphy.mtx);
++
++	/*
++	 * The 'regd' can only be non-NULL if we never finished
++	 * initializing the wiphy and thus never went through the
++	 * unregister path - e.g. in failure scenarios. Thus, it
++	 * cannot have been visible to anyone if non-NULL, so we
++	 * can just free it here.
++	 */
++	kfree(rcu_dereference_raw(rdev->wiphy.regd));
++
+ 	kfree(rdev);
+ }
+ 
+diff --git a/samples/kprobes/kretprobe_example.c b/samples/kprobes/kretprobe_example.c
+index 5dc1bf3baa98b..228321ecb1616 100644
+--- a/samples/kprobes/kretprobe_example.c
++++ b/samples/kprobes/kretprobe_example.c
+@@ -86,7 +86,7 @@ static int __init kretprobe_init(void)
+ 	ret = register_kretprobe(&my_kretprobe);
+ 	if (ret < 0) {
+ 		pr_err("register_kretprobe failed, returned %d\n", ret);
+-		return -1;
++		return ret;
+ 	}
+ 	pr_info("Planted return probe at %s: %p\n",
+ 			my_kretprobe.kp.symbol_name, my_kretprobe.kp.addr);
+diff --git a/scripts/leaking_addresses.pl b/scripts/leaking_addresses.pl
+index b2d8b8aa2d99e..8f636a23bc3f2 100755
+--- a/scripts/leaking_addresses.pl
++++ b/scripts/leaking_addresses.pl
+@@ -455,8 +455,9 @@ sub parse_file
+ 
+ 	open my $fh, "<", $file or return;
+ 	while ( <$fh> ) {
++		chomp;
+ 		if (may_leak_address($_)) {
+-			print $file . ': ' . $_;
++			printf("$file: $_\n");
+ 		}
+ 	}
+ 	close $fh;
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index e68bcedca976b..6222fdfebe4e5 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -1454,7 +1454,7 @@ bool aa_update_label_name(struct aa_ns *ns, struct aa_label *label, gfp_t gfp)
+ 	if (label->hname || labels_ns(label) != ns)
+ 		return res;
+ 
+-	if (aa_label_acntsxprint(&name, ns, label, FLAGS_NONE, gfp) == -1)
++	if (aa_label_acntsxprint(&name, ns, label, FLAGS_NONE, gfp) < 0)
+ 		return res;
+ 
+ 	ls = labels_set(label);
+@@ -1704,7 +1704,7 @@ int aa_label_asxprint(char **strp, struct aa_ns *ns, struct aa_label *label,
+ 
+ /**
+  * aa_label_acntsxprint - allocate a __counted string buffer and print label
+- * @strp: buffer to write to. (MAY BE NULL if @size == 0)
++ * @strp: buffer to write to.
+  * @ns: namespace profile is being viewed from
+  * @label: label to view (NOT NULL)
+  * @flags: flags controlling what label info is printed
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 1c8435dfabeea..08f907382c618 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -78,7 +78,7 @@ static struct xattr_list evm_config_default_xattrnames[] = {
+ 
+ LIST_HEAD(evm_config_xattrnames);
+ 
+-static int evm_fixmode;
++static int evm_fixmode __ro_after_init;
+ static int __init evm_set_fixmode(char *str)
+ {
+ 	if (strncmp(str, "fix", 3) == 0)
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index f0e448ed1f9fb..40fe3a571f898 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -319,7 +319,7 @@ int ima_must_appraise(struct user_namespace *mnt_userns, struct inode *inode,
+ void ima_update_xattr(struct integrity_iint_cache *iint, struct file *file);
+ enum integrity_status ima_get_cache_status(struct integrity_iint_cache *iint,
+ 					   enum ima_hooks func);
+-enum hash_algo ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
++enum hash_algo ima_get_hash_algo(const struct evm_ima_xattr_data *xattr_value,
+ 				 int xattr_len);
+ int ima_read_xattr(struct dentry *dentry,
+ 		   struct evm_ima_xattr_data **xattr_value);
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index ef9dcfce45d45..0f868df0f30ea 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -171,7 +171,7 @@ static void ima_cache_flags(struct integrity_iint_cache *iint,
+ 	}
+ }
+ 
+-enum hash_algo ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
++enum hash_algo ima_get_hash_algo(const struct evm_ima_xattr_data *xattr_value,
+ 				 int xattr_len)
+ {
+ 	struct signature_v2_hdr *sig;
+@@ -184,7 +184,8 @@ enum hash_algo ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
+ 	switch (xattr_value->type) {
+ 	case EVM_IMA_XATTR_DIGSIG:
+ 		sig = (typeof(sig))xattr_value;
+-		if (sig->version != 2 || xattr_len <= sizeof(*sig))
++		if (sig->version != 2 || xattr_len <= sizeof(*sig)
++		    || sig->hash_algo >= HASH_ALGO__LAST)
+ 			return ima_hash_algo;
+ 		return sig->hash_algo;
+ 		break;
+@@ -575,6 +576,47 @@ static void ima_reset_appraise_flags(struct inode *inode, int digsig)
+ 		clear_bit(IMA_DIGSIG, &iint->atomic_flags);
+ }
+ 
++/**
++ * validate_hash_algo() - Block setxattr with unsupported hash algorithms
++ * @dentry: object of the setxattr()
++ * @xattr_value: userland supplied xattr value
++ * @xattr_value_len: length of xattr_value
++ *
++ * The xattr value is mapped to its hash algorithm, and this algorithm
++ * must be built in the kernel for the setxattr to be allowed.
++ *
++ * Emit an audit message when the algorithm is invalid.
++ *
++ * Return: 0 on success, else an error.
++ */
++static int validate_hash_algo(struct dentry *dentry,
++			      const struct evm_ima_xattr_data *xattr_value,
++			      size_t xattr_value_len)
++{
++	char *path = NULL, *pathbuf = NULL;
++	enum hash_algo xattr_hash_algo;
++
++	xattr_hash_algo = ima_get_hash_algo(xattr_value, xattr_value_len);
++
++	if (likely(xattr_hash_algo == ima_hash_algo ||
++		   crypto_has_alg(hash_algo_name[xattr_hash_algo], 0, 0)))
++		return 0;
++
++	pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
++	if (!pathbuf)
++		return -EACCES;
++
++	path = dentry_path(dentry, pathbuf, PATH_MAX);
++
++	integrity_audit_msg(AUDIT_INTEGRITY_DATA, d_inode(dentry), path,
++			    "set_data", "unavailable-hash-algorithm",
++			    -EACCES, 0);
++
++	kfree(pathbuf);
++
++	return -EACCES;
++}
++
+ int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name,
+ 		       const void *xattr_value, size_t xattr_value_len)
+ {
+@@ -592,9 +634,11 @@ int ima_inode_setxattr(struct dentry *dentry, const char *xattr_name,
+ 		digsig = (xvalue->type == EVM_XATTR_PORTABLE_DIGSIG);
+ 	}
+ 	if (result == 1 || evm_revalidate_status(xattr_name)) {
++		result = validate_hash_algo(dentry, xvalue, xattr_value_len);
++		if (result)
++			return result;
++
+ 		ima_reset_appraise_flags(d_backing_inode(dentry), digsig);
+-		if (result == 1)
+-			result = 0;
+ 	}
+ 	return result;
+ }
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index d84c77f370dc4..c2767da78eeb6 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2374,6 +2374,43 @@ err_policy:
+ 	return rc;
+ }
+ 
++/**
++ * ocontext_to_sid - Helper to safely get sid for an ocontext
++ * @sidtab: SID table
++ * @c: ocontext structure
++ * @index: index of the context entry (0 or 1)
++ * @out_sid: pointer to the resulting SID value
++ *
++ * For all ocontexts except OCON_ISID the SID fields are populated
++ * on-demand when needed. Since updating the SID value is an SMP-sensitive
++ * operation, this helper must be used to do that safely.
++ *
++ * WARNING: This function may return -ESTALE, indicating that the caller
++ * must retry the operation after re-acquiring the policy pointer!
++ */
++static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
++			   size_t index, u32 *out_sid)
++{
++	int rc;
++	u32 sid;
++
++	/* Ensure the associated sidtab entry is visible to this thread. */
++	sid = smp_load_acquire(&c->sid[index]);
++	if (!sid) {
++		rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
++		if (rc)
++			return rc;
++
++		/*
++		 * Ensure the new sidtab entry is visible to other threads
++		 * when they see the SID.
++		 */
++		smp_store_release(&c->sid[index], sid);
++	}
++	*out_sid = sid;
++	return 0;
++}
++
+ /**
+  * security_port_sid - Obtain the SID for a port.
+  * @state: SELinux state
+@@ -2412,17 +2449,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		*out_sid = SECINITSID_PORT;
+ 	}
+@@ -2471,18 +2504,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab,
+-						   &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*out_sid = SECINITSID_UNLABELED;
+ 
+@@ -2531,17 +2559,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*out_sid = SECINITSID_UNLABELED;
+ 
+@@ -2585,25 +2609,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0] || !c->sid[1]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
+-			rc = sidtab_context_to_sid(sidtab, &c->context[1],
+-						   &c->sid[1]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, if_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*if_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else
+ 		*if_sid = SECINITSID_NETIF;
+ 
+@@ -2695,18 +2707,13 @@ retry:
+ 	}
+ 
+ 	if (c) {
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab,
+-						   &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		*out_sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		*out_sid = SECINITSID_NODE;
+ 	}
+@@ -2871,7 +2878,7 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 	u16 sclass;
+ 	struct genfs *genfs;
+ 	struct ocontext *c;
+-	int rc, cmp = 0;
++	int cmp = 0;
+ 
+ 	while (path[0] == '/' && path[1] == '/')
+ 		path++;
+@@ -2885,9 +2892,8 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 			break;
+ 	}
+ 
+-	rc = -ENOENT;
+ 	if (!genfs || cmp)
+-		goto out;
++		return -ENOENT;
+ 
+ 	for (c = genfs->head; c; c = c->next) {
+ 		len = strlen(c->u.name);
+@@ -2896,20 +2902,10 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
+ 			break;
+ 	}
+ 
+-	rc = -ENOENT;
+ 	if (!c)
+-		goto out;
+-
+-	if (!c->sid[0]) {
+-		rc = sidtab_context_to_sid(sidtab, &c->context[0], &c->sid[0]);
+-		if (rc)
+-			goto out;
+-	}
++		return -ENOENT;
+ 
+-	*sid = c->sid[0];
+-	rc = 0;
+-out:
+-	return rc;
++	return ocontext_to_sid(sidtab, c, 0, sid);
+ }
+ 
+ /**
+@@ -2994,17 +2990,13 @@ retry:
+ 
+ 	if (c) {
+ 		sbsec->behavior = c->v.behavior;
+-		if (!c->sid[0]) {
+-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
+-						   &c->sid[0]);
+-			if (rc == -ESTALE) {
+-				rcu_read_unlock();
+-				goto retry;
+-			}
+-			if (rc)
+-				goto out;
++		rc = ocontext_to_sid(sidtab, c, 0, &sbsec->sid);
++		if (rc == -ESTALE) {
++			rcu_read_unlock();
++			goto retry;
+ 		}
+-		sbsec->sid = c->sid[0];
++		if (rc)
++			goto out;
+ 	} else {
+ 		rc = __security_genfs_sid(policy, fstype, "/",
+ 					SECCLASS_DIR, &sbsec->sid);
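
The acquire/release pairing in ocontext_to_sid() is the usual lazy-publication idiom: a reader must never observe a published SID before the sidtab entry it names is visible. A rough userspace sketch of the same idiom with C11 atomics (cache_slot and compute_id() are hypothetical stand-ins for c->sid[index] and sidtab_context_to_sid(); the -ESTALE retry path is omitted):

#include <stdatomic.h>

static _Atomic unsigned int cache_slot;	/* 0 means "not yet computed" */

extern unsigned int compute_id(void);	/* assumed expensive lookup */

unsigned int get_cached_id(void)
{
	/* Acquire: a nonzero ID guarantees its backing data is visible too. */
	unsigned int id = atomic_load_explicit(&cache_slot, memory_order_acquire);

	if (!id) {
		id = compute_id();
		/* Release: publish the ID only after the data it names is ready. */
		atomic_store_explicit(&cache_slot, id, memory_order_release);
	}
	return id;
}

As in the kernel helper, two racing writers may both compute the value; that is harmless as long as the computation is idempotent, which the sidtab lookup effectively is for a given context.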
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 3a75d2a8f5178..658eab05599e6 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -693,9 +693,7 @@ static void smk_cipso_doi(void)
+ 		printk(KERN_WARNING "%s:%d remove rc = %d\n",
+ 		       __func__, __LINE__, rc);
+ 
+-	doip = kmalloc(sizeof(struct cipso_v4_doi), GFP_KERNEL);
+-	if (doip == NULL)
+-		panic("smack:  Failed to initialize cipso DOI.\n");
++	doip = kmalloc(sizeof(struct cipso_v4_doi), GFP_KERNEL | __GFP_NOFAIL);
+ 	doip->map.std = NULL;
+ 	doip->doi = smk_cipso_doi_value;
+ 	doip->type = CIPSO_V4_MAP_PASS;
+@@ -714,7 +712,7 @@ static void smk_cipso_doi(void)
+ 	if (rc != 0) {
+ 		printk(KERN_WARNING "%s:%d map add rc = %d\n",
+ 		       __func__, __LINE__, rc);
+-		kfree(doip);
++		netlbl_cfg_cipsov4_del(doip->doi, &nai);
+ 		return;
+ 	}
+ }
+@@ -831,6 +829,7 @@ static int smk_open_cipso(struct inode *inode, struct file *file)
+ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 				size_t count, loff_t *ppos, int format)
+ {
++	struct netlbl_lsm_catmap *old_cat;
+ 	struct smack_known *skp;
+ 	struct netlbl_lsm_secattr ncats;
+ 	char mapcatset[SMK_CIPSOLEN];
+@@ -920,9 +919,11 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 
+ 	rc = smk_netlbl_mls(maplevel, mapcatset, &ncats, SMK_CIPSOLEN);
+ 	if (rc >= 0) {
+-		netlbl_catmap_free(skp->smk_netlabel.attr.mls.cat);
++		old_cat = skp->smk_netlabel.attr.mls.cat;
+ 		skp->smk_netlabel.attr.mls.cat = ncats.attr.mls.cat;
+ 		skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
++		synchronize_rcu();
++		netlbl_catmap_free(old_cat);
+ 		rc = count;
+ 		/*
+ 		 * This mapping may have been cached, so clear the cache.
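
The smk_set_cipso() change follows the standard RCU replace-then-reclaim sequence: publish the new category map first, wait a grace period, then free the old one, so a concurrent reader under rcu_read_lock() can never dereference freed memory. A minimal kernel-style sketch of the idiom (struct mapping, slot and update_lock are hypothetical, not the Smack data structures):

#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct mapping { int payload; };

static struct mapping __rcu *slot;
static DEFINE_MUTEX(update_lock);

static void replace_mapping(struct mapping *new)
{
	struct mapping *old;

	mutex_lock(&update_lock);
	old = rcu_dereference_protected(slot, lockdep_is_held(&update_lock));
	rcu_assign_pointer(slot, new);	/* readers now see the new mapping */
	mutex_unlock(&update_lock);

	synchronize_rcu();	/* wait for pre-existing readers to finish */
	kfree(old);		/* no reader can still hold the old pointer */
}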
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 439a358ecfe94..fcb082b4e6bd3 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -135,8 +135,11 @@ EXPORT_SYMBOL(snd_dma_free_pages);
+ int snd_dma_buffer_mmap(struct snd_dma_buffer *dmab,
+ 			struct vm_area_struct *area)
+ {
+-	const struct snd_malloc_ops *ops = snd_dma_get_ops(dmab);
++	const struct snd_malloc_ops *ops;
+ 
++	if (!dmab)
++		return -ENOENT;
++	ops = snd_dma_get_ops(dmab);
+ 	if (ops && ops->mmap)
+ 		return ops->mmap(dmab, area);
+ 	else
+@@ -400,6 +403,8 @@ static const struct snd_malloc_ops *dma_ops[] = {
+ 
+ static const struct snd_malloc_ops *snd_dma_get_ops(struct snd_dma_buffer *dmab)
+ {
++	if (WARN_ON_ONCE(!dmab))
++		return NULL;
+ 	if (WARN_ON_ONCE(dmab->dev.type <= SNDRV_DMA_TYPE_UNKNOWN ||
+ 			 dmab->dev.type >= ARRAY_SIZE(dma_ops)))
+ 		return NULL;
+diff --git a/sound/core/oss/mixer_oss.c b/sound/core/oss/mixer_oss.c
+index 6a5abdd4271ba..9620115cfdc09 100644
+--- a/sound/core/oss/mixer_oss.c
++++ b/sound/core/oss/mixer_oss.c
+@@ -130,11 +130,13 @@ static int snd_mixer_oss_devmask(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	for (chn = 0; chn < 31; chn++) {
+ 		pslot = &mixer->slots[chn];
+ 		if (pslot->put_volume || pslot->put_recsrc)
+ 			result |= 1 << chn;
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -146,11 +148,13 @@ static int snd_mixer_oss_stereodevs(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	for (chn = 0; chn < 31; chn++) {
+ 		pslot = &mixer->slots[chn];
+ 		if (pslot->put_volume && pslot->stereo)
+ 			result |= 1 << chn;
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -161,6 +165,7 @@ static int snd_mixer_oss_recmask(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->put_recsrc && mixer->get_recsrc) {	/* exclusive */
+ 		result = mixer->mask_recsrc;
+ 	} else {
+@@ -172,6 +177,7 @@ static int snd_mixer_oss_recmask(struct snd_mixer_oss_file *fmixer)
+ 				result |= 1 << chn;
+ 		}
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -182,12 +188,12 @@ static int snd_mixer_oss_get_recsrc(struct snd_mixer_oss_file *fmixer)
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->put_recsrc && mixer->get_recsrc) {	/* exclusive */
+-		int err;
+ 		unsigned int index;
+-		err = mixer->get_recsrc(fmixer, &index);
+-		if (err < 0)
+-			return err;
++		result = mixer->get_recsrc(fmixer, &index);
++		if (result < 0)
++			goto unlock;
+ 		result = 1 << index;
+ 	} else {
+ 		struct snd_mixer_oss_slot *pslot;
+@@ -202,7 +208,10 @@ static int snd_mixer_oss_get_recsrc(struct snd_mixer_oss_file *fmixer)
+ 			}
+ 		}
+ 	}
+-	return mixer->oss_recsrc = result;
++	mixer->oss_recsrc = result;
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
++	return result;
+ }
+ 
+ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsrc)
+@@ -215,6 +224,7 @@ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsr
+ 
+ 	if (mixer == NULL)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	if (mixer->get_recsrc && mixer->put_recsrc) {	/* exclusive input */
+ 		if (recsrc & ~mixer->oss_recsrc)
+ 			recsrc &= ~mixer->oss_recsrc;
+@@ -240,6 +250,7 @@ static int snd_mixer_oss_set_recsrc(struct snd_mixer_oss_file *fmixer, int recsr
+ 			}
+ 		}
+ 	}
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -251,6 +262,7 @@ static int snd_mixer_oss_get_volume(struct snd_mixer_oss_file *fmixer, int slot)
+ 
+ 	if (mixer == NULL || slot > 30)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	pslot = &mixer->slots[slot];
+ 	left = pslot->volume[0];
+ 	right = pslot->volume[1];
+@@ -258,15 +270,21 @@ static int snd_mixer_oss_get_volume(struct snd_mixer_oss_file *fmixer, int slot)
+ 		result = pslot->get_volume(fmixer, pslot, &left, &right);
+ 	if (!pslot->stereo)
+ 		right = left;
+-	if (snd_BUG_ON(left < 0 || left > 100))
+-		return -EIO;
+-	if (snd_BUG_ON(right < 0 || right > 100))
+-		return -EIO;
++	if (snd_BUG_ON(left < 0 || left > 100)) {
++		result = -EIO;
++		goto unlock;
++	}
++	if (snd_BUG_ON(right < 0 || right > 100)) {
++		result = -EIO;
++		goto unlock;
++	}
+ 	if (result >= 0) {
+ 		pslot->volume[0] = left;
+ 		pslot->volume[1] = right;
+ 	 	result = (left & 0xff) | ((right & 0xff) << 8);
+ 	}
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
+ 	return result;
+ }
+ 
+@@ -279,6 +297,7 @@ static int snd_mixer_oss_set_volume(struct snd_mixer_oss_file *fmixer,
+ 
+ 	if (mixer == NULL || slot > 30)
+ 		return -EIO;
++	mutex_lock(&mixer->reg_mutex);
+ 	pslot = &mixer->slots[slot];
+ 	if (left > 100)
+ 		left = 100;
+@@ -289,10 +308,13 @@ static int snd_mixer_oss_set_volume(struct snd_mixer_oss_file *fmixer,
+ 	if (pslot->put_volume)
+ 		result = pslot->put_volume(fmixer, pslot, left, right);
+ 	if (result < 0)
+-		return result;
++		goto unlock;
+ 	pslot->volume[0] = left;
+ 	pslot->volume[1] = right;
+- 	return (left & 0xff) | ((right & 0xff) << 8);
++	result = (left & 0xff) | ((right & 0xff) << 8);
++ unlock:
++	mutex_unlock(&mixer->reg_mutex);
++	return result;
+ }
+ 
+ static int snd_mixer_oss_ioctl1(struct snd_mixer_oss_file *fmixer, unsigned int cmd, unsigned long arg)
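
Every handler above is converted to the same single-exit locking shape: take reg_mutex once on entry, turn each early return into a goto, and release the mutex on exactly one path. A generic sketch of that shape (struct ctx, validate() and perform() are hypothetical):

#include <linux/mutex.h>

struct ctx {
	struct mutex lock;
	/* ... protected state ... */
};

static int validate(struct ctx *c);	/* hypothetical check */
static int perform(struct ctx *c);	/* hypothetical work */

static int do_op(struct ctx *c)
{
	int result;

	mutex_lock(&c->lock);
	result = validate(c);
	if (result < 0)
		goto unlock;	/* error paths must release the lock too */
	result = perform(c);
 unlock:
	mutex_unlock(&c->lock);
	return result;
}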
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 92b7008fcdb86..b3214baa89193 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -624,13 +624,13 @@ static int snd_timer_stop1(struct snd_timer_instance *timeri, bool stop)
+ 	if (!timer)
+ 		return -EINVAL;
+ 	spin_lock_irqsave(&timer->lock, flags);
++	list_del_init(&timeri->ack_list);
++	list_del_init(&timeri->active_list);
+ 	if (!(timeri->flags & (SNDRV_TIMER_IFLG_RUNNING |
+ 			       SNDRV_TIMER_IFLG_START))) {
+ 		result = -EBUSY;
+ 		goto unlock;
+ 	}
+-	list_del_init(&timeri->ack_list);
+-	list_del_init(&timeri->active_list);
+ 	if (timer->card && timer->card->shutdown)
+ 		goto unlock;
+ 	if (stop) {
+@@ -665,23 +665,22 @@ static int snd_timer_stop1(struct snd_timer_instance *timeri, bool stop)
+ static int snd_timer_stop_slave(struct snd_timer_instance *timeri, bool stop)
+ {
+ 	unsigned long flags;
++	bool running;
+ 
+ 	spin_lock_irqsave(&slave_active_lock, flags);
+-	if (!(timeri->flags & SNDRV_TIMER_IFLG_RUNNING)) {
+-		spin_unlock_irqrestore(&slave_active_lock, flags);
+-		return -EBUSY;
+-	}
++	running = timeri->flags & SNDRV_TIMER_IFLG_RUNNING;
+ 	timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
+ 	if (timeri->timer) {
+ 		spin_lock(&timeri->timer->lock);
+ 		list_del_init(&timeri->ack_list);
+ 		list_del_init(&timeri->active_list);
+-		snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP :
+-				  SNDRV_TIMER_EVENT_PAUSE);
++		if (running)
++			snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP :
++					  SNDRV_TIMER_EVENT_PAUSE);
+ 		spin_unlock(&timeri->timer->lock);
+ 	}
+ 	spin_unlock_irqrestore(&slave_active_lock, flags);
+-	return 0;
++	return running ? 0 : -EBUSY;
+ }
+ 
+ /*
+diff --git a/sound/firewire/oxfw/oxfw-stream.c b/sound/firewire/oxfw/oxfw-stream.c
+index fff18b5d4e052..f4a702def3979 100644
+--- a/sound/firewire/oxfw/oxfw-stream.c
++++ b/sound/firewire/oxfw/oxfw-stream.c
+@@ -9,7 +9,7 @@
+ #include <linux/delay.h>
+ 
+ #define AVC_GENERIC_FRAME_MAXIMUM_BYTES	512
+-#define READY_TIMEOUT_MS	200
++#define READY_TIMEOUT_MS	600
+ 
+ /*
+  * According to datasheet of Oxford Semiconductor:
+@@ -367,6 +367,11 @@ int snd_oxfw_stream_start_duplex(struct snd_oxfw *oxfw)
+ 				// Just after changing sampling transfer frequency, many cycles are
+ 				// skipped for packet transmission.
+ 				tx_init_skip_cycles = 400;
++			} else if (oxfw->quirks & SND_OXFW_QUIRK_VOLUNTARY_RECOVERY) {
++				// It takes a bit time for target device to adjust event frequency
++				// It takes a bit of time for the target device to adjust its event
++				// frequency to the nominal event frequency in the isochronous
++				// packets from the ALSA oxfw driver.
+ 			} else {
+ 				replay_seq = true;
+ 			}
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index daf731364695b..b496f87841aec 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -25,6 +25,7 @@
+ #define MODEL_SATELLITE		0x00200f
+ #define MODEL_SCS1M		0x001000
+ #define MODEL_DUET_FW		0x01dddd
++#define MODEL_ONYX_1640I	0x001640
+ 
+ #define SPECIFIER_1394TA	0x00a02d
+ #define VERSION_AVC		0x010001
+@@ -192,6 +193,13 @@ static int detect_quirks(struct snd_oxfw *oxfw, const struct ieee1394_device_id
+ 		// OXFW971-based models may transfer events by blocking method.
+ 		if (!(oxfw->quirks & SND_OXFW_QUIRK_JUMBO_PAYLOAD))
+ 			oxfw->quirks |= SND_OXFW_QUIRK_BLOCKING_TRANSMISSION;
++
++		if (model == MODEL_ONYX_1640I) {
++			// Unless it receives packets without the NO_INFO field (i.e. with
++			// valid timestamps), the device transfers roughly half as many
++			// events per packet as expected.
++			oxfw->quirks |= SND_OXFW_QUIRK_IGNORE_NO_INFO_PACKET |
++					SND_OXFW_QUIRK_VOLUNTARY_RECOVERY;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/firewire/oxfw/oxfw.h b/sound/firewire/oxfw/oxfw.h
+index c13034f6c2ca5..d728e451a25c6 100644
+--- a/sound/firewire/oxfw/oxfw.h
++++ b/sound/firewire/oxfw/oxfw.h
+@@ -47,6 +47,11 @@ enum snd_oxfw_quirk {
+ 	// the device to process audio data even if the value is invalid in a point of
+ 	// IEC 61883-1/6.
+ 	SND_OXFW_QUIRK_IGNORE_NO_INFO_PACKET = 0x10,
++	// Loud Technologies Mackie Onyx 1640i seems to configure the OXFW971 ASIC so that it
++	// decides its event frequency according to the events in received isochronous packets.
++	// The device appears to perform media clock recovery voluntarily. During the recovery,
++	// packets with NO_INFO are ignored, thus the driver should transfer packets with timestamps.
++	SND_OXFW_QUIRK_VOLUNTARY_RECOVERY = 0x20,
+ };
+ 
+ /* This is an arbitrary number for convenience. */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 89f135a6a1f6d..ec17e40c710ea 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -638,13 +638,17 @@ static int azx_position_check(struct azx *chip, struct azx_dev *azx_dev)
+  * the update-IRQ timing.  The IRQ is issued before the data is
+  * actually processed.  So, we need to process it afterwards in a
+  * workqueue.
++ *
++ * Returns 1 if OK to proceed, 0 for delay handling, -1 for skipping update
+  */
+ static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev)
+ {
+ 	struct snd_pcm_substream *substream = azx_dev->core.substream;
++	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	int stream = substream->stream;
+ 	u32 wallclk;
+ 	unsigned int pos;
++	snd_pcm_uframes_t hwptr, target;
+ 
+ 	wallclk = azx_readl(chip, WALLCLK) - azx_dev->core.start_wallclk;
+ 	if (wallclk < (azx_dev->core.period_wallclk * 2) / 3)
+@@ -681,6 +685,24 @@ static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev)
+ 		/* NG - it's below the first next period boundary */
+ 		return chip->bdl_pos_adj ? 0 : -1;
+ 	azx_dev->core.start_wallclk += wallclk;
++
++	if (azx_dev->core.no_period_wakeup)
++		return 1; /* OK, no need to check period boundary */
++
++	if (runtime->hw_ptr_base != runtime->hw_ptr_interrupt)
++		return 1; /* OK, already in hwptr updating process */
++
++	/* check whether the period gets really elapsed */
++	pos = bytes_to_frames(runtime, pos);
++	hwptr = runtime->hw_ptr_base + pos;
++	if (hwptr < runtime->status->hw_ptr)
++		hwptr += runtime->buffer_size;
++	target = runtime->hw_ptr_interrupt + runtime->period_size;
++	if (hwptr < target) {
++		/* too early wakeup, process it later */
++		return chip->bdl_pos_adj ? 0 : -1;
++	}
++
+ 	return 1; /* OK, it's fine */
+ }
+ 
+@@ -859,31 +881,6 @@ static int azx_get_delay_from_fifo(struct azx *chip, struct azx_dev *azx_dev,
+ 	return substream->runtime->delay;
+ }
+ 
+-static unsigned int azx_skl_get_dpib_pos(struct azx *chip,
+-					 struct azx_dev *azx_dev)
+-{
+-	return _snd_hdac_chip_readl(azx_bus(chip),
+-				    AZX_REG_VS_SDXDPIB_XBASE +
+-				    (AZX_REG_VS_SDXDPIB_XINTERVAL *
+-				     azx_dev->core.index));
+-}
+-
+-/* get the current DMA position with correction on SKL+ chips */
+-static unsigned int azx_get_pos_skl(struct azx *chip, struct azx_dev *azx_dev)
+-{
+-	/* DPIB register gives a more accurate position for playback */
+-	if (azx_dev->core.substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		return azx_skl_get_dpib_pos(chip, azx_dev);
+-
+-	/* For capture, we need to read posbuf, but it requires a delay
+-	 * for the possible boundary overlap; the read of DPIB fetches the
+-	 * actual posbuf
+-	 */
+-	udelay(20);
+-	azx_skl_get_dpib_pos(chip, azx_dev);
+-	return azx_get_pos_posbuf(chip, azx_dev);
+-}
+-
+ static void __azx_shutdown_chip(struct azx *chip, bool skip_link_reset)
+ {
+ 	azx_stop_chip(chip);
+@@ -1580,7 +1577,7 @@ static void assign_position_fix(struct azx *chip, int fix)
+ 		[POS_FIX_POSBUF] = azx_get_pos_posbuf,
+ 		[POS_FIX_VIACOMBO] = azx_via_get_position,
+ 		[POS_FIX_COMBO] = azx_get_pos_lpib,
+-		[POS_FIX_SKL] = azx_get_pos_skl,
++		[POS_FIX_SKL] = azx_get_pos_posbuf,
+ 		[POS_FIX_FIFO] = azx_get_pos_fifo,
+ 	};
+ 
+@@ -2358,7 +2355,8 @@ static int azx_probe_continue(struct azx *chip)
+ 
+ out_free:
+ 	if (err < 0) {
+-		azx_free(chip);
++		pci_set_drvdata(pci, NULL);
++		snd_card_free(chip->card);
+ 		return err;
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b30e1843273bf..752857908e466 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2551,6 +2551,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),
+@@ -4367,6 +4368,16 @@ static void alc287_fixup_hp_gpio_led(struct hda_codec *codec,
+ 	alc_fixup_hp_gpio_led(codec, action, 0x10, 0);
+ }
+ 
++static void alc245_fixup_hp_gpio_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		spec->micmute_led_polarity = 1;
++	alc_fixup_hp_gpio_led(codec, action, 0, 0x04);
++}
++
+ /* turn on/off mic-mute LED per capture hook via VREF change */
+ static int vref_micmute_led_set(struct led_classdev *led_cdev,
+ 				enum led_brightness brightness)
+@@ -6419,6 +6430,44 @@ static void alc_fixup_no_int_mic(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* GPIO1 = amplifier on/off
++ * GPIO3 = mic mute LED
++ */
++static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	static const hda_nid_t conn[] = { 0x02 };
++
++	struct alc_spec *spec = codec->spec;
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x14, 0x90170110 },  /* front/high speakers */
++		{ 0x17, 0x90170130 },  /* back/bass speakers */
++		{ }
++	};
++
++	// enable mic-mute LED
++	alc_fixup_hp_gpio_led(codec, action, 0x00, 0x04);
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->micmute_led_polarity = 1;
++		/* needed for amp of back speakers */
++		spec->gpio_mask |= 0x01;
++		spec->gpio_dir |= 0x01;
++		snd_hda_apply_pincfgs(codec, pincfgs);
++		/* share DAC to have unified volume control */
++		snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn);
++		snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		/* need to toggle GPIO to enable the amp of back speakers */
++		alc_update_gpio_data(codec, 0x01, true);
++		msleep(100);
++		alc_update_gpio_data(codec, 0x01, false);
++		break;
++	}
++}
++
+ static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
+ 					  const struct hda_fixup *fix, int action)
+ {
+@@ -6571,6 +6620,7 @@ enum {
+ 	ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED,
+ 	ALC280_FIXUP_HP_9480M,
+ 	ALC245_FIXUP_HP_X360_AMP,
++	ALC285_FIXUP_HP_SPECTRE_X360_EB1,
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13,
+@@ -6683,6 +6733,7 @@ enum {
+ 	ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	ALC287_FIXUP_HP_GPIO_LED,
+ 	ALC256_FIXUP_HP_HEADSET_MIC,
++	ALC245_FIXUP_HP_GPIO_LED,
+ 	ALC236_FIXUP_DELL_AIO_HEADSET_MIC,
+ 	ALC282_FIXUP_ACER_DISABLE_LINEOUT,
+ 	ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+@@ -6701,6 +6752,7 @@ enum {
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ 	ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,
++	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7307,6 +7359,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC245_FIXUP_HP_X360_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc245_fixup_hp_x360_amp,
++		.chained = true,
++		.chain_id = ALC245_FIXUP_HP_GPIO_LED
+ 	},
+ 	[ALC288_FIXUP_DELL_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8264,6 +8318,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360,
+ 	},
++	[ALC285_FIXUP_HP_SPECTRE_X360_EB1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_spectre_x360_eb1
++	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_ideapad_s740_coef,
+@@ -8402,6 +8460,19 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc256_fixup_tongfang_reset_persistent_settings,
+ 	},
++	[ALC245_FIXUP_HP_GPIO_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc245_fixup_hp_gpio_led,
++	},
++	[ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11120 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8438,6 +8509,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x141f, "Acer Spin SP513-54N", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+@@ -8577,6 +8649,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8728, "HP EliteBook 840 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -8587,6 +8660,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation",
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
++	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -8598,6 +8672,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++	SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8636,6 +8712,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
++	SND_PCI_QUIRK(0x1043, 0x1970, "ASUS UX550VE", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1982, "ASUS B1400CEPE", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -8699,11 +8776,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x40a1, "Clevo NL40GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40c1, "Clevo NL40[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x40d1, "Clevo NL41DU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x5015, "Clevo NH5[58]H[HJK]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x5017, "Clevo NH7[79]H[HJK]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50a3, "Clevo NJ51GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b3, "Clevo NK50S[BEZ]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b6, "Clevo NK50S5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50e1, "Clevo NH5[58]HPQ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x50e2, "Clevo NH7[79]HPQ", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f2, "Clevo NH50E[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -9019,6 +9100,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+ 	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
++	{.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+ 	{.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 99c022be94a68..7d18b6639531c 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -93,7 +93,7 @@ static const struct reg_default cs42l42_reg_defaults[] = {
+ 	{ CS42L42_ASP_RX_INT_MASK,		0x1F },
+ 	{ CS42L42_ASP_TX_INT_MASK,		0x0F },
+ 	{ CS42L42_CODEC_INT_MASK,		0x03 },
+-	{ CS42L42_SRCPL_INT_MASK,		0xFF },
++	{ CS42L42_SRCPL_INT_MASK,		0x7F },
+ 	{ CS42L42_VPMON_INT_MASK,		0x01 },
+ 	{ CS42L42_PLL_LOCK_INT_MASK,		0x01 },
+ 	{ CS42L42_TSRS_PLUG_INT_MASK,		0x0F },
+@@ -130,7 +130,7 @@ static const struct reg_default cs42l42_reg_defaults[] = {
+ 	{ CS42L42_MIXER_CHA_VOL,		0x3F },
+ 	{ CS42L42_MIXER_ADC_VOL,		0x3F },
+ 	{ CS42L42_MIXER_CHB_VOL,		0x3F },
+-	{ CS42L42_EQ_COEF_IN0,			0x22 },
++	{ CS42L42_EQ_COEF_IN0,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN1,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN2,			0x00 },
+ 	{ CS42L42_EQ_COEF_IN3,			0x00 },
+@@ -845,11 +845,10 @@ static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream,
+ 
+ 	switch(substream->stream) {
+ 	case SNDRV_PCM_STREAM_CAPTURE:
+-		if (channels == 2) {
+-			val |= CS42L42_ASP_TX_CH2_AP_MASK;
+-			val |= width << CS42L42_ASP_TX_CH2_RES_SHIFT;
+-		}
+-		val |= width << CS42L42_ASP_TX_CH1_RES_SHIFT;
++		/* channel 2 on high LRCLK */
++		val = CS42L42_ASP_TX_CH2_AP_MASK |
++		      (width << CS42L42_ASP_TX_CH2_RES_SHIFT) |
++		      (width << CS42L42_ASP_TX_CH1_RES_SHIFT);
+ 
+ 		snd_soc_component_update_bits(component, CS42L42_ASP_TX_CH_AP_RES,
+ 				CS42L42_ASP_TX_CH1_AP_MASK | CS42L42_ASP_TX_CH2_AP_MASK |
+@@ -901,7 +900,6 @@ static int cs42l42_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 	struct snd_soc_component *component = dai->component;
+ 	struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component);
+ 	unsigned int regval;
+-	u8 fullScaleVol;
+ 	int ret;
+ 
+ 	if (mute) {
+@@ -972,20 +970,11 @@ static int cs42l42_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 		cs42l42->stream_use |= 1 << stream;
+ 
+ 		if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+-			/* Read the headphone load */
+-			regval = snd_soc_component_read(component, CS42L42_LOAD_DET_RCSTAT);
+-			if (((regval & CS42L42_RLA_STAT_MASK) >> CS42L42_RLA_STAT_SHIFT) ==
+-			    CS42L42_RLA_STAT_15_OHM) {
+-				fullScaleVol = CS42L42_HP_FULL_SCALE_VOL_MASK;
+-			} else {
+-				fullScaleVol = 0;
+-			}
+-
+-			/* Un-mute the headphone, set the full scale volume flag */
++			/* Un-mute the headphone */
+ 			snd_soc_component_update_bits(component, CS42L42_HP_CTL,
+ 						      CS42L42_HP_ANA_AMUTE_MASK |
+-						      CS42L42_HP_ANA_BMUTE_MASK |
+-						      CS42L42_HP_FULL_SCALE_VOL_MASK, fullScaleVol);
++						      CS42L42_HP_ANA_BMUTE_MASK,
++						      0);
+ 		}
+ 	}
+ 
+@@ -1674,12 +1663,15 @@ static void cs42l42_setup_hs_type_detect(struct cs42l42_private *cs42l42)
+ 			(1 << CS42L42_HS_CLAMP_DISABLE_SHIFT));
+ 
+ 	/* Enable the tip sense circuit */
++	regmap_update_bits(cs42l42->regmap, CS42L42_TSENSE_CTL,
++			   CS42L42_TS_INV_MASK, CS42L42_TS_INV_MASK);
++
+ 	regmap_update_bits(cs42l42->regmap, CS42L42_TIPSENSE_CTL,
+ 			CS42L42_TIP_SENSE_CTRL_MASK |
+ 			CS42L42_TIP_SENSE_INV_MASK |
+ 			CS42L42_TIP_SENSE_DEBOUNCE_MASK,
+ 			(3 << CS42L42_TIP_SENSE_CTRL_SHIFT) |
+-			(0 << CS42L42_TIP_SENSE_INV_SHIFT) |
++			(!cs42l42->ts_inv << CS42L42_TIP_SENSE_INV_SHIFT) |
+ 			(2 << CS42L42_TIP_SENSE_DEBOUNCE_SHIFT));
+ 
+ 	/* Save the initial status of the tip sense */
+@@ -1723,10 +1715,6 @@ static int cs42l42_handle_device_data(struct device *dev,
+ 		cs42l42->ts_inv = CS42L42_TS_INV_DIS;
+ 	}
+ 
+-	regmap_update_bits(cs42l42->regmap, CS42L42_TSENSE_CTL,
+-			CS42L42_TS_INV_MASK,
+-			(cs42l42->ts_inv << CS42L42_TS_INV_SHIFT));
+-
+ 	ret = device_property_read_u32(dev, "cirrus,ts-dbnc-rise", &val);
+ 	if (!ret) {
+ 		switch (val) {
+@@ -1937,8 +1925,9 @@ static int cs42l42_i2c_probe(struct i2c_client *i2c_client,
+ 			NULL, cs42l42_irq_thread,
+ 			IRQF_ONESHOT | IRQF_TRIGGER_LOW,
+ 			"cs42l42", cs42l42);
+-
+-	if (ret != 0)
++	if (ret == -EPROBE_DEFER)
++		goto err_disable;
++	else if (ret != 0)
+ 		dev_err(&i2c_client->dev,
+ 			"Failed to request IRQ: %d\n", ret);
+ 
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index d885ced34f606..bc5d68c53e5ab 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -4859,7 +4859,7 @@ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ 
+ 	snd_soc_component_init_regmap(component, wcd->regmap);
+ 	/* Class-H Init*/
+-	wcd->clsh_ctrl = wcd_clsh_ctrl_alloc(component, wcd->version);
++	wcd->clsh_ctrl = wcd_clsh_ctrl_alloc(component, WCD9335);
+ 	if (IS_ERR(wcd->clsh_ctrl))
+ 		return PTR_ERR(wcd->clsh_ctrl);
+ 
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 5e382b5c9d457..ece9b58ab52d3 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1225,6 +1225,7 @@ int rsnd_node_count(struct rsnd_priv *priv, struct device_node *node, char *name
+ 		if (i < 0) {
+ 			dev_err(dev, "strange node numbering (%s)",
+ 				of_node_full_name(node));
++			of_node_put(np);
+ 			return 0;
+ 		}
+ 		i++;
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 583f2381cfc82..e926985bb2f87 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2599,6 +2599,7 @@ int snd_soc_component_initialize(struct snd_soc_component *component,
+ 	INIT_LIST_HEAD(&component->dai_list);
+ 	INIT_LIST_HEAD(&component->dobj_list);
+ 	INIT_LIST_HEAD(&component->card_list);
++	INIT_LIST_HEAD(&component->list);
+ 	mutex_init(&component->io_mutex);
+ 
+ 	component->name = fmt_single_name(dev, &component->id);
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index cc9585bfa4e9f..1bb2dcf37ffe9 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -2598,6 +2598,15 @@ static int sof_widget_unload(struct snd_soc_component *scomp,
+ 
+ 		/* power down the pipeline schedule core */
+ 		pipeline = swidget->private;
++
++		/*
++		 * Runtime PM should still function normally if topology loading fails and
++		 * its components are unloaded. Do not power down the primary core so that the
++		 * CTX_SAVE IPC can succeed during runtime suspend.
++		 */
++		if (pipeline->core == SOF_DSP_PRIMARY_CORE)
++			break;
++
+ 		ret = snd_sof_dsp_core_power_down(sdev, 1 << pipeline->core);
+ 		if (ret < 0)
+ 			dev_err(scomp->dev, "error: powering down pipeline schedule core %d\n",
+diff --git a/sound/soc/tegra/tegra_asoc_machine.c b/sound/soc/tegra/tegra_asoc_machine.c
+index 735909310a262..78fb423df550b 100644
+--- a/sound/soc/tegra/tegra_asoc_machine.c
++++ b/sound/soc/tegra/tegra_asoc_machine.c
+@@ -341,9 +341,34 @@ tegra_machine_parse_phandle(struct device *dev, const char *name)
+ 	return np;
+ }
+ 
++static void tegra_machine_unregister_codec(void *pdev)
++{
++	platform_device_unregister(pdev);
++}
++
++static int tegra_machine_register_codec(struct device *dev, const char *name)
++{
++	struct platform_device *pdev;
++	int err;
++
++	if (!name)
++		return 0;
++
++	pdev = platform_device_register_simple(name, -1, NULL, 0);
++	if (IS_ERR(pdev))
++		return PTR_ERR(pdev);
++
++	err = devm_add_action_or_reset(dev, tegra_machine_unregister_codec,
++				       pdev);
++	if (err)
++		return err;
++
++	return 0;
++}
++
+ int tegra_asoc_machine_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np_codec, *np_i2s;
++	struct device_node *np_codec, *np_i2s, *np_ac97;
+ 	const struct tegra_asoc_data *asoc;
+ 	struct device *dev = &pdev->dev;
+ 	struct tegra_machine *machine;
+@@ -404,17 +429,30 @@ int tegra_asoc_machine_probe(struct platform_device *pdev)
+ 			return err;
+ 	}
+ 
+-	np_codec = tegra_machine_parse_phandle(dev, "nvidia,audio-codec");
+-	if (IS_ERR(np_codec))
+-		return PTR_ERR(np_codec);
++	if (asoc->set_ac97) {
++		err = tegra_machine_register_codec(dev, asoc->codec_dev_name);
++		if (err)
++			return err;
++
++		np_ac97 = tegra_machine_parse_phandle(dev, "nvidia,ac97-controller");
++		if (IS_ERR(np_ac97))
++			return PTR_ERR(np_ac97);
+ 
+-	np_i2s = tegra_machine_parse_phandle(dev, "nvidia,i2s-controller");
+-	if (IS_ERR(np_i2s))
+-		return PTR_ERR(np_i2s);
++		card->dai_link->cpus->of_node = np_ac97;
++		card->dai_link->platforms->of_node = np_ac97;
++	} else {
++		np_codec = tegra_machine_parse_phandle(dev, "nvidia,audio-codec");
++		if (IS_ERR(np_codec))
++			return PTR_ERR(np_codec);
+ 
+-	card->dai_link->cpus->of_node = np_i2s;
+-	card->dai_link->codecs->of_node = np_codec;
+-	card->dai_link->platforms->of_node = np_i2s;
++		np_i2s = tegra_machine_parse_phandle(dev, "nvidia,i2s-controller");
++		if (IS_ERR(np_i2s))
++			return PTR_ERR(np_i2s);
++
++		card->dai_link->cpus->of_node = np_i2s;
++		card->dai_link->codecs->of_node = np_codec;
++		card->dai_link->platforms->of_node = np_i2s;
++	}
+ 
+ 	if (asoc->add_common_controls) {
+ 		card->controls = tegra_machine_controls;
+@@ -589,6 +627,7 @@ static struct snd_soc_card snd_soc_tegra_wm9712 = {
+ static const struct tegra_asoc_data tegra_wm9712_data = {
+ 	.card = &snd_soc_tegra_wm9712,
+ 	.add_common_dapm_widgets = true,
++	.codec_dev_name = "wm9712-codec",
+ 	.set_ac97 = true,
+ };
+ 
+@@ -686,6 +725,7 @@ static struct snd_soc_dai_link tegra_tlv320aic23_dai = {
+ };
+ 
+ static struct snd_soc_card snd_soc_tegra_trimslice = {
++	.name = "tegra-trimslice",
+ 	.components = "codec:tlv320aic23",
+ 	.dai_link = &tegra_tlv320aic23_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_asoc_machine.h b/sound/soc/tegra/tegra_asoc_machine.h
+index 8ee0ec814f67c..d6a8d13205516 100644
+--- a/sound/soc/tegra/tegra_asoc_machine.h
++++ b/sound/soc/tegra/tegra_asoc_machine.h
+@@ -13,6 +13,7 @@ struct snd_soc_pcm_runtime;
+ 
+ struct tegra_asoc_data {
+ 	unsigned int (*mclk_rate)(unsigned int srate);
++	const char *codec_dev_name;
+ 	struct snd_soc_card *card;
+ 	unsigned int mclk_id;
+ 	bool hp_jack_gpio_active_low;
+diff --git a/sound/synth/emux/emux.c b/sound/synth/emux/emux.c
+index 49d1976a132c0..5ed8e36d2e043 100644
+--- a/sound/synth/emux/emux.c
++++ b/sound/synth/emux/emux.c
+@@ -88,7 +88,7 @@ int snd_emux_register(struct snd_emux *emu, struct snd_card *card, int index, ch
+ 	emu->name = kstrdup(name, GFP_KERNEL);
+ 	emu->voices = kcalloc(emu->max_voices, sizeof(struct snd_emux_voice),
+ 			      GFP_KERNEL);
+-	if (emu->voices == NULL)
++	if (emu->name == NULL || emu->voices == NULL)
+ 		return -ENOMEM;
+ 
+ 	/* create soundfont list */
+diff --git a/sound/usb/6fire/comm.c b/sound/usb/6fire/comm.c
+index 43a2a62d66f7e..49629d4bb327a 100644
+--- a/sound/usb/6fire/comm.c
++++ b/sound/usb/6fire/comm.c
+@@ -95,7 +95,7 @@ static int usb6fire_comm_send_buffer(u8 *buffer, struct usb_device *dev)
+ 	int actual_len;
+ 
+ 	ret = usb_interrupt_msg(dev, usb_sndintpipe(dev, COMM_EP),
+-			buffer, buffer[1] + 2, &actual_len, HZ);
++			buffer, buffer[1] + 2, &actual_len, 1000);
+ 	if (ret < 0)
+ 		return ret;
+ 	else if (actual_len != buffer[1] + 2)
+diff --git a/sound/usb/6fire/firmware.c b/sound/usb/6fire/firmware.c
+index 8981e61f2da4a..c51abc54d2f84 100644
+--- a/sound/usb/6fire/firmware.c
++++ b/sound/usb/6fire/firmware.c
+@@ -160,7 +160,7 @@ static int usb6fire_fw_ezusb_write(struct usb_device *device,
+ {
+ 	return usb_control_msg_send(device, 0, type,
+ 				    USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				    value, 0, data, len, HZ, GFP_KERNEL);
++				    value, 0, data, len, 1000, GFP_KERNEL);
+ }
+ 
+ static int usb6fire_fw_ezusb_read(struct usb_device *device,
+@@ -168,7 +168,7 @@ static int usb6fire_fw_ezusb_read(struct usb_device *device,
+ {
+ 	return usb_control_msg_recv(device, 0, type,
+ 				    USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-				    value, 0, data, len, HZ, GFP_KERNEL);
++				    value, 0, data, len, 1000, GFP_KERNEL);
+ }
+ 
+ static int usb6fire_fw_fpga_write(struct usb_device *device,
+@@ -178,7 +178,7 @@ static int usb6fire_fw_fpga_write(struct usb_device *device,
+ 	int ret;
+ 
+ 	ret = usb_bulk_msg(device, usb_sndbulkpipe(device, FPGA_EP), data, len,
+-			&actual_len, HZ);
++			&actual_len, 1000);
+ 	if (ret < 0)
+ 		return ret;
+ 	else if (actual_len != len)
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index eb216fef4ba75..674477be397eb 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -414,6 +414,7 @@ static int line6_parse_audio_format_rates_quirk(struct snd_usb_audio *chip,
+ 	case USB_ID(0x0e41, 0x4242): /* Line6 Helix Rack */
+ 	case USB_ID(0x0e41, 0x4244): /* Line6 Helix LT */
+ 	case USB_ID(0x0e41, 0x4246): /* Line6 HX-Stomp */
++	case USB_ID(0x0e41, 0x4253): /* Line6 HX-Stomp XL */
+ 	case USB_ID(0x0e41, 0x4247): /* Line6 Pod Go */
+ 	case USB_ID(0x0e41, 0x4248): /* Line6 Helix >= fw 2.82 */
+ 	case USB_ID(0x0e41, 0x4249): /* Line6 Helix Rack >= fw 2.82 */
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 9602929b7de90..59faa5a9a7141 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -113,12 +113,12 @@ int line6_send_raw_message(struct usb_line6 *line6, const char *buffer,
+ 			retval = usb_interrupt_msg(line6->usbdev,
+ 						usb_sndintpipe(line6->usbdev, properties->ep_ctrl_w),
+ 						(char *)frag_buf, frag_size,
+-						&partial, LINE6_TIMEOUT * HZ);
++						&partial, LINE6_TIMEOUT);
+ 		} else {
+ 			retval = usb_bulk_msg(line6->usbdev,
+ 						usb_sndbulkpipe(line6->usbdev, properties->ep_ctrl_w),
+ 						(char *)frag_buf, frag_size,
+-						&partial, LINE6_TIMEOUT * HZ);
++						&partial, LINE6_TIMEOUT);
+ 		}
+ 
+ 		if (retval) {
+@@ -347,7 +347,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 				   (datalen << 8) | 0x21, address, NULL, 0,
+-				   LINE6_TIMEOUT * HZ, GFP_KERNEL);
++				   LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(line6->ifcdev, "read request failed (error %d)\n", ret);
+ 		goto exit;
+@@ -360,7 +360,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 		ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 					   0x0012, 0x0000, &len, 1,
+-					   LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					   LINE6_TIMEOUT, GFP_KERNEL);
+ 		if (ret) {
+ 			dev_err(line6->ifcdev,
+ 				"receive length failed (error %d)\n", ret);
+@@ -387,7 +387,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 	/* receive the result: */
+ 	ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-				   0x0013, 0x0000, data, datalen, LINE6_TIMEOUT * HZ,
++				   0x0013, 0x0000, data, datalen, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 	if (ret)
+ 		dev_err(line6->ifcdev, "read failed (error %d)\n", ret);
+@@ -417,7 +417,7 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-				   0x0022, address, data, datalen, LINE6_TIMEOUT * HZ,
++				   0x0022, address, data, datalen, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(line6->ifcdev,
+@@ -430,7 +430,7 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 
+ 		ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+-					   0x0012, 0x0000, status, 1, LINE6_TIMEOUT * HZ,
++					   0x0012, 0x0000, status, 1, LINE6_TIMEOUT,
+ 					   GFP_KERNEL);
+ 		if (ret) {
+ 			dev_err(line6->ifcdev,
+diff --git a/sound/usb/line6/driver.h b/sound/usb/line6/driver.h
+index 71d3da1db8c81..ecf3a2b39c7eb 100644
+--- a/sound/usb/line6/driver.h
++++ b/sound/usb/line6/driver.h
+@@ -27,7 +27,7 @@
+ #define LINE6_FALLBACK_INTERVAL 10
+ #define LINE6_FALLBACK_MAXPACKETSIZE 16
+ 
+-#define LINE6_TIMEOUT 1
++#define LINE6_TIMEOUT 1000
+ #define LINE6_BUFSIZE_LISTEN 64
+ #define LINE6_MIDI_MESSAGE_MAXLEN 256
+ 
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index 28794a35949d4..b24bc82f89e37 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -190,7 +190,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 	ret = usb_control_msg_send(usbdev, 0,
+ 					0x67, USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					0x11, 0,
+-					NULL, 0, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					NULL, 0, LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(pod->line6.ifcdev, "read request failed (error %d)\n", ret);
+ 		goto exit;
+@@ -200,7 +200,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 	ret = usb_control_msg_recv(usbdev, 0, 0x67,
+ 					USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 					0x11, 0x0,
+-					init_bytes, 3, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					init_bytes, 3, LINE6_TIMEOUT, GFP_KERNEL);
+ 	if (ret) {
+ 		dev_err(pod->line6.ifcdev,
+ 			"receive length failed (error %d)\n", ret);
+@@ -220,7 +220,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 					USB_REQ_SET_FEATURE,
+ 					USB_TYPE_STANDARD | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					1, 0,
+-					NULL, 0, LINE6_TIMEOUT * HZ, GFP_KERNEL);
++					NULL, 0, LINE6_TIMEOUT, GFP_KERNEL);
+ exit:
+ 	return ret;
+ }
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index 4e5693c97aa42..e33df58740a91 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -128,7 +128,7 @@ static int toneport_send_cmd(struct usb_device *usbdev, int cmd1, int cmd2)
+ 
+ 	ret = usb_control_msg_send(usbdev, 0, 0x67,
+ 				   USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+-				   cmd1, cmd2, NULL, 0, LINE6_TIMEOUT * HZ,
++				   cmd1, cmd2, NULL, 0, LINE6_TIMEOUT,
+ 				   GFP_KERNEL);
+ 
+ 	if (ret) {
+diff --git a/sound/usb/misc/ua101.c b/sound/usb/misc/ua101.c
+index 5834d1dc317ef..4f6b20ed29dd7 100644
+--- a/sound/usb/misc/ua101.c
++++ b/sound/usb/misc/ua101.c
+@@ -1000,7 +1000,7 @@ static int detect_usb_format(struct ua101 *ua)
+ 		fmt_playback->bSubframeSize * ua->playback.channels;
+ 
+ 	epd = &ua->intf[INTF_CAPTURE]->altsetting[1].endpoint[0].desc;
+-	if (!usb_endpoint_is_isoc_in(epd)) {
++	if (!usb_endpoint_is_isoc_in(epd) || usb_endpoint_maxp(epd) == 0) {
+ 		dev_err(&ua->dev->dev, "invalid capture endpoint\n");
+ 		return -ENXIO;
+ 	}
+@@ -1008,7 +1008,7 @@ static int detect_usb_format(struct ua101 *ua)
+ 	ua->capture.max_packet_bytes = usb_endpoint_maxp(epd);
+ 
+ 	epd = &ua->intf[INTF_PLAYBACK]->altsetting[1].endpoint[0].desc;
+-	if (!usb_endpoint_is_isoc_out(epd)) {
++	if (!usb_endpoint_is_isoc_out(epd) || usb_endpoint_maxp(epd) == 0) {
+ 		dev_err(&ua->dev->dev, "invalid playback endpoint\n");
+ 		return -ENXIO;
+ 	}
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index db65f77eb131f..1565ee1348392 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1899,6 +1899,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2),	/* JBL Quantum 800 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f4c, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
+index 797699462cd8e..8fd63a067308a 100644
+--- a/tools/arch/x86/lib/insn.c
++++ b/tools/arch/x86/lib/insn.c
+@@ -13,6 +13,7 @@
+ #endif
+ #include "../include/asm/inat.h" /* __ignore_sync_check__ */
+ #include "../include/asm/insn.h" /* __ignore_sync_check__ */
++#include "../include/asm-generic/unaligned.h" /* __ignore_sync_check__ */
+ 
+ #include <linux/errno.h>
+ #include <linux/kconfig.h>
+@@ -37,10 +38,10 @@
+ 	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
+ 
+ #define __get_next(t, insn)	\
+-	({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
++	({ t r = get_unaligned((t *)(insn)->next_byte); (insn)->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+ 
+ #define __peek_nbyte_next(t, insn, n)	\
+-	({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
++	({ t r = get_unaligned((t *)(insn)->next_byte + n); leXX_to_cpu(t, r); })
+ 
+ #define get_next(t, insn)	\
+ 	({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 9d709b4276655..a33238a100a0c 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -308,18 +308,12 @@ static void show_prog_metadata(int fd, __u32 num_maps)
+ 		if (printed_header)
+ 			jsonw_end_object(json_wtr);
+ 	} else {
+-		json_writer_t *btf_wtr = jsonw_new(stdout);
++		json_writer_t *btf_wtr;
+ 		struct btf_dumper d = {
+ 			.btf = btf,
+-			.jw = btf_wtr,
+ 			.is_plain_text = true,
+ 		};
+ 
+-		if (!btf_wtr) {
+-			p_err("jsonw alloc failed");
+-			goto out_free;
+-		}
+-
+ 		for (i = 0; i < vlen; i++, vsi++) {
+ 			t_var = btf__type_by_id(btf, vsi->type);
+ 			name = btf__name_by_offset(btf, t_var->name_off);
+@@ -329,6 +323,14 @@ static void show_prog_metadata(int fd, __u32 num_maps)
+ 
+ 			if (!printed_header) {
+ 				printf("\tmetadata:");
++
++				btf_wtr = jsonw_new(stdout);
++				if (!btf_wtr) {
++					p_err("jsonw alloc failed");
++					goto out_free;
++				}
++				d.jw = btf_wtr;
++
+ 				printed_header = true;
+ 			}
+ 
+diff --git a/tools/include/asm-generic/unaligned.h b/tools/include/asm-generic/unaligned.h
+new file mode 100644
+index 0000000000000..47387c607035e
+--- /dev/null
++++ b/tools/include/asm-generic/unaligned.h
+@@ -0,0 +1,23 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++/*
++ * Copied from the kernel sources to tools/perf/:
++ */
++
++#ifndef __TOOLS_LINUX_ASM_GENERIC_UNALIGNED_H
++#define __TOOLS_LINUX_ASM_GENERIC_UNALIGNED_H
++
++#define __get_unaligned_t(type, ptr) ({						\
++	const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr);	\
++	__pptr->x;								\
++})
++
++#define __put_unaligned_t(type, val, ptr) do {					\
++	struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr);		\
++	__pptr->x = (val);							\
++} while (0)
++
++#define get_unaligned(ptr)	__get_unaligned_t(typeof(*(ptr)), (ptr))
++#define put_unaligned(val, ptr) __put_unaligned_t(typeof(*(ptr)), (val), (ptr))
++
++#endif /* __TOOLS_LINUX_ASM_GENERIC_UNALIGNED_H */
++
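
The packed-struct cast in these helpers is the portable way to tell the compiler that a load or store may be misaligned, so it emits byte-wise or unaligned-tolerant accesses on strict-alignment targets. A usage sketch (hypothetical packet buffer; assumes this header is included and __packed is defined, as it is elsewhere in the tools headers):

#include <stdint.h>

/* Read the 32-bit field that sits at byte offset 3 of a packet. */
static uint32_t read_len_field(const unsigned char *pkt)
{
	/*
	 * pkt + 3 is not 4-byte aligned; a plain *(const uint32_t *) load
	 * could trap on strict-alignment CPUs, whereas get_unaligned()
	 * compiles to a safe access everywhere.
	 */
	return get_unaligned((const uint32_t *)(pkt + 3));
}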
+diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
+index 86dcac44f32f6..2c3ac0edf6ad3 100644
+--- a/tools/lib/bpf/bpf.c
++++ b/tools/lib/bpf/bpf.c
+@@ -480,6 +480,7 @@ int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value)
+ int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags)
+ {
+ 	union bpf_attr attr;
++	int ret;
+ 
+ 	memset(&attr, 0, sizeof(attr));
+ 	attr.map_fd = fd;
+@@ -487,7 +488,8 @@ int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, _
+ 	attr.value = ptr_to_u64(value);
+ 	attr.flags = flags;
+ 
+-	return sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
++	ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
++	return libbpf_err_errno(ret);
+ }
+ 
+ int bpf_map_delete_elem(int fd, const void *key)
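
With libbpf_err_errno() applied, the wrapper follows libbpf's convention of returning a negative error code while leaving errno set, so callers can branch on either. A caller-side sketch (map_fd, key and value are assumed to describe a valid BPF map):

#include <errno.h>
#include <bpf/bpf.h>

static int drain_one(int map_fd, const void *key, void *value)
{
	if (bpf_map_lookup_and_delete_elem_flags(map_fd, key, value, 0) < 0) {
		if (errno == ENOENT)
			return 0;	/* nothing queued: not an error */
		return -errno;		/* genuine failure */
	}
	return 1;			/* one element fetched and removed */
}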
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index 09ebe3db5f2f8..e4aa9996a5501 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -40,7 +40,7 @@ enum bpf_enum_value_kind {
+ #define __CORE_RELO(src, field, info)					      \
+ 	__builtin_preserve_field_info((src)->field, BPF_FIELD_##info)
+ 
+-#if __BYTE_ORDER == __LITTLE_ENDIAN
++#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ #define __CORE_BITFIELD_PROBE_READ(dst, src, fld)			      \
+ 	bpf_probe_read_kernel(						      \
+ 			(void *)dst,				      \
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 7ff3d5ce44f99..7ab9f702e72ac 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -231,17 +231,23 @@ static int btf_parse_hdr(struct btf *btf)
+ 		}
+ 		btf_bswap_hdr(hdr);
+ 	} else if (hdr->magic != BTF_MAGIC) {
+-		pr_debug("Invalid BTF magic:%x\n", hdr->magic);
++		pr_debug("Invalid BTF magic: %x\n", hdr->magic);
+ 		return -EINVAL;
+ 	}
+ 
+-	meta_left = btf->raw_size - sizeof(*hdr);
+-	if (meta_left < hdr->str_off + hdr->str_len) {
+-		pr_debug("Invalid BTF total size:%u\n", btf->raw_size);
++	if (btf->raw_size < hdr->hdr_len) {
++		pr_debug("BTF header len %u larger than data size %u\n",
++			 hdr->hdr_len, btf->raw_size);
+ 		return -EINVAL;
+ 	}
+ 
+-	if (hdr->type_off + hdr->type_len > hdr->str_off) {
++	meta_left = btf->raw_size - hdr->hdr_len;
++	if (meta_left < (long long)hdr->str_off + hdr->str_len) {
++		pr_debug("Invalid BTF total size: %u\n", btf->raw_size);
++		return -EINVAL;
++	}
++
++	if ((long long)hdr->type_off + hdr->type_len > hdr->str_off) {
+ 		pr_debug("Invalid BTF data sections layout: type data at %u + %u, strings data at %u + %u\n",
+ 			 hdr->type_off, hdr->type_len, hdr->str_off, hdr->str_len);
+ 		return -EINVAL;
+@@ -2899,8 +2905,10 @@ int btf__dedup(struct btf *btf, struct btf_ext *btf_ext,
+ 		return libbpf_err(-EINVAL);
+ 	}
+ 
+-	if (btf_ensure_modifiable(btf))
+-		return libbpf_err(-ENOMEM);
++	if (btf_ensure_modifiable(btf)) {
++		err = -ENOMEM;
++		goto done;
++	}
+ 
+ 	err = btf_dedup_prep(d);
+ 	if (err) {
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index f1bc09e606cd1..994266b565c1a 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2990,6 +2990,12 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
+ 		}
+ 	}
+ 
++	if (!obj->efile.symbols) {
++		pr_warn("elf: couldn't find symbol table in %s, stripped object file?\n",
++			obj->path);
++		return -ENOENT;
++	}
++
+ 	scn = NULL;
+ 	while ((scn = elf_nextscn(elf, scn)) != NULL) {
+ 		idx++;
+diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
+index b22b50c1b173e..9cf66702fa8dd 100644
+--- a/tools/lib/bpf/skel_internal.h
++++ b/tools/lib/bpf/skel_internal.h
+@@ -105,10 +105,12 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
+ 	err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
+ 	if (err < 0 || (int)attr.test.retval < 0) {
+ 		opts->errstr = "failed to execute loader prog";
+-		if (err < 0)
++		if (err < 0) {
+ 			err = -errno;
+-		else
++		} else {
+ 			err = (int)attr.test.retval;
++			errno = -err;
++		}
+ 		goto out;
+ 	}
+ 	err = 0;
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index 0893436cc09f8..77b51600e3e94 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -659,6 +659,26 @@ const char *arch_nop_insn(int len)
+ 	return nops[len-1];
+ }
+ 
++#define BYTE_RET	0xC3
++
++const char *arch_ret_insn(int len)
++{
++	static const char ret[5][5] = {
++		{ BYTE_RET },
++		{ BYTE_RET, BYTES_NOP1 },
++		{ BYTE_RET, BYTES_NOP2 },
++		{ BYTE_RET, BYTES_NOP3 },
++		{ BYTE_RET, BYTES_NOP4 },
++	};
++
++	if (len < 1 || len > 5) {
++		WARN("invalid RET size: %d\n", len);
++		return NULL;
++	}
++
++	return ret[len-1];
++}
++
+ /* asm/alternative.h ? */
+ 
+ #define ALTINSTR_FLAG_INV	(1 << 15)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index e5947fbb9e7a6..c39eca5399851 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -173,6 +173,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"rewind_stack_do_exit",
+ 		"kunit_try_catch_throw",
+ 		"xen_start_kernel",
++		"cpu_bringup_and_idle",
+ 	};
+ 
+ 	if (!func)
+@@ -828,6 +829,79 @@ static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *i
+ 	return insn->reloc;
+ }
+ 
++static void remove_insn_ops(struct instruction *insn)
++{
++	struct stack_op *op, *tmp;
++
++	list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) {
++		list_del(&op->list);
++		free(op);
++	}
++}
++
++static void add_call_dest(struct objtool_file *file, struct instruction *insn,
++			  struct symbol *dest, bool sibling)
++{
++	struct reloc *reloc = insn_reloc(file, insn);
++
++	insn->call_dest = dest;
++	if (!dest)
++		return;
++
++	if (insn->call_dest->static_call_tramp) {
++		list_add_tail(&insn->call_node,
++			      &file->static_call_list);
++	}
++
++	/*
++	 * Many compilers cannot disable KCOV with a function attribute
++	 * so they need a little help, NOP out any KCOV calls from noinstr
++	 * text.
++	 */
++	if (insn->sec->noinstr &&
++	    !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) {
++		if (reloc) {
++			reloc->type = R_NONE;
++			elf_write_reloc(file->elf, reloc);
++		}
++
++		elf_write_insn(file->elf, insn->sec,
++			       insn->offset, insn->len,
++			       sibling ? arch_ret_insn(insn->len)
++			               : arch_nop_insn(insn->len));
++
++		insn->type = sibling ? INSN_RETURN : INSN_NOP;
++	}
++
++	if (mcount && !strcmp(insn->call_dest->name, "__fentry__")) {
++		if (sibling)
++			WARN_FUNC("Tail call to __fentry__ !?!?", insn->sec, insn->offset);
++
++		if (reloc) {
++			reloc->type = R_NONE;
++			elf_write_reloc(file->elf, reloc);
++		}
++
++		elf_write_insn(file->elf, insn->sec,
++			       insn->offset, insn->len,
++			       arch_nop_insn(insn->len));
++
++		insn->type = INSN_NOP;
++
++		list_add_tail(&insn->mcount_loc_node,
++			      &file->mcount_loc_list);
++	}
++
++	/*
++	 * Whatever stack impact regular CALLs have, should be undone
++	 * by the RETURN of the called function.
++	 *
++	 * Annotated intra-function calls retain the stack_ops but
++	 * are converted to JUMP, see read_intra_function_calls().
++	 */
++	remove_insn_ops(insn);
++}
++
+ /*
+  * Find the destination instructions for all jumps.
+  */
+@@ -866,11 +940,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ 			continue;
+ 		} else if (insn->func) {
+ 			/* internal or external sibling call (with reloc) */
+-			insn->call_dest = reloc->sym;
+-			if (insn->call_dest->static_call_tramp) {
+-				list_add_tail(&insn->call_node,
+-					      &file->static_call_list);
+-			}
++			add_call_dest(file, insn, reloc->sym, true);
+ 			continue;
+ 		} else if (reloc->sym->sec->idx) {
+ 			dest_sec = reloc->sym->sec;
+@@ -926,13 +996,8 @@ static int add_jump_destinations(struct objtool_file *file)
+ 
+ 			} else if (insn->jump_dest->func->pfunc != insn->func->pfunc &&
+ 				   insn->jump_dest->offset == insn->jump_dest->func->offset) {
+-
+ 				/* internal sibling call (without reloc) */
+-				insn->call_dest = insn->jump_dest->func;
+-				if (insn->call_dest->static_call_tramp) {
+-					list_add_tail(&insn->call_node,
+-						      &file->static_call_list);
+-				}
++				add_call_dest(file, insn, insn->jump_dest->func, true);
+ 			}
+ 		}
+ 	}
+@@ -940,16 +1005,6 @@ static int add_jump_destinations(struct objtool_file *file)
+ 	return 0;
+ }
+ 
+-static void remove_insn_ops(struct instruction *insn)
+-{
+-	struct stack_op *op, *tmp;
+-
+-	list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) {
+-		list_del(&op->list);
+-		free(op);
+-	}
+-}
+-
+ static struct symbol *find_call_destination(struct section *sec, unsigned long offset)
+ {
+ 	struct symbol *call_dest;
+@@ -968,6 +1023,7 @@ static int add_call_destinations(struct objtool_file *file)
+ {
+ 	struct instruction *insn;
+ 	unsigned long dest_off;
++	struct symbol *dest;
+ 	struct reloc *reloc;
+ 
+ 	for_each_insn(file, insn) {
+@@ -977,7 +1033,9 @@ static int add_call_destinations(struct objtool_file *file)
+ 		reloc = insn_reloc(file, insn);
+ 		if (!reloc) {
+ 			dest_off = arch_jump_destination(insn);
+-			insn->call_dest = find_call_destination(insn->sec, dest_off);
++			dest = find_call_destination(insn->sec, dest_off);
++
++			add_call_dest(file, insn, dest, false);
+ 
+ 			if (insn->ignore)
+ 				continue;
+@@ -995,9 +1053,8 @@ static int add_call_destinations(struct objtool_file *file)
+ 
+ 		} else if (reloc->sym->type == STT_SECTION) {
+ 			dest_off = arch_dest_reloc_offset(reloc->addend);
+-			insn->call_dest = find_call_destination(reloc->sym->sec,
+-								dest_off);
+-			if (!insn->call_dest) {
++			dest = find_call_destination(reloc->sym->sec, dest_off);
++			if (!dest) {
+ 				WARN_FUNC("can't find call dest symbol at %s+0x%lx",
+ 					  insn->sec, insn->offset,
+ 					  reloc->sym->sec->name,
+@@ -1005,6 +1062,8 @@ static int add_call_destinations(struct objtool_file *file)
+ 				return -1;
+ 			}
+ 
++			add_call_dest(file, insn, dest, false);
++
+ 		} else if (arch_is_retpoline(reloc->sym)) {
+ 			/*
+ 			 * Retpoline calls are really dynamic calls in
+@@ -1020,55 +1079,7 @@ static int add_call_destinations(struct objtool_file *file)
+ 			continue;
+ 
+ 		} else
+-			insn->call_dest = reloc->sym;
+-
+-		if (insn->call_dest && insn->call_dest->static_call_tramp) {
+-			list_add_tail(&insn->call_node,
+-				      &file->static_call_list);
+-		}
+-
+-		/*
+-		 * Many compilers cannot disable KCOV with a function attribute
+-		 * so they need a little help, NOP out any KCOV calls from noinstr
+-		 * text.
+-		 */
+-		if (insn->sec->noinstr &&
+-		    !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) {
+-			if (reloc) {
+-				reloc->type = R_NONE;
+-				elf_write_reloc(file->elf, reloc);
+-			}
+-
+-			elf_write_insn(file->elf, insn->sec,
+-				       insn->offset, insn->len,
+-				       arch_nop_insn(insn->len));
+-			insn->type = INSN_NOP;
+-		}
+-
+-		if (mcount && !strcmp(insn->call_dest->name, "__fentry__")) {
+-			if (reloc) {
+-				reloc->type = R_NONE;
+-				elf_write_reloc(file->elf, reloc);
+-			}
+-
+-			elf_write_insn(file->elf, insn->sec,
+-				       insn->offset, insn->len,
+-				       arch_nop_insn(insn->len));
+-
+-			insn->type = INSN_NOP;
+-
+-			list_add_tail(&insn->mcount_loc_node,
+-				      &file->mcount_loc_list);
+-		}
+-
+-		/*
+-		 * Whatever stack impact regular CALLs have, should be undone
+-		 * by the RETURN of the called function.
+-		 *
+-		 * Annotated intra-function calls retain the stack_ops but
+-		 * are converted to JUMP, see read_intra_function_calls().
+-		 */
+-		remove_insn_ops(insn);
++			add_call_dest(file, insn, reloc->sym, false);
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
+index 062bb6e9b8658..478e054fcdf71 100644
+--- a/tools/objtool/include/objtool/arch.h
++++ b/tools/objtool/include/objtool/arch.h
+@@ -82,6 +82,7 @@ unsigned long arch_jump_destination(struct instruction *insn);
+ unsigned long arch_dest_reloc_offset(int addend);
+ 
+ const char *arch_nop_insn(int len);
++const char *arch_ret_insn(int len);
+ 
+ int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg);
+ 
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 17a9844e4fbf8..53d6c6f449c9c 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -564,7 +564,7 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 		synthesize_bpf_prog_name(name, KSYM_NAME_LEN, info, btf, 0);
+ 		fprintf(fp, "# bpf_prog_info %u: %s addr 0x%llx size %u\n",
+ 			info->id, name, prog_addrs[0], prog_lens[0]);
+-		return;
++		goto out;
+ 	}
+ 
+ 	fprintf(fp, "# bpf_prog_info %u:\n", info->id);
+@@ -574,4 +574,6 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 		fprintf(fp, "# \tsub_prog %u: %s addr 0x%llx size %u\n",
+ 			i, name, prog_addrs[i], prog_lens[i]);
+ 	}
++out:
++	btf__free(btf);
+ }
+diff --git a/tools/perf/util/intel-pt-decoder/Build b/tools/perf/util/intel-pt-decoder/Build
+index bc629359826fb..b41c2e9c6f887 100644
+--- a/tools/perf/util/intel-pt-decoder/Build
++++ b/tools/perf/util/intel-pt-decoder/Build
+@@ -18,3 +18,5 @@ CFLAGS_intel-pt-insn-decoder.o += -I$(OUTPUT)util/intel-pt-decoder
+ ifeq ($(CC_NO_CLANG), 1)
+   CFLAGS_intel-pt-insn-decoder.o += -Wno-override-init
+ endif
++
++CFLAGS_intel-pt-insn-decoder.o += -Wno-packed
+diff --git a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
+index 6490e9673002f..7daaaab13681b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
++++ b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
+@@ -107,8 +107,8 @@ void test_perf_buffer(void)
+ 		  "expect %d, seen %d\n", nr_on_cpus, CPU_COUNT(&cpu_seen)))
+ 		goto out_free_pb;
+ 
+-	if (CHECK(perf_buffer__buffer_cnt(pb) != nr_cpus, "buf_cnt",
+-		  "got %zu, expected %d\n", perf_buffer__buffer_cnt(pb), nr_cpus))
++	if (CHECK(perf_buffer__buffer_cnt(pb) != nr_on_cpus, "buf_cnt",
++		  "got %zu, expected %d\n", perf_buffer__buffer_cnt(pb), nr_on_cpus))
+ 		goto out_close;
+ 
+ 	for (i = 0; i < nr_cpus; i++) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+index aee41547e7f45..6db07401bc493 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
++++ b/tools/testing/selftests/bpf/prog_tests/sk_lookup.c
+@@ -598,7 +598,7 @@ close:
+ 
+ static void run_lookup_prog(const struct test *t)
+ {
+-	int server_fds[MAX_SERVERS] = { -1 };
++	int server_fds[] = { [0 ... MAX_SERVERS - 1] = -1 };
+ 	int client_fd, reuse_conn_fd = -1;
+ 	struct bpf_link *lookup_link;
+ 	int i, err;
+@@ -1053,7 +1053,7 @@ static void run_sk_assign(struct test_sk_lookup *skel,
+ 			  struct bpf_program *lookup_prog,
+ 			  const char *remote_ip, const char *local_ip)
+ {
+-	int server_fds[MAX_SERVERS] = { -1 };
++	int server_fds[] = { [0 ... MAX_SERVERS - 1] = -1 };
+ 	struct bpf_sk_lookup ctx;
+ 	__u64 server_cookie;
+ 	int i, err;
+diff --git a/tools/testing/selftests/bpf/prog_tests/test_ima.c b/tools/testing/selftests/bpf/prog_tests/test_ima.c
+index 0252f61d611a9..97d8a6f84f4ab 100644
+--- a/tools/testing/selftests/bpf/prog_tests/test_ima.c
++++ b/tools/testing/selftests/bpf/prog_tests/test_ima.c
+@@ -43,7 +43,7 @@ static int process_sample(void *ctx, void *data, size_t len)
+ void test_test_ima(void)
+ {
+ 	char measured_dir_template[] = "/tmp/ima_measuredXXXXXX";
+-	struct ring_buffer *ringbuf;
++	struct ring_buffer *ringbuf = NULL;
+ 	const char *measured_dir;
+ 	char cmd[256];
+ 
+@@ -85,5 +85,6 @@ close_clean:
+ 	err = system(cmd);
+ 	CHECK(err, "failed to run command", "%s, errno = %d\n", cmd, errno);
+ close_prog:
++	ring_buffer__free(ringbuf);
+ 	ima__destroy(skel);
+ }
+diff --git a/tools/testing/selftests/bpf/progs/strobemeta.h b/tools/testing/selftests/bpf/progs/strobemeta.h
+index 7de534f38c3f1..60c93aee2f4ad 100644
+--- a/tools/testing/selftests/bpf/progs/strobemeta.h
++++ b/tools/testing/selftests/bpf/progs/strobemeta.h
+@@ -358,7 +358,7 @@ static __always_inline uint64_t read_str_var(struct strobemeta_cfg *cfg,
+ 					     void *payload)
+ {
+ 	void *location;
+-	uint32_t len;
++	uint64_t len;
+ 
+ 	data->str_lens[idx] = 0;
+ 	location = calc_location(&cfg->str_locs[idx], tls_base);
+@@ -390,7 +390,7 @@ static __always_inline void *read_map_var(struct strobemeta_cfg *cfg,
+ 	struct strobe_map_descr* descr = &data->map_descrs[idx];
+ 	struct strobe_map_raw map;
+ 	void *location;
+-	uint32_t len;
++	uint64_t len;
+ 	int i;
+ 
+ 	descr->tag_len = 0; /* presume no tag is set */
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index bfbf2277b61a6..f381253902274 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -348,7 +348,7 @@ int extract_build_id(char *build_id, size_t size)
+ 
+ 	if (getline(&line, &len, fp) == -1)
+ 		goto err;
+-	fclose(fp);
++	pclose(fp);
+ 
+ 	if (len > size)
+ 		len = size;
+@@ -357,7 +357,7 @@ int extract_build_id(char *build_id, size_t size)
+ 	free(line);
+ 	return 0;
+ err:
+-	fclose(fp);
++	pclose(fp);
+ 	return -1;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+index 1538373157e3c..bedff7aa7023f 100755
+--- a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
++++ b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+@@ -2,11 +2,11 @@
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Test topology:
+-#     - - - - - - - - - - - - - - - - - - - - - - - - -
+-#    | veth1         veth2         veth3 |  ... init net
++#    - - - - - - - - - - - - - - - - - - -
++#    | veth1         veth2         veth3 |  ns0
+ #     - -| - - - - - - | - - - - - - | - -
+ #    ---------     ---------     ---------
+-#    | veth0 |     | veth0 |     | veth0 |  ...
++#    | veth0 |     | veth0 |     | veth0 |
+ #    ---------     ---------     ---------
+ #       ns1           ns2           ns3
+ #
+@@ -31,6 +31,7 @@ IFACES=""
+ DRV_MODE="xdpgeneric xdpdrv xdpegress"
+ PASS=0
+ FAIL=0
++LOG_DIR=$(mktemp -d)
+ 
+ test_pass()
+ {
+@@ -50,6 +51,7 @@ clean_up()
+ 		ip link del veth$i 2> /dev/null
+ 		ip netns del ns$i 2> /dev/null
+ 	done
++	ip netns del ns0 2> /dev/null
+ }
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+@@ -77,10 +79,12 @@ setup_ns()
+ 		mode="xdpdrv"
+ 	fi
+ 
++	ip netns add ns0
+ 	for i in $(seq $NUM); do
+ 	        ip netns add ns$i
+-	        ip link add veth$i type veth peer name veth0 netns ns$i
+-		ip link set veth$i up
++		ip -n ns$i link add veth0 index 2 type veth \
++			peer name veth$i netns ns0 index $((1 + $i))
++		ip -n ns0 link set veth$i up
+ 		ip -n ns$i link set veth0 up
+ 
+ 		ip -n ns$i addr add 192.0.2.$i/24 dev veth0
+@@ -91,7 +95,7 @@ setup_ns()
+ 			xdp_dummy.o sec xdp_dummy &> /dev/null || \
+ 			{ test_fail "Unable to load dummy xdp" && exit 1; }
+ 		IFACES="$IFACES veth$i"
+-		veth_mac[$i]=$(ip link show veth$i | awk '/link\/ether/ {print $2}')
++		veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}')
+ 	done
+ }
+ 
+@@ -100,17 +104,17 @@ do_egress_tests()
+ 	local mode=$1
+ 
+ 	# mac test
+-	ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-2_${mode}.log &
+-	ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> mac_ns1-3_${mode}.log &
++	ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
++	ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
+ 	sleep 0.5
+ 	ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
+ 	sleep 0.5
+-	pkill -9 tcpdump
++	pkill tcpdump
+ 
+ 	# mac check
+-	grep -q "${veth_mac[2]} > ff:ff:ff:ff:ff:ff" mac_ns1-2_${mode}.log && \
++	grep -q "${veth_mac[2]} > ff:ff:ff:ff:ff:ff" ${LOG_DIR}/mac_ns1-2_${mode}.log && \
+ 	       test_pass "$mode mac ns1-2" || test_fail "$mode mac ns1-2"
+-	grep -q "${veth_mac[3]} > ff:ff:ff:ff:ff:ff" mac_ns1-3_${mode}.log && \
++	grep -q "${veth_mac[3]} > ff:ff:ff:ff:ff:ff" ${LOG_DIR}/mac_ns1-3_${mode}.log && \
+ 		test_pass "$mode mac ns1-3" || test_fail "$mode mac ns1-3"
+ }
+ 
+@@ -121,46 +125,46 @@ do_ping_tests()
+ 	# ping6 test: echo request should be redirect back to itself, not others
+ 	ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
+ 
+-	ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ns1-1_${mode}.log &
+-	ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ns1-2_${mode}.log &
+-	ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ns1-3_${mode}.log &
++	ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
++	ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
++	ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
+ 	sleep 0.5
+ 	# ARP test
+-	ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
++	ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254
+ 	# IPv4 test
+ 	ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
+ 	# IPv6 test
+ 	ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
+ 	sleep 0.5
+-	pkill -9 tcpdump
++	pkill tcpdump
+ 
+ 	# All netns should receive the redirect arp requests
+-	[ $(grep -c "who-has 192.0.2.254" ns1-1_${mode}.log) -gt 4 ] && \
++	[ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \
+ 		test_pass "$mode arp(F_BROADCAST) ns1-1" || \
+ 		test_fail "$mode arp(F_BROADCAST) ns1-1"
+-	[ $(grep -c "who-has 192.0.2.254" ns1-2_${mode}.log) -le 4 ] && \
++	[ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-2_${mode}.log) -eq 2 ] && \
+ 		test_pass "$mode arp(F_BROADCAST) ns1-2" || \
+ 		test_fail "$mode arp(F_BROADCAST) ns1-2"
+-	[ $(grep -c "who-has 192.0.2.254" ns1-3_${mode}.log) -le 4 ] && \
++	[ $(grep -cF "who-has 192.0.2.254" ${LOG_DIR}/ns1-3_${mode}.log) -eq 2 ] && \
+ 		test_pass "$mode arp(F_BROADCAST) ns1-3" || \
+ 		test_fail "$mode arp(F_BROADCAST) ns1-3"
+ 
+ 	# ns1 should not receive the redirect echo request, others should
+-	[ $(grep -c "ICMP echo request" ns1-1_${mode}.log) -eq 4 ] && \
++	[ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \
+ 		test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1" || \
+ 		test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1"
+-	[ $(grep -c "ICMP echo request" ns1-2_${mode}.log) -eq 4 ] && \
++	[ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-2_${mode}.log) -eq 4 ] && \
+ 		test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2" || \
+ 		test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2"
+-	[ $(grep -c "ICMP echo request" ns1-3_${mode}.log) -eq 4 ] && \
++	[ $(grep -c "ICMP echo request" ${LOG_DIR}/ns1-3_${mode}.log) -eq 4 ] && \
+ 		test_pass "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3" || \
+ 		test_fail "$mode IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3"
+ 
+ 	# ns1 should receive the echo request, ns2 should not
+-	[ $(grep -c "ICMP6, echo request" ns1-1_${mode}.log) -eq 4 ] && \
++	[ $(grep -c "ICMP6, echo request" ${LOG_DIR}/ns1-1_${mode}.log) -eq 4 ] && \
+ 		test_pass "$mode IPv6 (no flags) ns1-1" || \
+ 		test_fail "$mode IPv6 (no flags) ns1-1"
+-	[ $(grep -c "ICMP6, echo request" ns1-2_${mode}.log) -eq 0 ] && \
++	[ $(grep -c "ICMP6, echo request" ${LOG_DIR}/ns1-2_${mode}.log) -eq 0 ] && \
+ 		test_pass "$mode IPv6 (no flags) ns1-2" || \
+ 		test_fail "$mode IPv6 (no flags) ns1-2"
+ }
+@@ -176,9 +180,13 @@ do_tests()
+ 		xdpgeneric) drv_p="-S";;
+ 	esac
+ 
+-	./xdp_redirect_multi $drv_p $IFACES &> xdp_redirect_${mode}.log &
++	ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
+ 	xdp_pid=$!
+ 	sleep 1
++	if ! ps -p $xdp_pid > /dev/null; then
++		test_fail "$mode xdp_redirect_multi start failed"
++		return 1
++	fi
+ 
+ 	if [ "$mode" = "xdpegress" ]; then
+ 		do_egress_tests $mode
+@@ -189,16 +197,16 @@ do_tests()
+ 	kill $xdp_pid
+ }
+ 
+-trap clean_up 0 2 3 6 9
++trap clean_up EXIT
+ 
+ check_env
+-rm -f xdp_redirect_*.log ns*.log mac_ns*.log
+ 
+ for mode in ${DRV_MODE}; do
+ 	setup_ns $mode
+ 	do_tests $mode
+ 	clean_up
+ done
++rm -rf ${LOG_DIR}
+ 
+ echo "Summary: PASS $PASS, FAIL $FAIL"
+ [ $FAIL -eq 0 ] && exit 0 || exit 1
+diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
+index 1b1c798e92489..1b138cd2b187d 100644
+--- a/tools/testing/selftests/bpf/verifier/array_access.c
++++ b/tools/testing/selftests/bpf/verifier/array_access.c
+@@ -186,7 +186,7 @@
+ 	},
+ 	.fixup_map_hash_48b = { 3 },
+ 	.errstr_unpriv = "R0 leaks addr",
+-	.errstr = "R0 unbounded memory access",
++	.errstr = "invalid access to map value, value_size=48 off=44 size=8",
+ 	.result_unpriv = REJECT,
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+diff --git a/tools/testing/selftests/bpf/xdp_redirect_multi.c b/tools/testing/selftests/bpf/xdp_redirect_multi.c
+index 3696a8f32c235..f5ffba341c174 100644
+--- a/tools/testing/selftests/bpf/xdp_redirect_multi.c
++++ b/tools/testing/selftests/bpf/xdp_redirect_multi.c
+@@ -129,7 +129,7 @@ int main(int argc, char **argv)
+ 		goto err_out;
+ 	}
+ 
+-	printf("Get interfaces");
++	printf("Get interfaces:");
+ 	for (i = 0; i < MAX_IFACE_NUM && argv[optind + i]; i++) {
+ 		ifaces[i] = if_nametoindex(argv[optind + i]);
+ 		if (!ifaces[i])
+@@ -139,7 +139,7 @@ int main(int argc, char **argv)
+ 			goto err_out;
+ 		}
+ 		if (ifaces[i] > MAX_INDEX_NUM) {
+-			printf("Interface index to large\n");
++			printf(" interface index too large\n");
+ 			goto err_out;
+ 		}
+ 		printf(" %d", ifaces[i]);
+diff --git a/tools/testing/selftests/core/close_range_test.c b/tools/testing/selftests/core/close_range_test.c
+index 73eb29c916d1b..aa7d13d91963f 100644
+--- a/tools/testing/selftests/core/close_range_test.c
++++ b/tools/testing/selftests/core/close_range_test.c
+@@ -54,7 +54,7 @@ static inline int sys_close_range(unsigned int fd, unsigned int max_fd,
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+ #endif
+ 
+-TEST(close_range)
++TEST(core_close_range)
+ {
+ 	int i, ret;
+ 	int open_fds[101];
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
+index 2ac98d70d02bd..161eba7cd1289 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
+@@ -54,6 +54,18 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
+ 	seg->base = base;
+ }
+ 
++/*
++ * Avoid using memset to clear the vmcb, since libc may not be
++ * available in L1 (and, even if it is, features that libc memset may
++ * want to use, like AVX, may not be enabled).
++ */
++static void clear_vmcb(struct vmcb *vmcb)
++{
++	int n = sizeof(*vmcb) / sizeof(u32);
++
++	asm volatile ("rep stosl" : "+c"(n), "+D"(vmcb) : "a"(0) : "memory");
++}
++
+ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp)
+ {
+ 	struct vmcb *vmcb = svm->vmcb;
+@@ -70,7 +82,7 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
+ 	wrmsr(MSR_EFER, efer | EFER_SVME);
+ 	wrmsr(MSR_VM_HSAVE_PA, svm->save_area_gpa);
+ 
+-	memset(vmcb, 0, sizeof(*vmcb));
++	clear_vmcb(vmcb);
+ 	asm volatile ("vmsave %0\n\t" : : "a" (vmcb_gpa) : "memory");
+ 	vmcb_set_seg(&save->es, get_es(), 0, -1U, data_seg_attr);
+ 	vmcb_set_seg(&save->cs, get_cs(), 0, -1U, code_seg_attr);
+diff --git a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+index 8039e1eff9388..9f55ccd169a13 100644
+--- a/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
++++ b/tools/testing/selftests/kvm/x86_64/mmio_warning_test.c
+@@ -84,7 +84,7 @@ int get_warnings_count(void)
+ 	f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
+ 	if (fscanf(f, "%d", &warnings) < 1)
+ 		warnings = 0;
+-	fclose(f);
++	pclose(f);
+ 
+ 	return warnings;
+ }
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 79c9eb0034d58..a9b98d88df687 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -12,7 +12,7 @@ TEST_PROGS += udpgro_bench.sh udpgro.sh test_vxlan_under_vrf.sh reuseport_addr_a
+ TEST_PROGS += test_vxlan_fdb_changelink.sh so_txtime.sh ipv6_flowlabel.sh
+ TEST_PROGS += tcp_fastopen_backup_key.sh fcnal-test.sh l2tp.sh traceroute.sh
+ TEST_PROGS += fin_ack_lat.sh fib_nexthop_multiprefix.sh fib_nexthops.sh
+-TEST_PROGS += altnames.sh icmp_redirect.sh ip6_gre_headroom.sh
++TEST_PROGS += altnames.sh icmp.sh icmp_redirect.sh ip6_gre_headroom.sh
+ TEST_PROGS += route_localnet.sh
+ TEST_PROGS += reuseaddr_ports_exhausted.sh
+ TEST_PROGS += txtimestamp.sh
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index a8ad92850e630..8acc4f2a20071 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -436,10 +436,13 @@ cleanup()
+ 		ip -netns ${NSA} link set dev ${NSA_DEV} down
+ 		ip -netns ${NSA} link del dev ${NSA_DEV}
+ 
++		ip netns pids ${NSA} | xargs kill 2>/dev/null
+ 		ip netns del ${NSA}
+ 	fi
+ 
++	ip netns pids ${NSB} | xargs kill 2>/dev/null
+ 	ip netns del ${NSB}
++	ip netns pids ${NSC} | xargs kill 2>/dev/null
+ 	ip netns del ${NSC} >/dev/null 2>&1
+ }
+ 
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index 0d293391e9a44..b5a69ad191b07 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -2078,6 +2078,7 @@ basic_res()
+ 		"id 101 index 0 nhid 2 id 101 index 1 nhid 2 id 101 index 2 nhid 1 id 101 index 3 nhid 1"
+ 	log_test $? 0 "Dump all nexthop buckets in a group"
+ 
++	sleep 0.1
+ 	(( $($IP -j nexthop bucket list id 101 |
+ 	     jq '[.[] | select(.bucket.idle_time > 0 and
+ 	                       .bucket.idle_time < 2)] | length') == 4 ))
+diff --git a/tools/testing/selftests/net/forwarding/bridge_igmp.sh b/tools/testing/selftests/net/forwarding/bridge_igmp.sh
+index 675eff45b0371..1162836f8f329 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_igmp.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_igmp.sh
+@@ -482,10 +482,15 @@ v3exc_timeout_test()
+ 	local X=("192.0.2.20" "192.0.2.30")
+ 
+ 	# GMI should be 3 seconds
+-	ip link set dev br0 type bridge mcast_query_interval 100 mcast_query_response_interval 100
++	ip link set dev br0 type bridge mcast_query_interval 100 \
++					mcast_query_response_interval 100 \
++					mcast_membership_interval 300
+ 
+ 	v3exclude_prepare $h1 $ALL_MAC $ALL_GROUP
+-	ip link set dev br0 type bridge mcast_query_interval 500 mcast_query_response_interval 500
++	ip link set dev br0 type bridge mcast_query_interval 500 \
++					mcast_query_response_interval 500 \
++					mcast_membership_interval 1500
++
+ 	$MZ $h1 -c 1 -b $ALL_MAC -B $ALL_GROUP -t ip "proto=2,p=$MZPKT_ALLOW2" -q
+ 	sleep 3
+ 	bridge -j -d -s mdb show dev br0 \
+@@ -517,7 +522,8 @@ v3exc_timeout_test()
+ 	log_test "IGMPv3 group $TEST_GROUP exclude timeout"
+ 
+ 	ip link set dev br0 type bridge mcast_query_interval 12500 \
+-					mcast_query_response_interval 1000
++					mcast_query_response_interval 1000 \
++					mcast_membership_interval 26000
+ 
+ 	v3cleanup $swp1 $TEST_GROUP
+ }
+diff --git a/tools/testing/selftests/net/forwarding/bridge_mld.sh b/tools/testing/selftests/net/forwarding/bridge_mld.sh
+index ffdcfa87ca2ba..e2b9ff773c6b6 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_mld.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_mld.sh
+@@ -479,10 +479,15 @@ mldv2exc_timeout_test()
+ 	local X=("2001:db8:1::20" "2001:db8:1::30")
+ 
+ 	# GMI should be 3 seconds
+-	ip link set dev br0 type bridge mcast_query_interval 100 mcast_query_response_interval 100
++	ip link set dev br0 type bridge mcast_query_interval 100 \
++					mcast_query_response_interval 100 \
++					mcast_membership_interval 300
+ 
+ 	mldv2exclude_prepare $h1
+-	ip link set dev br0 type bridge mcast_query_interval 500 mcast_query_response_interval 500
++	ip link set dev br0 type bridge mcast_query_interval 500 \
++					mcast_query_response_interval 500 \
++					mcast_membership_interval 1500
++
+ 	$MZ $h1 -c 1 $MZPKT_ALLOW2 -q
+ 	sleep 3
+ 	bridge -j -d -s mdb show dev br0 \
+@@ -514,7 +519,8 @@ mldv2exc_timeout_test()
+ 	log_test "MLDv2 group $TEST_GROUP exclude timeout"
+ 
+ 	ip link set dev br0 type bridge mcast_query_interval 12500 \
+-					mcast_query_response_interval 1000
++					mcast_query_response_interval 1000 \
++					mcast_membership_interval 26000
+ 
+ 	mldv2cleanup $swp1
+ }
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index 76a24052f4b47..6a193425c367f 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -293,19 +293,17 @@ static void usage(const char *filepath)
+ 
+ static void parse_opts(int argc, char **argv)
+ {
++	const char *bind_addr = NULL;
+ 	int c;
+ 
+-	/* bind to any by default */
+-	setup_sockaddr(PF_INET6, "::", &cfg_bind_addr);
+ 	while ((c = getopt(argc, argv, "4b:C:Gl:n:p:rR:S:tv")) != -1) {
+ 		switch (c) {
+ 		case '4':
+ 			cfg_family = PF_INET;
+ 			cfg_alen = sizeof(struct sockaddr_in);
+-			setup_sockaddr(PF_INET, "0.0.0.0", &cfg_bind_addr);
+ 			break;
+ 		case 'b':
+-			setup_sockaddr(cfg_family, optarg, &cfg_bind_addr);
++			bind_addr = optarg;
+ 			break;
+ 		case 'C':
+ 			cfg_connect_timeout_ms = strtoul(optarg, NULL, 0);
+@@ -341,6 +339,11 @@ static void parse_opts(int argc, char **argv)
+ 		}
+ 	}
+ 
++	if (!bind_addr)
++		bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0";
++
++	setup_sockaddr(cfg_family, bind_addr, &cfg_bind_addr);
++
+ 	if (optind != argc)
+ 		usage(argv[0]);
+ 
+diff --git a/tools/testing/selftests/sched/cs_prctl_test.c b/tools/testing/selftests/sched/cs_prctl_test.c
+index 63fe6521c56d9..1829383715c66 100644
+--- a/tools/testing/selftests/sched/cs_prctl_test.c
++++ b/tools/testing/selftests/sched/cs_prctl_test.c
+@@ -64,6 +64,17 @@ enum pid_type {PIDTYPE_PID = 0, PIDTYPE_TGID, PIDTYPE_PGID};
+ 
+ const int THREAD_CLONE_FLAGS = CLONE_THREAD | CLONE_SIGHAND | CLONE_FS | CLONE_VM | CLONE_FILES;
+ 
++struct child_args {
++	int num_threads;
++	int pfd[2];
++	int cpid;
++	int thr_tids[MAX_THREADS];
++};
++
++static struct child_args procs[MAX_PROCESSES];
++static int num_processes = 2;
++static int need_cleanup = 0;
++
+ static int _prctl(int option, unsigned long arg2, unsigned long arg3, unsigned long arg4,
+ 		  unsigned long arg5)
+ {
+@@ -80,8 +91,14 @@ static int _prctl(int option, unsigned long arg2, unsigned long arg3, unsigned l
+ #define handle_error(msg) __handle_error(__FILE__, __LINE__, msg)
+ static void __handle_error(char *fn, int ln, char *msg)
+ {
++	int pidx;
+ 	printf("(%s:%d) - ", fn, ln);
+ 	perror(msg);
++	if (need_cleanup) {
++		for (pidx = 0; pidx < num_processes; ++pidx)
++			kill(procs[pidx].cpid, 15);
++		need_cleanup = 0;
++	}
+ 	exit(EXIT_FAILURE);
+ }
+ 
+@@ -108,13 +125,6 @@ static unsigned long get_cs_cookie(int pid)
+ 	return cookie;
+ }
+ 
+-struct child_args {
+-	int num_threads;
+-	int pfd[2];
+-	int cpid;
+-	int thr_tids[MAX_THREADS];
+-};
+-
+ static int child_func_thread(void __attribute__((unused))*arg)
+ {
+ 	while (1)
+@@ -214,10 +224,7 @@ void _validate(int line, int val, char *msg)
+ 
+ int main(int argc, char *argv[])
+ {
+-	struct child_args procs[MAX_PROCESSES];
+-
+ 	int keypress = 0;
+-	int num_processes = 2;
+ 	int num_threads = 3;
+ 	int delay = 0;
+ 	int res = 0;
+@@ -264,6 +271,7 @@ int main(int argc, char *argv[])
+ 
+ 	printf("\n## Create a thread/process/process group hiearchy\n");
+ 	create_processes(num_processes, num_threads, procs);
++	need_cleanup = 1;
+ 	disp_processes(num_processes, procs);
+ 	validate(get_cs_cookie(0) == 0);
+ 
+diff --git a/tools/testing/selftests/vm/split_huge_page_test.c b/tools/testing/selftests/vm/split_huge_page_test.c
+index 1af16d2c2a0ac..52497b7b9f1db 100644
+--- a/tools/testing/selftests/vm/split_huge_page_test.c
++++ b/tools/testing/selftests/vm/split_huge_page_test.c
+@@ -341,7 +341,7 @@ void split_file_backed_thp(void)
+ 	}
+ 
+ 	/* write something to the file, so a file-backed THP can be allocated */
+-	num_written = write(fd, tmpfs_loc, sizeof(tmpfs_loc));
++	num_written = write(fd, tmpfs_loc, strlen(tmpfs_loc) + 1);
+ 	close(fd);
+ 
+ 	if (num_written < 1) {
+diff --git a/tools/testing/selftests/x86/iopl.c b/tools/testing/selftests/x86/iopl.c
+index bab2f6e06b63d..7e3e09c1abac6 100644
+--- a/tools/testing/selftests/x86/iopl.c
++++ b/tools/testing/selftests/x86/iopl.c
+@@ -85,48 +85,88 @@ static void expect_gp_outb(unsigned short port)
+ 	printf("[OK]\toutb to 0x%02hx failed\n", port);
+ }
+ 
+-static bool try_cli(void)
++#define RET_FAULTED	0
++#define RET_FAIL	1
++#define RET_EMUL	2
++
++static int try_cli(void)
+ {
++	unsigned long flags;
++
+ 	sethandler(SIGSEGV, sigsegv, SA_RESETHAND);
+ 	if (sigsetjmp(jmpbuf, 1) != 0) {
+-		return false;
++		return RET_FAULTED;
+ 	} else {
+-		asm volatile ("cli");
+-		return true;
++		asm volatile("cli; pushf; pop %[flags]"
++				: [flags] "=rm" (flags));
++
++		/* X86_FLAGS_IF */
++		if (!(flags & (1 << 9)))
++			return RET_FAIL;
++		else
++			return RET_EMUL;
+ 	}
+ 	clearhandler(SIGSEGV);
+ }
+ 
+-static bool try_sti(void)
++static int try_sti(bool irqs_off)
+ {
++	unsigned long flags;
++
+ 	sethandler(SIGSEGV, sigsegv, SA_RESETHAND);
+ 	if (sigsetjmp(jmpbuf, 1) != 0) {
+-		return false;
++		return RET_FAULTED;
+ 	} else {
+-		asm volatile ("sti");
+-		return true;
++		asm volatile("sti; pushf; pop %[flags]"
++				: [flags] "=rm" (flags));
++
++		/* X86_FLAGS_IF */
++		if (irqs_off && (flags & (1 << 9)))
++			return RET_FAIL;
++		else
++			return RET_EMUL;
+ 	}
+ 	clearhandler(SIGSEGV);
+ }
+ 
+-static void expect_gp_sti(void)
++static void expect_gp_sti(bool irqs_off)
+ {
+-	if (try_sti()) {
++	int ret = try_sti(irqs_off);
++
++	switch (ret) {
++	case RET_FAULTED:
++		printf("[OK]\tSTI faulted\n");
++		break;
++	case RET_EMUL:
++		printf("[OK]\tSTI NOPped\n");
++		break;
++	default:
+ 		printf("[FAIL]\tSTI worked\n");
+ 		nerrs++;
+-	} else {
+-		printf("[OK]\tSTI faulted\n");
+ 	}
+ }
+ 
+-static void expect_gp_cli(void)
++/*
++ * Returns whether it managed to disable interrupts.
++ */
++static bool test_cli(void)
+ {
+-	if (try_cli()) {
++	int ret = try_cli();
++
++	switch (ret) {
++	case RET_FAULTED:
++		printf("[OK]\tCLI faulted\n");
++		break;
++	case RET_EMUL:
++		printf("[OK]\tCLI NOPped\n");
++		break;
++	default:
+ 		printf("[FAIL]\tCLI worked\n");
+ 		nerrs++;
+-	} else {
+-		printf("[OK]\tCLI faulted\n");
++		return true;
+ 	}
++
++	return false;
+ }
+ 
+ int main(void)
+@@ -152,8 +192,7 @@ int main(void)
+ 	}
+ 
+ 	/* Make sure that CLI/STI are blocked even with IOPL level 3 */
+-	expect_gp_cli();
+-	expect_gp_sti();
++	expect_gp_sti(test_cli());
+ 	expect_ok_outb(0x80);
+ 
+ 	/* Establish an I/O bitmap to test the restore */
+@@ -204,8 +243,7 @@ int main(void)
+ 	printf("[RUN]\tparent: write to 0x80 (should fail)\n");
+ 
+ 	expect_gp_outb(0x80);
+-	expect_gp_cli();
+-	expect_gp_sti();
++	expect_gp_sti(test_cli());
+ 
+ 	/* Test the capability checks. */
+ 	printf("\tiopl(3)\n");
+diff --git a/tools/tracing/latency/latency-collector.c b/tools/tracing/latency/latency-collector.c
+index 3a2e6bb781a8c..59a7f2346eab4 100644
+--- a/tools/tracing/latency/latency-collector.c
++++ b/tools/tracing/latency/latency-collector.c
+@@ -1538,7 +1538,7 @@ static void tracing_loop(void)
+ 				mutex_lock(&print_mtx);
+ 				check_signals();
+ 				write_or_die(fd_stdout, queue_full_warning,
+-					     sizeof(queue_full_warning));
++					     strlen(queue_full_warning));
+ 				mutex_unlock(&print_mtx);
+ 			}
+ 			modified--;
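
A standalone sketch, separate from the patch above, of the sizeof-vs-strlen
pitfall that two of its hunks fix (split_huge_page_test.c and
latency-collector.c): applied to a pointer, sizeof yields the size of the
pointer itself, not the length of the string it points to, so on LP64
targets the old code passed 8 no matter what the message was. The
declarations below are illustrative assumptions; the hunks do not show how
tmpfs_loc or queue_full_warning are actually declared.

#include <stdio.h>
#include <string.h>

static const char *warning = "queue full\n";	/* a pointer, as queue_full_warning appears to be */
static const char array[]  = "queue full\n";	/* an array: here sizeof does cover the contents */

int main(void)
{
	printf("sizeof pointer: %zu\n", sizeof(warning));	/* 8 on LP64: size of the pointer */
	printf("strlen pointer: %zu\n", strlen(warning));	/* 11: the actual string length */
	printf("sizeof array:   %zu\n", sizeof(array));	/* 12: string contents plus NUL */
	return 0;
}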

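In the same spirit, a minimal sketch of the initializer semantics behind the
server_fds change in the sk_lookup selftest above: "= { -1 }" sets only the
first element to -1 and zero-fills the rest, so later code that treats any
value other than -1 as an open descriptor misbehaves; the fix relies on the
GNU range-designator extension (gcc/clang) to start every slot at -1.
MAX_SERVERS is shrunk to 4 here purely for illustration.

#include <stdio.h>

#define MAX_SERVERS 4

int main(void)
{
	/* Only element 0 is set to -1; the remaining three are zero-initialized. */
	int partial[MAX_SERVERS] = { -1 };

	/* GNU range designator: every element starts at -1, as the fix intends. */
	int full[] = { [0 ... MAX_SERVERS - 1] = -1 };

	for (int i = 0; i < MAX_SERVERS; i++)
		printf("partial[%d] = %2d   full[%d] = %2d\n",
		       i, partial[i], i, full[i]);
	return 0;
}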

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-18 15:32 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-18 15:32 UTC (permalink / raw
  To: gentoo-commits

commit:     d7e7f2ad6ee719ecb585e302141124540d80f29c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 18 15:32:32 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 18 15:32:32 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d7e7f2ad

Linux patch 5.14.20

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1019_linux-5.14.20.patch | 947 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 951 insertions(+)

diff --git a/0000_README b/0000_README
index 534f161e..6e5582fd 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1018_linux-5.14.19.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.19
 
+Patch:  1019_linux-5.14.20.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-5.14.20.patch b/1019_linux-5.14.20.patch
new file mode 100644
index 00000000..2762b0a9
--- /dev/null
+++ b/1019_linux-5.14.20.patch
@@ -0,0 +1,947 @@
+diff --git a/Makefile b/Makefile
+index f4773aee95c4e..0e14a3a30d073 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h
+index 090499c99c1c1..6100431da07a3 100644
+--- a/arch/alpha/include/asm/processor.h
++++ b/arch/alpha/include/asm/processor.h
+@@ -42,7 +42,7 @@ extern void start_thread(struct pt_regs *, unsigned long, unsigned long);
+ struct task_struct;
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
+ 
+diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
+index 5f8527081da92..a5123ea426ce5 100644
+--- a/arch/alpha/kernel/process.c
++++ b/arch/alpha/kernel/process.c
+@@ -376,11 +376,12 @@ thread_saved_pc(struct task_struct *t)
+ }
+ 
+ unsigned long
+-__get_wchan(struct task_struct *p)
++get_wchan(struct task_struct *p)
+ {
+ 	unsigned long schedule_frame;
+ 	unsigned long pc;
+-
++	if (!p || p == current || task_is_running(p))
++		return 0;
+ 	/*
+ 	 * This one depends on the frame size of schedule().  Do a
+ 	 * "disass schedule" in gdb to find the frame size.  Also, the
+diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
+index 04a5268e592b9..e4031ecd3c8c1 100644
+--- a/arch/arc/include/asm/processor.h
++++ b/arch/arc/include/asm/processor.h
+@@ -70,7 +70,7 @@ struct task_struct;
+ extern void start_thread(struct pt_regs * regs, unsigned long pc,
+ 			 unsigned long usp);
+ 
+-extern unsigned int __get_wchan(struct task_struct *p);
++extern unsigned int get_wchan(struct task_struct *p);
+ 
+ #endif /* !__ASSEMBLY__ */
+ 
+diff --git a/arch/arc/kernel/stacktrace.c b/arch/arc/kernel/stacktrace.c
+index db96cc8783891..1b9576d21e244 100644
+--- a/arch/arc/kernel/stacktrace.c
++++ b/arch/arc/kernel/stacktrace.c
+@@ -15,7 +15,7 @@
+  *      = specifics of data structs where trace is saved(CONFIG_STACKTRACE etc)
+  *
+  *  vineetg: March 2009
+- *  -Implemented correct versions of thread_saved_pc() and __get_wchan()
++ *  -Implemented correct versions of thread_saved_pc() and get_wchan()
+  *
+  *  rajeshwarr: 2008
+  *  -Initial implementation
+@@ -248,7 +248,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+  * Of course just returning schedule( ) would be pointless so unwind until
+  * the function is not in schedular code
+  */
+-unsigned int __get_wchan(struct task_struct *tsk)
++unsigned int get_wchan(struct task_struct *tsk)
+ {
+ 	return arc_unwind_core(tsk, NULL, __get_first_nonsched, NULL);
+ }
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 6af68edfa53ab..9e6b972863077 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -84,7 +84,7 @@ struct task_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define task_pt_regs(p) \
+ 	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index 261be96fa0c30..fc9e8b37eaa84 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -283,11 +283,13 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	struct stackframe frame;
+ 	unsigned long stack_page;
+ 	int count = 0;
++	if (!p || p == current || task_is_running(p))
++		return 0;
+ 
+ 	frame.fp = thread_saved_fp(p);
+ 	frame.sp = thread_saved_sp(p);
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 922355eb7eefa..b6517fd03d7b6 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -251,7 +251,7 @@ struct task_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ void set_task_sctlr_el1(u64 sctlr);
+ 
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 46995c972ff5f..c858b857c1ecf 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -544,11 +544,13 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+ 	return last;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	struct stackframe frame;
+ 	unsigned long stack_page, ret = 0;
+ 	int count = 0;
++	if (!p || p == current || task_is_running(p))
++		return 0;
+ 
+ 	stack_page = (unsigned long)try_get_task_stack(p);
+ 	if (!stack_page)
+diff --git a/arch/csky/include/asm/processor.h b/arch/csky/include/asm/processor.h
+index 817dd60ff152d..9e933021fe8e0 100644
+--- a/arch/csky/include/asm/processor.h
++++ b/arch/csky/include/asm/processor.h
+@@ -81,7 +81,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ 
+ extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->usp)
+diff --git a/arch/csky/kernel/stacktrace.c b/arch/csky/kernel/stacktrace.c
+index 9f78f5d215117..1b280ef080045 100644
+--- a/arch/csky/kernel/stacktrace.c
++++ b/arch/csky/kernel/stacktrace.c
+@@ -111,11 +111,12 @@ static bool save_wchan(unsigned long pc, void *arg)
+ 	return false;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *task)
++unsigned long get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ 
+-	walk_stackframe(task, NULL, save_wchan, &pc);
++	if (likely(task && task != current && !task_is_running(task)))
++		walk_stackframe(task, NULL, save_wchan, &pc);
+ 	return pc;
+ }
+ 
+diff --git a/arch/h8300/include/asm/processor.h b/arch/h8300/include/asm/processor.h
+index 141a23eb62b74..a060b41b2d31c 100644
+--- a/arch/h8300/include/asm/processor.h
++++ b/arch/h8300/include/asm/processor.h
+@@ -105,7 +105,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define	KSTK_EIP(tsk)	\
+ 	({			 \
+diff --git a/arch/h8300/kernel/process.c b/arch/h8300/kernel/process.c
+index 8833fa4f5d516..2ac27e4248a46 100644
+--- a/arch/h8300/kernel/process.c
++++ b/arch/h8300/kernel/process.c
+@@ -128,12 +128,15 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	stack_page = (unsigned long)p;
+ 	fp = ((struct pt_regs *)p->thread.ksp)->er6;
+ 	do {
+diff --git a/arch/hexagon/include/asm/processor.h b/arch/hexagon/include/asm/processor.h
+index 615f7e49968e6..9f0cc99420bee 100644
+--- a/arch/hexagon/include/asm/processor.h
++++ b/arch/hexagon/include/asm/processor.h
+@@ -64,7 +64,7 @@ struct thread_struct {
+ extern void release_thread(struct task_struct *dead_task);
+ 
+ /* Get wait channel for task P.  */
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ /*  The following stuff is pretty HEXAGON specific.  */
+ 
+diff --git a/arch/hexagon/kernel/process.c b/arch/hexagon/kernel/process.c
+index 232dfd8956aa2..6a6835fb42425 100644
+--- a/arch/hexagon/kernel/process.c
++++ b/arch/hexagon/kernel/process.c
+@@ -130,11 +130,13 @@ void flush_thread(void)
+  * is an identification of the point at which the scheduler
+  * was invoked by a blocked thread.
+  */
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
++	if (!p || p == current || task_is_running(p))
++		return 0;
+ 
+ 	stack_page = (unsigned long)task_stack_page(p);
+ 	fp = ((struct hexagon_switch_stack *)p->thread.switch_sp)->fp;
+diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
+index 45365c2ef5983..2d8bcdc27d7f8 100644
+--- a/arch/ia64/include/asm/processor.h
++++ b/arch/ia64/include/asm/processor.h
+@@ -330,7 +330,7 @@ struct task_struct;
+ #define release_thread(dead_task)
+ 
+ /* Get wait channel for task P.  */
+-extern unsigned long __get_wchan (struct task_struct *p);
++extern unsigned long get_wchan (struct task_struct *p);
+ 
+ /* Return instruction pointer of blocked task TSK.  */
+ #define KSTK_EIP(tsk)					\
+diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
+index 834df24a88f12..e56d63f4abf9d 100644
+--- a/arch/ia64/kernel/process.c
++++ b/arch/ia64/kernel/process.c
+@@ -523,12 +523,15 @@ exit_thread (struct task_struct *tsk)
+ }
+ 
+ unsigned long
+-__get_wchan (struct task_struct *p)
++get_wchan (struct task_struct *p)
+ {
+ 	struct unw_frame_info info;
+ 	unsigned long ip;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	/*
+ 	 * Note: p may not be a blocked task (it could be current or
+ 	 * another process running on some other CPU.  Rather than
+diff --git a/arch/m68k/include/asm/processor.h b/arch/m68k/include/asm/processor.h
+index bacec548cb3c6..3750819ac5a13 100644
+--- a/arch/m68k/include/asm/processor.h
++++ b/arch/m68k/include/asm/processor.h
+@@ -125,7 +125,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define	KSTK_EIP(tsk)	\
+     ({			\
+diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
+index d2357cba09abe..db49f90917112 100644
+--- a/arch/m68k/kernel/process.c
++++ b/arch/m68k/kernel/process.c
+@@ -263,11 +263,13 @@ int dump_fpu (struct pt_regs *regs, struct user_m68kfp_struct *fpu)
+ }
+ EXPORT_SYMBOL(dump_fpu);
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
++	if (!p || p == current || task_is_running(p))
++		return 0;
+ 
+ 	stack_page = (unsigned long)task_stack_page(p);
+ 	fp = ((struct switch_stack *)p->thread.ksp)->a6;
+diff --git a/arch/microblaze/include/asm/processor.h b/arch/microblaze/include/asm/processor.h
+index 7e9e92670df33..06c6e493590a2 100644
+--- a/arch/microblaze/include/asm/processor.h
++++ b/arch/microblaze/include/asm/processor.h
+@@ -68,7 +68,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ /* The size allocated for kernel stacks. This _must_ be a power of two! */
+ # define KERNEL_STACK_SIZE	0x2000
+diff --git a/arch/microblaze/kernel/process.c b/arch/microblaze/kernel/process.c
+index 5e2b91c1e8ced..62aa237180b67 100644
+--- a/arch/microblaze/kernel/process.c
++++ b/arch/microblaze/kernel/process.c
+@@ -112,7 +112,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ /* TBD (used by procfs) */
+ 	return 0;
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index 252ed38ce8c5a..0c3550c82b726 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -369,7 +369,7 @@ static inline void flush_thread(void)
+ {
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
+ 			 THREAD_SIZE - 32 - sizeof(struct pt_regs))
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 637e6207e3500..73c8e7990a973 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -511,7 +511,7 @@ static int __init frame_info_init(void)
+ 
+ 	/*
+ 	 * Without schedule() frame info, result given by
+-	 * thread_saved_pc() and __get_wchan() are not reliable.
++	 * thread_saved_pc() and get_wchan() are not reliable.
+ 	 */
+ 	if (schedule_mfi.pc_offset < 0)
+ 		printk("Can't analyze schedule() prologue at %p\n", schedule);
+@@ -652,9 +652,9 @@ unsigned long unwind_stack(struct task_struct *task, unsigned long *sp,
+ #endif
+ 
+ /*
+- * __get_wchan - a maintenance nightmare^W^Wpain in the ass ...
++ * get_wchan - a maintenance nightmare^W^Wpain in the ass ...
+  */
+-unsigned long __get_wchan(struct task_struct *task)
++unsigned long get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ #ifdef CONFIG_KALLSYMS
+@@ -662,6 +662,8 @@ unsigned long __get_wchan(struct task_struct *task)
+ 	unsigned long ra = 0;
+ #endif
+ 
++	if (!task || task == current || task_is_running(task))
++		goto out;
+ 	if (!task_stack_page(task))
+ 		goto out;
+ 
+diff --git a/arch/nds32/include/asm/processor.h b/arch/nds32/include/asm/processor.h
+index e6bfc74972bb3..b82369c7659d4 100644
+--- a/arch/nds32/include/asm/processor.h
++++ b/arch/nds32/include/asm/processor.h
+@@ -83,7 +83,7 @@ extern struct task_struct *last_task_used_math;
+ /* Prepare to copy thread state - unlazy all lazy status */
+ #define prepare_to_copy(tsk)	do { } while (0)
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define cpu_relax()			barrier()
+ 
+diff --git a/arch/nds32/kernel/process.c b/arch/nds32/kernel/process.c
+index 49fab9e39cbff..391895b54d13c 100644
+--- a/arch/nds32/kernel/process.c
++++ b/arch/nds32/kernel/process.c
+@@ -233,12 +233,15 @@ int dump_fpu(struct pt_regs *regs, elf_fpregset_t * fpu)
+ 
+ EXPORT_SYMBOL(dump_fpu);
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, lr;
+ 	unsigned long stack_start, stack_end;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	if (IS_ENABLED(CONFIG_FRAME_POINTER)) {
+ 		stack_start = (unsigned long)end_of_stack(p);
+ 		stack_end = (unsigned long)task_stack_page(p) + THREAD_SIZE;
+@@ -255,3 +258,5 @@ unsigned long __get_wchan(struct task_struct *p)
+ 	}
+ 	return 0;
+ }
++
++EXPORT_SYMBOL(get_wchan);
+diff --git a/arch/nios2/include/asm/processor.h b/arch/nios2/include/asm/processor.h
+index b8125dfbcad2d..94bcb86f679f5 100644
+--- a/arch/nios2/include/asm/processor.h
++++ b/arch/nios2/include/asm/processor.h
+@@ -69,7 +69,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #define task_pt_regs(p) \
+ 	((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
+diff --git a/arch/nios2/kernel/process.c b/arch/nios2/kernel/process.c
+index f8ea522a15880..9ff37ba2bb603 100644
+--- a/arch/nios2/kernel/process.c
++++ b/arch/nios2/kernel/process.c
+@@ -217,12 +217,15 @@ void dump(struct pt_regs *fp)
+ 	pr_emerg("\n\n");
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long fp, pc;
+ 	unsigned long stack_page;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	stack_page = (unsigned long)p;
+ 	fp = ((struct switch_stack *)p->thread.ksp)->fp;	/* ;dgt2 */
+ 	do {
+diff --git a/arch/openrisc/include/asm/processor.h b/arch/openrisc/include/asm/processor.h
+index aa1699c18add8..ad53b31848859 100644
+--- a/arch/openrisc/include/asm/processor.h
++++ b/arch/openrisc/include/asm/processor.h
+@@ -73,7 +73,7 @@ struct thread_struct {
+ 
+ void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
+ void release_thread(struct task_struct *);
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define cpu_relax()     barrier()
+ 
+diff --git a/arch/openrisc/kernel/process.c b/arch/openrisc/kernel/process.c
+index eeea6d54b198c..eb62429681fc8 100644
+--- a/arch/openrisc/kernel/process.c
++++ b/arch/openrisc/kernel/process.c
+@@ -265,7 +265,7 @@ void dump_elf_thread(elf_greg_t *dest, struct pt_regs* regs)
+ 	dest[35] = 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	/* TODO */
+ 
+diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
+index 5e5ceb5b9631f..b5fbcd2c17808 100644
+--- a/arch/parisc/include/asm/processor.h
++++ b/arch/parisc/include/asm/processor.h
+@@ -277,7 +277,7 @@ struct mm_struct;
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)	((tsk)->thread.regs.iaoq[0])
+ #define KSTK_ESP(tsk)	((tsk)->thread.regs.gr[30])
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 05e89d4fa911a..184ec3c1eae44 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -243,12 +243,15 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
+ }
+ 
+ unsigned long
+-__get_wchan(struct task_struct *p)
++get_wchan(struct task_struct *p)
+ {
+ 	struct unwind_frame_info info;
+ 	unsigned long ip;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	/*
+ 	 * These bracket the sleeping functions..
+ 	 */
+diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
+index e39bd0ff69f3a..f348e564f7dd5 100644
+--- a/arch/powerpc/include/asm/processor.h
++++ b/arch/powerpc/include/asm/processor.h
+@@ -300,7 +300,7 @@ struct thread_struct {
+ 
+ #define task_pt_regs(tsk)	((tsk)->thread.regs)
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
+ #define KSTK_ESP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->gpr[1]: 0)
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 247ef0b9bfa4e..185beb2905801 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -2111,11 +2111,14 @@ int validate_sp(unsigned long sp, struct task_struct *p,
+ 
+ EXPORT_SYMBOL(validate_sp);
+ 
+-static unsigned long ___get_wchan(struct task_struct *p)
++static unsigned long __get_wchan(struct task_struct *p)
+ {
+ 	unsigned long ip, sp;
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	sp = p->thread.ksp;
+ 	if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD))
+ 		return 0;
+@@ -2134,14 +2137,14 @@ static unsigned long ___get_wchan(struct task_struct *p)
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long ret;
+ 
+ 	if (!try_get_task_stack(p))
+ 		return 0;
+ 
+-	ret = ___get_wchan(p);
++	ret = __get_wchan(p);
+ 
+ 	put_task_stack(p);
+ 
+diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
+index 086821b44def1..021ed64ee608f 100644
+--- a/arch/riscv/include/asm/processor.h
++++ b/arch/riscv/include/asm/processor.h
+@@ -58,7 +58,7 @@ static inline void release_thread(struct task_struct *dead_task)
+ {
+ }
+ 
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ 
+ static inline void wait_for_interrupt(void)
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 0fcdc0233faca..315db3d0229bf 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -128,14 +128,16 @@ static bool save_wchan(void *arg, unsigned long pc)
+ 	return true;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *task)
++unsigned long get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc = 0;
+ 
+-	if (!try_get_task_stack(task))
+-		return 0;
+-	walk_stackframe(task, NULL, save_wchan, &pc);
+-	put_task_stack(task);
++	if (likely(task && task != current && !task_is_running(task))) {
++		if (!try_get_task_stack(task))
++			return 0;
++		walk_stackframe(task, NULL, save_wchan, &pc);
++		put_task_stack(task);
++	}
+ 	return pc;
+ }
+ 
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index f54c152bf2bf9..879b8e3f609cd 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -192,7 +192,7 @@ static inline void release_thread(struct task_struct *tsk) { }
+ void guarded_storage_release(struct task_struct *tsk);
+ void gs_load_bc_cb(struct pt_regs *regs);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ #define task_pt_regs(tsk) ((struct pt_regs *) \
+         (task_stack_page(tsk) + THREAD_SIZE) - 1)
+ #define KSTK_EIP(tsk)	(task_pt_regs(tsk)->psw.addr)
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index e5dd46b1bff8c..350e94d0cac23 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -181,12 +181,12 @@ void execve_tail(void)
+ 	asm volatile("sfpc %0" : : "d" (0));
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	struct unwind_state state;
+ 	unsigned long ip = 0;
+ 
+-	if (!task_stack_page(p))
++	if (!p || p == current || task_is_running(p) || !task_stack_page(p))
+ 		return 0;
+ 
+ 	if (!try_get_task_stack(p))
+diff --git a/arch/sh/include/asm/processor_32.h b/arch/sh/include/asm/processor_32.h
+index 45240ec6b85a4..aa92cc933889d 100644
+--- a/arch/sh/include/asm/processor_32.h
++++ b/arch/sh/include/asm/processor_32.h
+@@ -180,7 +180,7 @@ static inline void show_code(struct pt_regs *regs)
+ }
+ #endif
+ 
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->regs[15])
+diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c
+index 1c28e3cddb60d..717de05c81f49 100644
+--- a/arch/sh/kernel/process_32.c
++++ b/arch/sh/kernel/process_32.c
+@@ -182,10 +182,13 @@ __switch_to(struct task_struct *prev, struct task_struct *next)
+ 	return prev;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long pc;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	/*
+ 	 * The same comment as on the Alpha applies here, too ...
+ 	 */
+diff --git a/arch/sparc/include/asm/processor_32.h b/arch/sparc/include/asm/processor_32.h
+index 647bf0ac7beb9..b6242f7771e9e 100644
+--- a/arch/sparc/include/asm/processor_32.h
++++ b/arch/sparc/include/asm/processor_32.h
+@@ -89,7 +89,7 @@ static inline void start_thread(struct pt_regs * regs, unsigned long pc,
+ /* Free all resources held by a thread. */
+ #define release_thread(tsk)		do { } while(0)
+ 
+-unsigned long __get_wchan(struct task_struct *);
++unsigned long get_wchan(struct task_struct *);
+ 
+ #define task_pt_regs(tsk) ((tsk)->thread.kregs)
+ #define KSTK_EIP(tsk)  ((tsk)->thread.kregs->pc)
+diff --git a/arch/sparc/include/asm/processor_64.h b/arch/sparc/include/asm/processor_64.h
+index ae851e8fce4c9..5cf145f18f36b 100644
+--- a/arch/sparc/include/asm/processor_64.h
++++ b/arch/sparc/include/asm/processor_64.h
+@@ -183,7 +183,7 @@ do { \
+ /* Free all resources held by a thread. */
+ #define release_thread(tsk)		do { } while (0)
+ 
+-unsigned long __get_wchan(struct task_struct *task);
++unsigned long get_wchan(struct task_struct *task);
+ 
+ #define task_pt_regs(tsk) (task_thread_info(tsk)->kregs)
+ #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->tpc)
+diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
+index 29a2f396f8601..93983d6d431de 100644
+--- a/arch/sparc/kernel/process_32.c
++++ b/arch/sparc/kernel/process_32.c
+@@ -368,7 +368,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *task)
++unsigned long get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc, fp, bias = 0;
+ 	unsigned long task_base = (unsigned long) task;
+@@ -376,6 +376,9 @@ unsigned long __get_wchan(struct task_struct *task)
+ 	struct reg_window32 *rw;
+ 	int count = 0;
+ 
++	if (!task || task == current || task_is_running(task))
++		goto out;
++
+ 	fp = task_thread_info(task)->ksp + bias;
+ 	do {
+ 		/* Bogus frame pointer? */
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index fa8db86e561c7..d33c58a58d4ff 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -666,7 +666,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ 	return 0;
+ }
+ 
+-unsigned long __get_wchan(struct task_struct *task)
++unsigned long get_wchan(struct task_struct *task)
+ {
+ 	unsigned long pc, fp, bias = 0;
+ 	struct thread_info *tp;
+@@ -674,6 +674,9 @@ unsigned long __get_wchan(struct task_struct *task)
+         unsigned long ret = 0;
+ 	int count = 0; 
+ 
++	if (!task || task == current || task_is_running(task))
++		goto out;
++
+ 	tp = task_thread_info(task);
+ 	bias = STACK_BIAS;
+ 	fp = task_thread_info(task)->ksp + bias;
+diff --git a/arch/um/include/asm/processor-generic.h b/arch/um/include/asm/processor-generic.h
+index 579692a40a556..b5cf0ed116d9e 100644
+--- a/arch/um/include/asm/processor-generic.h
++++ b/arch/um/include/asm/processor-generic.h
+@@ -106,6 +106,6 @@ extern struct cpuinfo_um boot_cpu_data;
+ #define cache_line_size()	(boot_cpu_data.cache_alignment)
+ 
+ #define KSTK_REG(tsk, reg) get_thread_reg(reg, &tsk->thread.switch_buf)
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 82107373ac7e9..457a38db368b7 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -364,11 +364,14 @@ unsigned long arch_align_stack(unsigned long sp)
+ }
+ #endif
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long stack_page, sp, ip;
+ 	bool seen_sched = 0;
+ 
++	if ((p == NULL) || (p == current) || task_is_running(p))
++		return 0;
++
+ 	stack_page = (unsigned long) task_stack_page(p);
+ 	/* Bail if the process has no kernel stack for some reason */
+ 	if (stack_page == 0)
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 6f9ed2e800f21..d2c11378c7832 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -588,7 +588,7 @@ static inline void load_sp0(unsigned long sp0)
+ /* Free all resources held by a thread. */
+ extern void release_thread(struct task_struct *);
+ 
+-unsigned long __get_wchan(struct task_struct *p);
++unsigned long get_wchan(struct task_struct *p);
+ 
+ /*
+  * Generic CPUID function
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 2fe1810e922a9..f2f733bcb2b95 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -43,7 +43,6 @@
+ #include <asm/io_bitmap.h>
+ #include <asm/proto.h>
+ #include <asm/frame.h>
+-#include <asm/unwind.h>
+ 
+ #include "process.h"
+ 
+@@ -944,22 +943,60 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+  * because the task might wake up and we might look at a stack
+  * changing under us.
+  */
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+-	struct unwind_state state;
+-	unsigned long addr = 0;
++	unsigned long start, bottom, top, sp, fp, ip, ret = 0;
++	int count = 0;
+ 
+-	for (unwind_start(&state, p, NULL, NULL); !unwind_done(&state);
+-	     unwind_next_frame(&state)) {
+-		addr = unwind_get_return_address(&state);
+-		if (!addr)
+-			break;
+-		if (in_sched_functions(addr))
+-			continue;
+-		break;
+-	}
++	if (p == current || task_is_running(p))
++		return 0;
++
++	if (!try_get_task_stack(p))
++		return 0;
++
++	start = (unsigned long)task_stack_page(p);
++	if (!start)
++		goto out;
++
++	/*
++	 * Layout of the stack page:
++	 *
++	 * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
++	 * PADDING
++	 * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
++	 * stack
++	 * ----------- bottom = start
++	 *
++	 * The task's stack pointer points at the location where the
++	 * frame pointer is stored. The data on the stack is:
++	 * ... IP FP ... IP FP
++	 *
++	 * We need to read FP and IP, so we need to adjust the upper
++	 * bound by another unsigned long.
++	 */
++	top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
++	top -= 2 * sizeof(unsigned long);
++	bottom = start;
++
++	sp = READ_ONCE(p->thread.sp);
++	if (sp < bottom || sp > top)
++		goto out;
++
++	fp = READ_ONCE_NOCHECK(((struct inactive_task_frame *)sp)->bp);
++	do {
++		if (fp < bottom || fp > top)
++			goto out;
++		ip = READ_ONCE_NOCHECK(*(unsigned long *)(fp + sizeof(unsigned long)));
++		if (!in_sched_functions(ip)) {
++			ret = ip;
++			goto out;
++		}
++		fp = READ_ONCE_NOCHECK(*(unsigned long *)fp);
++	} while (count++ < 16 && !task_is_running(p));
+ 
+-	return addr;
++out:
++	put_task_stack(p);
++	return ret;
+ }
+ 
+ long do_arch_prctl_common(struct task_struct *task, int option,
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index ad15fbc572838..7f63aca6a0d34 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -215,7 +215,7 @@ struct mm_struct;
+ /* Free all resources held by a thread. */
+ #define release_thread(thread) do { } while(0)
+ 
+-extern unsigned long __get_wchan(struct task_struct *p);
++extern unsigned long get_wchan(struct task_struct *p);
+ 
+ #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
+ #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->areg[1])
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 47f933fed8700..0601653406123 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -298,12 +298,15 @@ int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
+  * These bracket the sleeping functions..
+  */
+ 
+-unsigned long __get_wchan(struct task_struct *p)
++unsigned long get_wchan(struct task_struct *p)
+ {
+ 	unsigned long sp, pc;
+ 	unsigned long stack_page = (unsigned long) task_stack_page(p);
+ 	int count = 0;
+ 
++	if (!p || p == current || task_is_running(p))
++		return 0;
++
+ 	sp = p->thread.sp;
+ 	pc = MAKE_PC_FROM_RA(p->thread.ra, p->thread.sp);
+ 
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 4ee118cf06971..8e10c7accdbcc 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2030,7 +2030,6 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ #endif /* CONFIG_SMP */
+ 
+ extern bool sched_task_on_rq(struct task_struct *p);
+-extern unsigned long get_wchan(struct task_struct *p);
+ 
+ /*
+  * In order to reduce various lock holder preemption latencies provide an
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5ea5b6d8b2a94..9289da7f0ac4a 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1960,25 +1960,6 @@ bool sched_task_on_rq(struct task_struct *p)
+ 	return task_on_rq_queued(p);
+ }
+ 
+-unsigned long get_wchan(struct task_struct *p)
+-{
+-	unsigned long ip = 0;
+-	unsigned int state;
+-
+-	if (!p || p == current)
+-		return 0;
+-
+-	/* Only get wchan if task is blocked and we can keep it that way. */
+-	raw_spin_lock_irq(&p->pi_lock);
+-	state = READ_ONCE(p->__state);
+-	smp_rmb(); /* see try_to_wake_up() */
+-	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
+-		ip = __get_wchan(p);
+-	raw_spin_unlock_irq(&p->pi_lock);
+-
+-	return ip;
+-}
+-
+ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
+ {
+ 	if (!(flags & ENQUEUE_NOCLOCK))


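As background on what this revert restores: get_wchan() reports the kernel symbol a blocked task is sleeping in, and with the revert applied each architecture once again performs its own "is this task actually blocked?" checks before walking the stack, instead of relying on the common wrapper removed from kernel/sched/core.c above. User space consumes the result through procfs. A minimal stand-alone sketch (not part of the patch; it assumes procfs is mounted and uses PID 1 purely as an illustration):

/* Hypothetical reader for /proc/<pid>/wchan; not part of the patch above. */
#include <stdio.h>

int main(void)
{
	char wchan[128];
	FILE *f = fopen("/proc/1/wchan", "r");	/* PID 1 is an arbitrary example */

	if (!f)
		return 1;
	/* The file holds a bare symbol name, or "0" for a running task. */
	if (fscanf(f, "%127s", wchan) == 1)
		printf("PID 1 is waiting in: %s\n", wchan);
	fclose(f);
	return 0;
}

A running task reports 0, which is why every per-arch implementation above bails out early for p == current or task_is_running(p).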
^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-19  0:18 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-19  0:18 UTC (permalink / raw
  To: gentoo-commits

commit:     17ecdd1bda59626c4d9ae901a5e75370cb8b3674
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 19 00:17:09 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 19 00:17:09 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17ecdd1b

Fix for BMQ(BitMap Queue) Scheduler

Bug: https://bugs.gentoo.org/824586

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch b/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
index 99adff7c..cf68d7ea 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
@@ -9510,3 +9510,14 @@ index adf7ef194005..11c8f36e281b 100644
  	};
  	struct wakeup_test_data *x = data;
  
+--- a/kernel/sched/alt_core.c	2021-11-18 18:58:14.290182408 -0500
++++ b/kernel/sched/alt_core.c	2021-11-18 18:58:54.870593883 -0500
+@@ -2762,7 +2762,7 @@ int sched_fork(unsigned long clone_flags
+ 	return 0;
+ }
+ 
+-void sched_post_fork(struct task_struct *p) {}
++void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs) {}
+ 
+ #ifdef CONFIG_SCHEDSTATS
+ 


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-21 20:38 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-21 20:38 UTC (permalink / raw
  To: gentoo-commits

commit:     b9cd64dcc70590a28789f605a8aebdc47983e1dd
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 21 20:38:38 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 21 20:38:38 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b9cd64dc

Linux patch 5.14.21

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1020_linux-5.14.21.patch | 375 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 379 insertions(+)

diff --git a/0000_README b/0000_README
index 6e5582fd..e8f44666 100644
--- a/0000_README
+++ b/0000_README
@@ -127,6 +127,10 @@ Patch:  1019_linux-5.14.20.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.14.20
 
+Patch:  1020_linux-5.14.21.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.14.21
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1020_linux-5.14.21.patch b/1020_linux-5.14.21.patch
new file mode 100644
index 00000000..d1393a87
--- /dev/null
+++ b/1020_linux-5.14.21.patch
@@ -0,0 +1,375 @@
+diff --git a/Makefile b/Makefile
+index 0e14a3a30d073..8b5ba7ee7543e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 14
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 2716e58b498bb..437c8d31f3907 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -1835,7 +1835,7 @@ syscall_restore:
+ 
+ 	/* Are we being ptraced? */
+ 	LDREG	TI_FLAGS-THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r19
+-	ldi	_TIF_SYSCALL_TRACE_MASK,%r2
++	ldi	_TIF_SINGLESTEP|_TIF_BLOCKSTEP,%r2
+ 	and,COND(=)	%r19,%r2,%r0
+ 	b,n	syscall_restore_rfi
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 285e865931436..599ed79581bf2 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3237,9 +3237,9 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
+ 			     "xor %1, %1\n"
+ 			     "2:\n"
+ 			     _ASM_EXTABLE_UA(1b, 2b)
+-			     : "+r" (st_preempted),
+-			       "+&r" (err)
+-			     : "m" (st->preempted));
++			     : "+q" (st_preempted),
++			       "+&r" (err),
++			       "+m" (st->preempted));
+ 		if (err)
+ 			goto out;
+ 
+diff --git a/drivers/acpi/glue.c b/drivers/acpi/glue.c
+index 3fd1713f1f626..fce3f3bba714a 100644
+--- a/drivers/acpi/glue.c
++++ b/drivers/acpi/glue.c
+@@ -363,28 +363,3 @@ int acpi_platform_notify(struct device *dev, enum kobject_action action)
+ 	}
+ 	return 0;
+ }
+-
+-int acpi_dev_turn_off_if_unused(struct device *dev, void *not_used)
+-{
+-	struct acpi_device *adev = to_acpi_device(dev);
+-
+-	/*
+-	 * Skip device objects with device IDs, because they may be in use even
+-	 * if they are not companions of any physical device objects.
+-	 */
+-	if (adev->pnp.type.hardware_id)
+-		return 0;
+-
+-	mutex_lock(&adev->physical_node_lock);
+-
+-	/*
+-	 * Device objects without device IDs are not in use if they have no
+-	 * corresponding physical device objects.
+-	 */
+-	if (list_empty(&adev->physical_node_list))
+-		acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
+-
+-	mutex_unlock(&adev->physical_node_lock);
+-
+-	return 0;
+-}
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 8fbdc172864b0..d91b560e88674 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -117,7 +117,6 @@ bool acpi_device_is_battery(struct acpi_device *adev);
+ bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+ 					const struct device *dev);
+ int acpi_bus_register_early_device(int type);
+-int acpi_dev_turn_off_if_unused(struct device *dev, void *not_used);
+ 
+ /* --------------------------------------------------------------------------
+                      Device Matching and Notification
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index ae9464091f1b1..b24513ec3fae1 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -2560,12 +2560,6 @@ int __init acpi_scan_init(void)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Make sure that power management resources are not blocked by ACPI
+-	 * device objects with no users.
+-	 */
+-	bus_for_each_dev(&acpi_bus_type, NULL, NULL, acpi_dev_turn_off_if_unused);
+-
+ 	acpi_turn_off_unused_power_resources();
+ 
+ 	acpi_scan_initialized = true;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 1f91bd41a29b2..b7956c5294257 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -272,19 +272,6 @@ static void __loop_update_dio(struct loop_device *lo, bool dio)
+ 		blk_mq_unfreeze_queue(lo->lo_queue);
+ }
+ 
+-/**
+- * loop_validate_block_size() - validates the passed in block size
+- * @bsize: size to validate
+- */
+-static int
+-loop_validate_block_size(unsigned short bsize)
+-{
+-	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+ /**
+  * loop_set_size() - sets device size and notifies userspace
+  * @lo: struct loop_device to set the size for
+@@ -1235,7 +1222,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	}
+ 
+ 	if (config->block_size) {
+-		error = loop_validate_block_size(config->block_size);
++		error = blk_validate_block_size(config->block_size);
+ 		if (error)
+ 			goto out_unlock;
+ 	}
+@@ -1761,7 +1748,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
+ 	if (lo->lo_state != Lo_bound)
+ 		return -ENXIO;
+ 
+-	err = loop_validate_block_size(arg);
++	err = blk_validate_block_size(arg);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index bd37d6fb88c26..40814d98a1a4b 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -434,6 +434,10 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
+ 
++	/* Additional Realtek 8761B Bluetooth devices */
++	{ USB_DEVICE(0x2357, 0x0604), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Additional Realtek 8761BU Bluetooth devices */
+ 	{ USB_DEVICE(0x0b05, 0x190e), .driver_info = BTUSB_REALTEK |
+ 	  					     BTUSB_WIDEBAND_SPEECH },
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 0bba672176b10..7ff89690a976a 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -97,8 +97,9 @@ config DRM_DEBUG_DP_MST_TOPOLOGY_REFS
+ 
+ config DRM_FBDEV_EMULATION
+ 	bool "Enable legacy fbdev support for your modesetting driver"
+-	depends on DRM_KMS_HELPER
+-	depends on FB=y || FB=DRM_KMS_HELPER
++	depends on DRM
++	depends on FB
++	select DRM_KMS_HELPER
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 3a9f4f8ad8f94..da5fbfcda433d 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -368,18 +368,6 @@ static void free_msi_irqs(struct pci_dev *dev)
+ 			for (i = 0; i < entry->nvec_used; i++)
+ 				BUG_ON(irq_has_action(entry->irq + i));
+ 
+-	pci_msi_teardown_msi_irqs(dev);
+-
+-	list_for_each_entry_safe(entry, tmp, msi_list, list) {
+-		if (entry->msi_attrib.is_msix) {
+-			if (list_is_last(&entry->list, msi_list))
+-				iounmap(entry->mask_base);
+-		}
+-
+-		list_del(&entry->list);
+-		free_msi_entry(entry);
+-	}
+-
+ 	if (dev->msi_irq_groups) {
+ 		sysfs_remove_groups(&dev->dev.kobj, dev->msi_irq_groups);
+ 		msi_attrs = dev->msi_irq_groups[0]->attrs;
+@@ -395,6 +383,18 @@ static void free_msi_irqs(struct pci_dev *dev)
+ 		kfree(dev->msi_irq_groups);
+ 		dev->msi_irq_groups = NULL;
+ 	}
++
++	pci_msi_teardown_msi_irqs(dev);
++
++	list_for_each_entry_safe(entry, tmp, msi_list, list) {
++		if (entry->msi_attrib.is_msix) {
++			if (list_is_last(&entry->list, msi_list))
++				iounmap(entry->mask_base);
++		}
++
++		list_del(&entry->list);
++		free_msi_entry(entry);
++	}
+ }
+ 
+ static void pci_intx_for_msi(struct pci_dev *dev, int enable)
+@@ -585,6 +585,9 @@ msi_setup_entry(struct pci_dev *dev, int nvec, struct irq_affinity *affd)
+ 		goto out;
+ 
+ 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
++	/* Lies, damned lies, and MSIs */
++	if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING)
++		control |= PCI_MSI_FLAGS_MASKBIT;
+ 
+ 	entry->msi_attrib.is_msix	= 0;
+ 	entry->msi_attrib.is_64		= !!(control & PCI_MSI_FLAGS_64BIT);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index cef69b71a6f12..d6000202a4556 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5760,3 +5760,9 @@ static void apex_pci_fixup_class(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_CLASS_HEADER(0x1ac1, 0x089a,
+ 			       PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class);
++
++static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
++{
++	pdev->dev_flags |= PCI_DEV_FLAGS_HAS_MSI_MASKING;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0ab8, nvidia_ion_ahci_fixup);
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 6379f26a335f6..9233f7e744544 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -89,7 +89,7 @@ static int of_thermal_get_temp(struct thermal_zone_device *tz,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
+-	if (!data->ops->get_temp)
++	if (!data->ops || !data->ops->get_temp)
+ 		return -EINVAL;
+ 
+ 	return data->ops->get_temp(data->sensor_data, temp);
+@@ -186,6 +186,9 @@ static int of_thermal_set_emul_temp(struct thermal_zone_device *tz,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
++	if (!data->ops || !data->ops->set_emul_temp)
++		return -EINVAL;
++
+ 	return data->ops->set_emul_temp(data->sensor_data, temp);
+ }
+ 
+@@ -194,7 +197,7 @@ static int of_thermal_get_trend(struct thermal_zone_device *tz, int trip,
+ {
+ 	struct __thermal_zone *data = tz->devdata;
+ 
+-	if (!data->ops->get_trend)
++	if (!data->ops || !data->ops->get_trend)
+ 		return -EINVAL;
+ 
+ 	return data->ops->get_trend(data->sensor_data, trip, trend);
+@@ -301,7 +304,7 @@ static int of_thermal_set_trip_temp(struct thermal_zone_device *tz, int trip,
+ 	if (trip >= data->ntrips || trip < 0)
+ 		return -EDOM;
+ 
+-	if (data->ops->set_trip_temp) {
++	if (data->ops && data->ops->set_trip_temp) {
+ 		int ret;
+ 
+ 		ret = data->ops->set_trip_temp(data->sensor_data, trip, temp);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index e7979fe7e4fad..42ed8ccc40e77 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -240,6 +240,14 @@ struct request {
+ 	void *end_io_data;
+ };
+ 
++static inline int blk_validate_block_size(unsigned int bsize)
++{
++	if (bsize < 512 || bsize > PAGE_SIZE || !is_power_of_2(bsize))
++		return -EINVAL;
++
++	return 0;
++}
++
+ static inline bool blk_op_is_passthrough(unsigned int op)
+ {
+ 	op &= REQ_OP_MASK;
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index acbed2ecf6e8c..25acd447c9227 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -227,6 +227,8 @@ enum pci_dev_flags {
+ 	PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10),
+ 	/* Don't use Relaxed Ordering for TLPs directed at this device */
+ 	PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11),
++	/* Device does honor MSI masking despite saying otherwise */
++	PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12),
+ };
+ 
+ enum pci_irq_reroute_variant {
+diff --git a/init/main.c b/init/main.c
+index 90733a916791f..5840218c06775 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -382,6 +382,7 @@ static char * __init xbc_make_cmdline(const char *key)
+ 	ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
+ 	if (ret < 0 || ret > len) {
+ 		pr_err("Failed to print extra kernel cmdline.\n");
++		memblock_free(__pa(new_cmdline), len + 1);
+ 		return NULL;
+ 	}
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 22c5b1622c226..69f86bc95011d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7141,7 +7141,6 @@ void perf_output_sample(struct perf_output_handle *handle,
+ static u64 perf_virt_to_phys(u64 virt)
+ {
+ 	u64 phys_addr = 0;
+-	struct page *p = NULL;
+ 
+ 	if (!virt)
+ 		return 0;
+@@ -7160,14 +7159,15 @@ static u64 perf_virt_to_phys(u64 virt)
+ 		 * If failed, leave phys_addr as 0.
+ 		 */
+ 		if (current->mm != NULL) {
++			struct page *p;
++
+ 			pagefault_disable();
+-			if (get_user_page_fast_only(virt, 0, &p))
++			if (get_user_page_fast_only(virt, 0, &p)) {
+ 				phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++				put_page(p);
++			}
+ 			pagefault_enable();
+ 		}
+-
+-		if (p)
+-			put_page(p);
+ 	}
+ 
+ 	return phys_addr;
+diff --git a/security/Kconfig b/security/Kconfig
+index 0ced7fd33e4d0..fe6c0395fa025 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -191,6 +191,9 @@ config HARDENED_USERCOPY_PAGESPAN
+ config FORTIFY_SOURCE
+ 	bool "Harden common str/mem functions against buffer overflows"
+ 	depends on ARCH_HAS_FORTIFY_SOURCE
++	# https://bugs.llvm.org/show_bug.cgi?id=50322
++	# https://bugs.llvm.org/show_bug.cgi?id=41459
++	depends on !CC_IS_CLANG
+ 	help
+ 	  Detect overflows of buffers in common string and memory functions
+ 	  where the compiler can determine and validate the buffer sizes.


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [gentoo-commits] proj/linux-patches:5.14 commit in: /
@ 2021-11-21 21:14 Mike Pagano
  0 siblings, 0 replies; 40+ messages in thread
From: Mike Pagano @ 2021-11-21 21:14 UTC (permalink / raw
  To: gentoo-commits

commit:     8077ca8990e6d4e9b0db60ec1e302f0699ba8d20
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 21 21:14:26 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 21 21:14:26 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8077ca89

Remove BMQ, will add back with fixed patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |    8 -
 5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch | 9523 --------------------------
 5021_BMQ-and-PDS-gentoo-defaults.patch       |   13 -
 3 files changed, 9544 deletions(-)

diff --git a/0000_README b/0000_README
index e8f44666..35f55e4e 100644
--- a/0000_README
+++ b/0000_README
@@ -166,11 +166,3 @@ Desc:   UID/GID shifting overlay filesystem for containers
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
-
-Patch:  5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
-From:   https://gitlab.com/alfredchen/linux-prjc
-Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
-
-Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
-From:   https://gitweb.gentoo.org/proj/linux-patches.git/
-Desc:   Set defaults for BMQ. Add archs as people test, default to N

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch b/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
deleted file mode 100644
index cf68d7ea..00000000
--- a/5020_BMQ-and-PDS-io-scheduler-v5.14-r3.patch
+++ /dev/null
@@ -1,9523 +0,0 @@
-diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
-index bdb22006f713..d755d7df632f 100644
---- a/Documentation/admin-guide/kernel-parameters.txt
-+++ b/Documentation/admin-guide/kernel-parameters.txt
-@@ -4947,6 +4947,12 @@
- 
- 	sbni=		[NET] Granch SBNI12 leased line adapter
- 
-+	sched_timeslice=
-+			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
-+			Format: integer 2, 4
-+			Default: 4
-+			See Documentation/scheduler/sched-BMQ.txt
-+
- 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
- 
- 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
-diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
-index 426162009ce9..15ac2d7e47cd 100644
---- a/Documentation/admin-guide/sysctl/kernel.rst
-+++ b/Documentation/admin-guide/sysctl/kernel.rst
-@@ -1542,3 +1542,13 @@ is 10 seconds.
- 
- The softlockup threshold is (``2 * watchdog_thresh``). Setting this
- tunable to zero will disable lockup detection altogether.
-+
-+yield_type:
-+===========
-+
-+BMQ/PDS CPU scheduler only. This determines what type of yield calls
-+to sched_yield will perform.
-+
-+  0 - No yield.
-+  1 - Deboost and requeue task. (default)
-+  2 - Set run queue skip task.
-diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
-new file mode 100644
-index 000000000000..05c84eec0f31
---- /dev/null
-+++ b/Documentation/scheduler/sched-BMQ.txt
-@@ -0,0 +1,110 @@
-+                         BitMap queue CPU Scheduler
-+                         --------------------------
-+
-+CONTENT
-+========
-+
-+ Background
-+ Design
-+   Overview
-+   Task policy
-+   Priority management
-+   BitMap Queue
-+   CPU Assignment and Migration
-+
-+
-+Background
-+==========
-+
-+BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
-+of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
-+and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
-+simple, while efficiency and scalable for interactive tasks, such as desktop,
-+movie playback and gaming etc.
-+
-+Design
-+======
-+
-+Overview
-+--------
-+
-+BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
-+queue, and each CPU is responsible for scheduling the tasks that are put
-+into its run queue.
-+
-+The run queue is a set of priority queues. Note that, as data structures,
-+these queues are FIFO queues for non-rt tasks and priority queues for rt
-+tasks. See BitMap Queue below for details. BMQ is optimized for non-rt tasks,
-+given that most applications are non-rt tasks. Whether a queue is FIFO or
-+priority, each queue is an ordered list of runnable tasks awaiting execution,
-+and the data structures are the same. When it is time for a new task to run,
-+the scheduler simply looks for the lowest numbered queue that contains a task
-+and runs the first task from the head of that queue. The per-CPU idle task is
-+also in the run queue, so the scheduler can always find a task to run from
-+its run queue.
-+
-+Each task is assigned the same timeslice (default 4ms) when it is picked to
-+start running. A task is reinserted at the end of the appropriate priority
-+queue when it uses up its whole timeslice. When the scheduler selects a new
-+task from the priority queue it sets the CPU's preemption timer for the
-+remainder of the previous timeslice. When that timer fires the scheduler
-+stops executing that task, selects another task and starts over again.
-+
-+If a task blocks waiting for a shared resource then it's taken out of its
-+priority queue and is placed in a wait queue for the shared resource. When it
-+is unblocked it will be reinserted in the appropriate priority queue of an
-+eligible CPU.
-+
-+Task policy
-+-----------
-+
-+BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
-+like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
-+tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
-+details of each policy.
-+
-+DEADLINE
-+	It is squashed as priority 0 FIFO task.
-+
-+FIFO/RR
-+	All RT tasks share one single priority queue in the BMQ run queue design.
-+The complexity of the insert operation is O(n). BMQ is not designed for
-+systems that mainly run rt policy tasks.
-+
-+NORMAL/BATCH/IDLE
-+	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
-+with NORMAL policy tasks, but they just don't get boosted. To control the
-+priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
-+
-+ISO
-+	ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
-+policy task instead.
-+
-+Priority management
-+-------------------
-+
-+RT tasks have priorities from 0-99. For non-rt tasks, two different factors
-+are used to determine the effective priority of a task, the effective
-+priority being what is used to determine which queue it will be in.
-+
-+The first factor is simply the task's static priority, which is assigned from
-+the task's nice level: [-20, 19] from userland's point of view and [0, 39]
-+internally.
-+
-+The second factor is the priority boost. This is a value bounded between
-+[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it is
-+modified in the following cases:
-+
-+*When a thread has used up its entire timeslice, it is always deboosted by
-+increasing its boost value by one.
-+*When a thread gives up cpu control (voluntarily or not) to reschedule, and
-+its switch-in time (time since it last switched in and ran) is below the
-+threshold based on its priority boost, it is boosted by decreasing its boost
-+value by one, capped at 0 (it won't go negative).
-+
-+The intent in this system is to ensure that interactive threads are serviced
-+quickly. These are usually the threads that interact directly with the user
-+and cause user-perceivable latency. These threads usually do little work and
-+spend most of their time blocked awaiting another user event. So they get the
-+priority boost from unblocking while background threads that do most of the
-+processing receive the priority penalty for using their entire timeslice.
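To make the queue-selection scheme described in the document above concrete, here is a compact user-space sketch of a bitmap-indexed set of FIFO queues. It is illustrative only, not the BMQ code; every name in it is invented:

/* Toy bitmap queue: one FIFO list per priority, plus a bitmap whose set
 * bits mark the non-empty lists. Not BMQ itself; names are made up. */
#include <stdio.h>

#define NUM_PRIOS 64

struct task {
	struct task *next;
	int prio;
	const char *name;
};

static struct task *head[NUM_PRIOS], *tail[NUM_PRIOS];
static unsigned long long bitmap;	/* bit p set => queue p is non-empty */

static void enqueue(struct task *t)	/* FIFO within one priority level */
{
	t->next = NULL;
	if (tail[t->prio])
		tail[t->prio]->next = t;
	else
		head[t->prio] = t;
	tail[t->prio] = t;
	bitmap |= 1ULL << t->prio;
}

static struct task *pick_next(void)
{
	if (!bitmap)
		return NULL;	/* BMQ proper always has the idle task queued */
	int prio = __builtin_ctzll(bitmap);	/* lowest numbered non-empty queue */
	struct task *t = head[prio];

	head[prio] = t->next;
	if (!head[prio]) {
		tail[prio] = NULL;
		bitmap &= ~(1ULL << prio);
	}
	return t;
}

int main(void)
{
	struct task a = { .prio = 20, .name = "compile" };
	struct task b = { .prio = 12, .name = "audio" };

	enqueue(&a);
	enqueue(&b);
	printf("runs first: %s\n", pick_next()->name);	/* "audio" wins */
	return 0;
}

The bitmap is what makes selection O(1): a single find-first-set locates the lowest numbered non-empty queue, which is exactly the lookup the Overview section describes and what the real code does with find_first_bit() over rq->queue.bitmap.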
-diff --git a/fs/proc/base.c b/fs/proc/base.c
-index e5b5f7709d48..284b3c4b7d90 100644
---- a/fs/proc/base.c
-+++ b/fs/proc/base.c
-@@ -476,7 +476,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
- 		seq_puts(m, "0 0 0\n");
- 	else
- 		seq_printf(m, "%llu %llu %lu\n",
--		   (unsigned long long)task->se.sum_exec_runtime,
-+		   (unsigned long long)tsk_seruntime(task),
- 		   (unsigned long long)task->sched_info.run_delay,
- 		   task->sched_info.pcount);
- 
-diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
-index 8874f681b056..59eb72bf7d5f 100644
---- a/include/asm-generic/resource.h
-+++ b/include/asm-generic/resource.h
-@@ -23,7 +23,7 @@
- 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
- 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
--	[RLIMIT_NICE]		= { 0, 0 },				\
-+	[RLIMIT_NICE]		= { 30, 30 },				\
- 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
- 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- }
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index ec8d07d88641..b12f660404fd 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -681,12 +681,18 @@ struct task_struct {
- 	unsigned int			ptrace;
- 
- #ifdef CONFIG_SMP
--	int				on_cpu;
- 	struct __call_single_node	wake_entry;
-+#endif
-+#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
-+	int				on_cpu;
-+#endif
-+
-+#ifdef CONFIG_SMP
- #ifdef CONFIG_THREAD_INFO_IN_TASK
- 	/* Current CPU: */
- 	unsigned int			cpu;
- #endif
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned int			wakee_flips;
- 	unsigned long			wakee_flip_decay_ts;
- 	struct task_struct		*last_wakee;
-@@ -700,6 +706,7 @@ struct task_struct {
- 	 */
- 	int				recent_used_cpu;
- 	int				wake_cpu;
-+#endif /* !CONFIG_SCHED_ALT */
- #endif
- 	int				on_rq;
- 
-@@ -708,6 +715,20 @@ struct task_struct {
- 	int				normal_prio;
- 	unsigned int			rt_priority;
- 
-+#ifdef CONFIG_SCHED_ALT
-+	u64				last_ran;
-+	s64				time_slice;
-+	int				sq_idx;
-+	struct list_head		sq_node;
-+#ifdef CONFIG_SCHED_BMQ
-+	int				boost_prio;
-+#endif /* CONFIG_SCHED_BMQ */
-+#ifdef CONFIG_SCHED_PDS
-+	u64				deadline;
-+#endif /* CONFIG_SCHED_PDS */
-+	/* sched_clock time spent running */
-+	u64				sched_time;
-+#else /* !CONFIG_SCHED_ALT */
- 	const struct sched_class	*sched_class;
- 	struct sched_entity		se;
- 	struct sched_rt_entity		rt;
-@@ -718,6 +739,7 @@ struct task_struct {
- 	unsigned long			core_cookie;
- 	unsigned int			core_occupation;
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_CGROUP_SCHED
- 	struct task_group		*sched_task_group;
-@@ -1417,6 +1439,15 @@ struct task_struct {
- 	 */
- };
- 
-+#ifdef CONFIG_SCHED_ALT
-+#define tsk_seruntime(t)		((t)->sched_time)
-+/* replace the uncertain rt_timeout with 0UL */
-+#define tsk_rttimeout(t)		(0UL)
-+#else /* CFS */
-+#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
-+#define tsk_rttimeout(t)	((t)->rt.timeout)
-+#endif /* !CONFIG_SCHED_ALT */
-+
- static inline struct pid *task_pid(struct task_struct *task)
- {
- 	return task->thread_pid;
-diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
-index 1aff00b65f3c..216fdf2fe90c 100644
---- a/include/linux/sched/deadline.h
-+++ b/include/linux/sched/deadline.h
-@@ -1,5 +1,24 @@
- /* SPDX-License-Identifier: GPL-2.0 */
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+static inline int dl_task(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#define __tsk_deadline(p)	(0UL)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
-+#endif
-+
-+#else
-+
-+#define __tsk_deadline(p)	((p)->dl.deadline)
-+
- /*
-  * SCHED_DEADLINE tasks has negative priorities, reflecting
-  * the fact that any of them has higher prio than RT and
-@@ -19,6 +38,7 @@ static inline int dl_task(struct task_struct *p)
- {
- 	return dl_prio(p->prio);
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- static inline bool dl_time_before(u64 a, u64 b)
- {
-diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
-index ab83d85e1183..6af9ae681116 100644
---- a/include/linux/sched/prio.h
-+++ b/include/linux/sched/prio.h
-@@ -18,6 +18,32 @@
- #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
- #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+/* Undefine MAX_PRIO and DEFAULT_PRIO */
-+#undef MAX_PRIO
-+#undef DEFAULT_PRIO
-+
-+/* +/- priority levels from the base priority */
-+#ifdef CONFIG_SCHED_BMQ
-+#define MAX_PRIORITY_ADJ	(7)
-+
-+#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
-+#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define MAX_PRIORITY_ADJ	(0)
-+
-+#define MIN_NORMAL_PRIO		(128)
-+#define NORMAL_PRIO_NUM		(64)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
-+#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
-+#endif
-+
-+#endif /* CONFIG_SCHED_ALT */
-+
- /*
-  * Convert user-nice values [ -20 ... 0 ... 19 ]
-  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
-diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
-index e5af028c08b4..0a7565d0d3cf 100644
---- a/include/linux/sched/rt.h
-+++ b/include/linux/sched/rt.h
-@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
- 
- 	if (policy == SCHED_FIFO || policy == SCHED_RR)
- 		return true;
-+#ifndef CONFIG_SCHED_ALT
- 	if (policy == SCHED_DEADLINE)
- 		return true;
-+#endif
- 	return false;
- }
- 
-diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
-index 8f0f778b7c91..991f2280475b 100644
---- a/include/linux/sched/topology.h
-+++ b/include/linux/sched/topology.h
-@@ -225,7 +225,8 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
- 
- #endif	/* !CONFIG_SMP */
- 
--#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
-+	!defined(CONFIG_SCHED_ALT)
- extern void rebuild_sched_domains_energy(void);
- #else
- static inline void rebuild_sched_domains_energy(void)
-diff --git a/init/Kconfig b/init/Kconfig
-index 55f9f7738ebb..9a9b244d3ca3 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -786,9 +786,39 @@ config GENERIC_SCHED_CLOCK
- 
- menu "Scheduler features"
- 
-+menuconfig SCHED_ALT
-+	bool "Alternative CPU Schedulers"
-+	default y
-+	help
-+	  This feature enables alternative CPU schedulers.
-+
-+if SCHED_ALT
-+
-+choice
-+	prompt "Alternative CPU Scheduler"
-+	default SCHED_BMQ
-+
-+config SCHED_BMQ
-+	bool "BMQ CPU scheduler"
-+	help
-+	  The BitMap Queue CPU scheduler for excellent interactivity and
-+	  responsiveness on the desktop and solid scalability on normal
-+	  hardware and commodity servers.
-+
-+config SCHED_PDS
-+	bool "PDS CPU scheduler"
-+	help
-+	  The Priority and Deadline based Skip list multiple queue CPU
-+	  Scheduler.
-+
-+endchoice
-+
-+endif
-+
- config UCLAMP_TASK
- 	bool "Enable utilization clamping for RT/FAIR tasks"
- 	depends on CPU_FREQ_GOV_SCHEDUTIL
-+	depends on !SCHED_ALT
- 	help
- 	  This feature enables the scheduler to track the clamped utilization
- 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -874,6 +904,7 @@ config NUMA_BALANCING
- 	depends on ARCH_SUPPORTS_NUMA_BALANCING
- 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
- 	depends on SMP && NUMA && MIGRATION
-+	depends on !SCHED_ALT
- 	help
- 	  This option adds support for automatic NUMA aware memory/task placement.
- 	  The mechanism is quite primitive and is based on migrating memory when
-@@ -966,6 +997,7 @@ config FAIR_GROUP_SCHED
- 	depends on CGROUP_SCHED
- 	default CGROUP_SCHED
- 
-+if !SCHED_ALT
- config CFS_BANDWIDTH
- 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
- 	depends on FAIR_GROUP_SCHED
-@@ -988,6 +1020,7 @@ config RT_GROUP_SCHED
- 	  realtime bandwidth for them.
- 	  See Documentation/scheduler/sched-rt-group.rst for more information.
- 
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
- 
- config UCLAMP_TASK_GROUP
-@@ -1231,6 +1264,7 @@ config CHECKPOINT_RESTORE
- 
- config SCHED_AUTOGROUP
- 	bool "Automatic process group scheduling"
-+	depends on !SCHED_ALT
- 	select CGROUPS
- 	select CGROUP_SCHED
- 	select FAIR_GROUP_SCHED
-diff --git a/init/init_task.c b/init/init_task.c
-index 562f2ef8d157..177b63db4ce0 100644
---- a/init/init_task.c
-+++ b/init/init_task.c
-@@ -75,9 +75,15 @@ struct task_struct init_task
- 	.stack		= init_stack,
- 	.usage		= REFCOUNT_INIT(2),
- 	.flags		= PF_KTHREAD,
-+#ifdef CONFIG_SCHED_ALT
-+	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
-+	.static_prio	= DEFAULT_PRIO,
-+	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
-+#else
- 	.prio		= MAX_PRIO - 20,
- 	.static_prio	= MAX_PRIO - 20,
- 	.normal_prio	= MAX_PRIO - 20,
-+#endif
- 	.policy		= SCHED_NORMAL,
- 	.cpus_ptr	= &init_task.cpus_mask,
- 	.cpus_mask	= CPU_MASK_ALL,
-@@ -87,6 +93,17 @@ struct task_struct init_task
- 	.restart_block	= {
- 		.fn = do_no_restart_syscall,
- 	},
-+#ifdef CONFIG_SCHED_ALT
-+	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
-+#ifdef CONFIG_SCHED_BMQ
-+	.boost_prio	= 0,
-+	.sq_idx		= 15,
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+	.deadline	= 0,
-+#endif
-+	.time_slice	= HZ,
-+#else
- 	.se		= {
- 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
- 	},
-@@ -94,6 +111,7 @@ struct task_struct init_task
- 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
- 		.time_slice	= RR_TIMESLICE,
- 	},
-+#endif
- 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
- #ifdef CONFIG_SMP
- 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
-diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
-index 5876e30c5740..7594d0a31869 100644
---- a/kernel/Kconfig.preempt
-+++ b/kernel/Kconfig.preempt
-@@ -102,7 +102,7 @@ config PREEMPT_DYNAMIC
- 
- config SCHED_CORE
- 	bool "Core Scheduling for SMT"
--	depends on SCHED_SMT
-+	depends on SCHED_SMT && !SCHED_ALT
- 	help
- 	  This option permits Core Scheduling, a means of coordinated task
- 	  selection across SMT siblings. When enabled -- see
-diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
-index adb5190c4429..8c02bce63146 100644
---- a/kernel/cgroup/cpuset.c
-+++ b/kernel/cgroup/cpuset.c
-@@ -636,7 +636,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
- 	return ret;
- }
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
- /*
-  * Helper routine for generate_sched_domains().
-  * Do cpusets a, b have overlapping effective cpus_allowed masks?
-@@ -1032,7 +1032,7 @@ static void rebuild_sched_domains_locked(void)
- 	/* Have scheduler rebuild the domains */
- 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
- }
--#else /* !CONFIG_SMP */
-+#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
- static void rebuild_sched_domains_locked(void)
- {
- }
-diff --git a/kernel/delayacct.c b/kernel/delayacct.c
-index 51530d5b15a8..e542d71bb94b 100644
---- a/kernel/delayacct.c
-+++ b/kernel/delayacct.c
-@@ -139,7 +139,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
- 	 */
- 	t1 = tsk->sched_info.pcount;
- 	t2 = tsk->sched_info.run_delay;
--	t3 = tsk->se.sum_exec_runtime;
-+	t3 = tsk_seruntime(tsk);
- 
- 	d->cpu_count += t1;
- 
-diff --git a/kernel/exit.c b/kernel/exit.c
-index 9a89e7f36acb..7fe34c56bd08 100644
---- a/kernel/exit.c
-+++ b/kernel/exit.c
-@@ -122,7 +122,7 @@ static void __exit_signal(struct task_struct *tsk)
- 			sig->curr_target = next_thread(tsk);
- 	}
- 
--	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-+	add_device_randomness((const void*) &tsk_seruntime(tsk),
- 			      sizeof(unsigned long long));
- 
- 	/*
-@@ -143,7 +143,7 @@ static void __exit_signal(struct task_struct *tsk)
- 	sig->inblock += task_io_get_inblock(tsk);
- 	sig->oublock += task_io_get_oublock(tsk);
- 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
--	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
-+	sig->sum_sched_runtime += tsk_seruntime(tsk);
- 	sig->nr_threads--;
- 	__unhash_process(tsk, group_dead);
- 	write_sequnlock(&sig->stats_lock);
-diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
-index 3a4beb9395c4..98a709628cb3 100644
---- a/kernel/livepatch/transition.c
-+++ b/kernel/livepatch/transition.c
-@@ -307,7 +307,11 @@ static bool klp_try_switch_task(struct task_struct *task)
- 	 */
- 	rq = task_rq_lock(task, &flags);
- 
-+#ifdef	CONFIG_SCHED_ALT
-+	if (task_running(task) && task != current) {
-+#else
- 	if (task_running(rq, task) && task != current) {
-+#endif
- 		snprintf(err_buf, STACK_ERR_BUF_SIZE,
- 			 "%s: %s:%d is running\n", __func__, task->comm,
- 			 task->pid);
-diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
-index ad0db322ed3b..350b0e506c17 100644
---- a/kernel/locking/rtmutex.c
-+++ b/kernel/locking/rtmutex.c
-@@ -227,14 +227,18 @@ static __always_inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
-  * Only use with rt_mutex_waiter_{less,equal}()
-  */
- #define task_to_waiter(p)	\
--	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
-+	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = __tsk_deadline(p) }
- 
- static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
- 						struct rt_mutex_waiter *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline < right->deadline);
-+#else
- 	if (left->prio < right->prio)
- 		return 1;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -243,16 +247,22 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
- 	 */
- 	if (dl_prio(left->prio))
- 		return dl_time_before(left->deadline, right->deadline);
-+#endif
- 
- 	return 0;
-+#endif
- }
- 
- static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
- 						 struct rt_mutex_waiter *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline == right->deadline);
-+#else
- 	if (left->prio != right->prio)
- 		return 0;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -261,8 +271,10 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
- 	 */
- 	if (dl_prio(left->prio))
- 		return left->deadline == right->deadline;
-+#endif
- 
- 	return 1;
-+#endif
- }
- 
- #define __node_2_waiter(node) \
-@@ -654,7 +666,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
- 	 * the values of the node being removed.
- 	 */
- 	waiter->prio = task->prio;
--	waiter->deadline = task->dl.deadline;
-+	waiter->deadline = __tsk_deadline(task);
- 
- 	rt_mutex_enqueue(lock, waiter);
- 
-@@ -925,7 +937,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex *lock,
- 	waiter->task = task;
- 	waiter->lock = lock;
- 	waiter->prio = task->prio;
--	waiter->deadline = task->dl.deadline;
-+	waiter->deadline = __tsk_deadline(task);
- 
- 	/* Get the top priority waiter on the lock */
- 	if (rt_mutex_has_waiters(lock))
-diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
-index 978fcfca5871..0425ee149b4d 100644
---- a/kernel/sched/Makefile
-+++ b/kernel/sched/Makefile
-@@ -22,14 +22,21 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
- CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
- endif
- 
--obj-y += core.o loadavg.o clock.o cputime.o
--obj-y += idle.o fair.o rt.o deadline.o
--obj-y += wait.o wait_bit.o swait.o completion.o
--
--obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o
-+ifdef CONFIG_SCHED_ALT
-+obj-y += alt_core.o
-+obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
-+else
-+obj-y += core.o
-+obj-y += fair.o rt.o deadline.o
-+obj-$(CONFIG_SMP) += cpudeadline.o stop_task.o
- obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
--obj-$(CONFIG_SCHEDSTATS) += stats.o
-+endif
- obj-$(CONFIG_SCHED_DEBUG) += debug.o
-+obj-y += loadavg.o clock.o cputime.o
-+obj-y += idle.o
-+obj-y += wait.o wait_bit.o swait.o completion.o
-+obj-$(CONFIG_SMP) += cpupri.o pelt.o topology.o
-+obj-$(CONFIG_SCHEDSTATS) += stats.o
- obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
- obj-$(CONFIG_CPU_FREQ) += cpufreq.o
- obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
-diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
-new file mode 100644
-index 000000000000..56aed2b1e42c
---- /dev/null
-+++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7341 @@
-+/*
-+ *  kernel/sched/alt_core.c
-+ *
-+ *  Core alternative kernel scheduler code and related syscalls
-+ *
-+ *  Copyright (C) 1991-2002  Linus Torvalds
-+ *
-+ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
-+ *		a whole lot of those previous things.
-+ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
-+ *		scheduler by Alfred Chen.
-+ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
-+ */
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/sched.h>
-+#undef CREATE_TRACE_POINTS
-+
-+#include "sched.h"
-+
-+#include <linux/sched/rt.h>
-+
-+#include <linux/context_tracking.h>
-+#include <linux/compat.h>
-+#include <linux/blkdev.h>
-+#include <linux/delayacct.h>
-+#include <linux/freezer.h>
-+#include <linux/init_task.h>
-+#include <linux/kprobes.h>
-+#include <linux/mmu_context.h>
-+#include <linux/nmi.h>
-+#include <linux/profile.h>
-+#include <linux/rcupdate_wait.h>
-+#include <linux/security.h>
-+#include <linux/syscalls.h>
-+#include <linux/wait_bit.h>
-+
-+#include <linux/kcov.h>
-+#include <linux/scs.h>
-+
-+#include <asm/switch_to.h>
-+
-+#include "../workqueue_internal.h"
-+#include "../../fs/io-wq.h"
-+#include "../smpboot.h"
-+
-+#include "pelt.h"
-+#include "smp.h"
-+
-+/*
-+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
-+ * associated with them) to allow external modules to probe them.
-+ */
-+EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+#define sched_feat(x)	(1)
-+/*
-+ * Print a warning if need_resched is set for the given duration (if
-+ * LATENCY_WARN is enabled).
-+ *
-+ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
-+ * per boot.
-+ */
-+__read_mostly int sysctl_resched_latency_warn_ms = 100;
-+__read_mostly int sysctl_resched_latency_warn_once = 1;
-+#else
-+#define sched_feat(x)	(0)
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+#define ALT_SCHED_VERSION "v5.14-r3"
-+
-+/* rt_prio(prio) defined in include/linux/sched/rt.h */
-+#define rt_task(p)		rt_prio((p)->prio)
-+#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
-+#define task_has_rt_policy(p)	(rt_policy((p)->policy))
-+
-+#define STOP_PRIO		(MAX_RT_PRIO - 1)
-+
-+/* Default time slice is 4 in ms, can be set via kernel parameter "sched_timeslice" */
-+u64 sched_timeslice_ns __read_mostly = (4 << 20);
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq);
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#include "bmq.h"
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+#include "pds.h"
-+#endif
-+
-+static int __init sched_timeslice(char *str)
-+{
-+	int timeslice_ms;
-+
-+	get_option(&str, &timeslice_ms);
-+	if (2 != timeslice_ms)
-+		timeslice_ms = 4;
-+	sched_timeslice_ns = timeslice_ms << 20;
-+	sched_timeslice_imp(timeslice_ms);
-+
-+	return 0;
-+}
-+early_param("sched_timeslice", sched_timeslice);
-+
-+/* Reschedule if less than this many μs left */
-+#define RESCHED_NS		(100 << 10)
-+
-+/**
-+ * sched_yield_type - Choose what sort of yield sched_yield will perform.
-+ * 0: No yield.
-+ * 1: Deboost and requeue task. (default)
-+ * 2: Set rq skip task.
-+ */
-+int sched_yield_type __read_mostly = 1;
-+
-+#ifdef CONFIG_SMP
-+static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DEFINE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
-+DEFINE_PER_CPU(cpumask_t *, sched_cpu_topo_end_mask);
-+
-+#ifdef CONFIG_SCHED_SMT
-+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-+EXPORT_SYMBOL_GPL(sched_smt_present);
-+#endif
-+
-+/*
-+ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
-+ * the domain); this allows us to quickly tell if two cpus are in the same cache
-+ * domain, see cpus_share_cache().
-+ */
-+DEFINE_PER_CPU(int, sd_llc_id);
-+#endif /* CONFIG_SMP */
-+
-+static DEFINE_MUTEX(sched_hotcpu_mutex);
-+
-+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
-+#endif
-+static cpumask_t sched_rq_watermark[SCHED_BITS] ____cacheline_aligned_in_smp;
-+
-+/* sched_queue related functions */
-+static inline void sched_queue_init(struct sched_queue *q)
-+{
-+	int i;
-+
-+	bitmap_zero(q->bitmap, SCHED_BITS);
-+	for(i = 0; i < SCHED_BITS; i++)
-+		INIT_LIST_HEAD(&q->heads[i]);
-+}
-+
-+/*
-+ * Init idle task and put into queue structure of rq
-+ * IMPORTANT: may be called multiple times for a single cpu
-+ */
-+static inline void sched_queue_init_idle(struct sched_queue *q,
-+					 struct task_struct *idle)
-+{
-+	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
-+	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
-+	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
-+}
-+
-+/* water mark related functions */
-+static inline void update_sched_rq_watermark(struct rq *rq)
-+{
-+	unsigned long watermark = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+	unsigned long last_wm = rq->watermark;
-+	unsigned long i;
-+	int cpu;
-+
-+	if (watermark == last_wm)
-+		return;
-+
-+	rq->watermark = watermark;
-+	cpu = cpu_of(rq);
-+	if (watermark < last_wm) {
-+		for (i = last_wm; i > watermark; i--)
-+			cpumask_clear_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
-+#ifdef CONFIG_SCHED_SMT
-+		if (static_branch_likely(&sched_smt_present) &&
-+		    IDLE_TASK_SCHED_PRIO == last_wm)
-+			cpumask_andnot(&sched_sg_idle_mask,
-+				       &sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+		return;
-+	}
-+	/* last_wm < watermark */
-+	for (i = watermark; i > last_wm; i--)
-+		cpumask_set_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
-+#ifdef CONFIG_SCHED_SMT
-+	if (static_branch_likely(&sched_smt_present) &&
-+	    IDLE_TASK_SCHED_PRIO == watermark) {
-+		cpumask_t tmp;
-+
-+		cpumask_and(&tmp, cpu_smt_mask(cpu), sched_rq_watermark);
-+		if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
-+			cpumask_or(&sched_sg_idle_mask,
-+				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
-+	}
-+#endif
-+}
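-+
-+/*
-+ * Indexing sketch (follows from the loops above; illustrative only): a CPU
-+ * whose watermark is `wm` stays set in sched_rq_watermark[SCHED_BITS - 1 - i]
-+ * for every level 1 <= i <= wm, so index 0 holds only fully idle CPUs while
-+ * higher indices also admit CPUs that are already running more urgent work.
-+ */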
-+
-+/*
-+ * This routine assumes that the idle task is always in the queue.
-+ */
-+static inline struct task_struct *sched_rq_first_task(struct rq *rq)
-+{
-+	unsigned long idx = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+	const struct list_head *head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+	return list_first_entry(head, struct task_struct, sq_node);
-+}
-+
-+static inline struct task_struct *
-+sched_rq_next_task(struct task_struct *p, struct rq *rq)
-+{
-+	unsigned long idx = p->sq_idx;
-+	struct list_head *head = &rq->queue.heads[idx];
-+
-+	if (list_is_last(&p->sq_node, head)) {
-+		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
-+				    sched_idx2prio(idx, rq) + 1);
-+		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+		return list_first_entry(head, struct task_struct, sq_node);
-+	}
-+
-+	return list_next_entry(p, sq_node);
-+}
-+
-+static inline struct task_struct *rq_runnable_task(struct rq *rq)
-+{
-+	struct task_struct *next = sched_rq_first_task(rq);
-+
-+	if (unlikely(next == rq->skip))
-+		next = sched_rq_next_task(next, rq);
-+
-+	return next;
-+}
-+
-+/*
-+ * Serialization rules:
-+ *
-+ * Lock order:
-+ *
-+ *   p->pi_lock
-+ *     rq->lock
-+ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
-+ *
-+ *  rq1->lock
-+ *    rq2->lock  where: rq1 < rq2
-+ *
-+ * Regular state:
-+ *
-+ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
-+ * local CPU's rq->lock, it optionally removes the task from the runqueue and
-+ * always looks at the local rq data structures to find the most eligible task
-+ * to run next.
-+ *
-+ * Task enqueue is also under rq->lock, possibly taken from another CPU.
-+ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
-+ * the local CPU to avoid bouncing the runqueue state around [ see
-+ * ttwu_queue_wakelist() ]
-+ *
-+ * Task wakeup, specifically wakeups that involve migration, are horribly
-+ * complicated to avoid having to take two rq->locks.
-+ *
-+ * Special state:
-+ *
-+ * System-calls and anything external will use task_rq_lock() which acquires
-+ * both p->pi_lock and rq->lock. As a consequence the state they change is
-+ * stable while holding either lock:
-+ *
-+ *  - sched_setaffinity()/
-+ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
-+ *  - set_user_nice():		p->se.load, p->*prio
-+ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
-+ *				p->se.load, p->rt_priority,
-+ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
-+ *  - sched_setnuma():		p->numa_preferred_nid
-+ *  - sched_move_task()/
-+ *    cpu_cgroup_fork():	p->sched_task_group
-+ *  - uclamp_update_active()	p->uclamp*
-+ *
-+ * p->state <- TASK_*:
-+ *
-+ *   is changed locklessly using set_current_state(), __set_current_state() or
-+ *   set_special_state(), see their respective comments, or by
-+ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
-+ *   concurrent self.
-+ *
-+ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
-+ *
-+ *   is set by activate_task() and cleared by deactivate_task(), under
-+ *   rq->lock. Non-zero indicates the task is runnable, the special
-+ *   ON_RQ_MIGRATING state is used for migration without holding both
-+ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
-+ *
-+ * p->on_cpu <- { 0, 1 }:
-+ *
-+ *   is set by prepare_task() and cleared by finish_task() such that it will be
-+ *   set before p is scheduled-in and cleared after p is scheduled-out, both
-+ *   under rq->lock. Non-zero indicates the task is running on its CPU.
-+ *
-+ *   [ The astute reader will observe that it is possible for two tasks on one
-+ *     CPU to have ->on_cpu = 1 at the same time. ]
-+ *
-+ * task_cpu(p): is changed by set_task_cpu(), the rules are:
-+ *
-+ *  - Don't call set_task_cpu() on a blocked task:
-+ *
-+ *    We don't care what CPU we're not running on, this simplifies hotplug,
-+ *    the CPU assignment of blocked tasks isn't required to be valid.
-+ *
-+ *  - for try_to_wake_up(), called under p->pi_lock:
-+ *
-+ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
-+ *
-+ *  - for migration called under rq->lock:
-+ *    [ see task_on_rq_migrating() in task_rq_lock() ]
-+ *
-+ *    o move_queued_task()
-+ *    o detach_task()
-+ *
-+ *  - for migration called under double_rq_lock():
-+ *
-+ *    o __migrate_swap_task()
-+ *    o push_rt_task() / pull_rt_task()
-+ *    o push_dl_task() / pull_dl_task()
-+ *    o dl_task_offline_migration()
-+ *
-+ */
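-+
-+/*
-+ * Minimal usage sketch of the rules above (illustrative; assumes the
-+ * matching task_rq_unlock() helper from the scheduler headers). Takes
-+ * p->pi_lock first, then rq->lock:
-+ *
-+ *	struct rq_flags rf;
-+ *	struct rq *rq = task_rq_lock(p, &rf);
-+ *	... p->policy, p->prio etc. are stable here ...
-+ *	task_rq_unlock(rq, p, &rf);
-+ */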
-+
-+/*
-+ * Context: p->pi_lock
-+ */
-+static inline struct rq
-+*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock(&rq->lock);
-+			if (likely((p->on_cpu || task_on_rq_queued(p))
-+				   && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock(&rq->lock);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			*plock = NULL;
-+			return rq;
-+		}
-+	}
-+}
-+
-+static inline void
-+__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
-+{
-+	if (NULL != lock)
-+		raw_spin_unlock(lock);
-+}
-+
-+static inline struct rq
-+*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
-+			  unsigned long *flags)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock_irqsave(&rq->lock, *flags);
-+			if (likely((p->on_cpu || task_on_rq_queued(p))
-+				   && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&rq->lock, *flags);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			raw_spin_lock_irqsave(&p->pi_lock, *flags);
-+			if (likely(!p->on_cpu && !p->on_rq &&
-+				   rq == task_rq(p))) {
-+				*plock = &p->pi_lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
-+		}
-+	}
-+}
-+
-+static inline void
-+task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
-+			      unsigned long *flags)
-+{
-+	raw_spin_unlock_irqrestore(lock, *flags);
-+}
-+
-+/*
-+ * __task_rq_lock - lock the rq @p resides on.
-+ */
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	lockdep_assert_held(&p->pi_lock);
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-+			return rq;
-+		raw_spin_unlock(&rq->lock);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+/*
-+ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
-+ */
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	for (;;) {
-+		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		/*
-+		 *	move_queued_task()		task_rq_lock()
-+		 *
-+		 *	ACQUIRE (rq->lock)
-+		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
-+		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
-+		 *	[S] ->cpu = new_cpu		[L] task_rq()
-+		 *					[L] ->on_rq
-+		 *	RELEASE (rq->lock)
-+		 *
-+		 * If we observe the old CPU in task_rq_lock(), the acquire of
-+		 * the old rq->lock will fully serialize against the stores.
-+		 *
-+		 * If we observe the new CPU in task_rq_lock(), the address
-+		 * dependency headed by '[L] rq = task_rq()' and the acquire
-+		 * will pair with the WMB to ensure we then also see migrating.
-+		 */
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
-+			return rq;
-+		}
-+		raw_spin_unlock(&rq->lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+static inline void
-+rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-+}
-+
-+static inline void
-+rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-+}
-+
-+void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
-+{
-+	raw_spinlock_t *lock;
-+
-+	/* Matches synchronize_rcu() in __sched_core_enable() */
-+	preempt_disable();
-+
-+	for (;;) {
-+		lock = __rq_lockp(rq);
-+		raw_spin_lock_nested(lock, subclass);
-+		if (likely(lock == __rq_lockp(rq))) {
-+			/* preempt_count *MUST* be > 1 */
-+			preempt_enable_no_resched();
-+			return;
-+		}
-+		raw_spin_unlock(lock);
-+	}
-+}
-+
-+void raw_spin_rq_unlock(struct rq *rq)
-+{
-+	raw_spin_unlock(rq_lockp(rq));
-+}
-+
-+/*
-+ * RQ-clock updating methods:
-+ */
-+
-+static void update_rq_clock_task(struct rq *rq, s64 delta)
-+{
-+/*
-+ * In theory, the compiler should just see 0 here, and optimize out the call
-+ * to sched_rt_avg_update. But I don't trust it...
-+ */
-+	s64 __maybe_unused steal = 0, irq_delta = 0;
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
-+
-+	/*
-+	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
-+	 * this case when a previous update_rq_clock() happened inside a
-+	 * {soft,}irq region.
-+	 *
-+	 * When this happens, we stop ->clock_task and only update the
-+	 * prev_irq_time stamp to account for the part that fit, so that a next
-+	 * update will consume the rest. This ensures ->clock_task is
-+	 * monotonic.
-+	 *
-+	 * It does, however, cause some slight misattribution of {soft,}irq
-+	 * time, a more accurate solution would be to update the irq_time using
-+	 * the current rq->clock timestamp, except that would require using
-+	 * atomic ops.
-+	 */
-+	if (irq_delta > delta)
-+		irq_delta = delta;
-+
-+	rq->prev_irq_time += irq_delta;
-+	delta -= irq_delta;
-+#endif
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	if (static_key_false((&paravirt_steal_rq_enabled))) {
-+		steal = paravirt_steal_clock(cpu_of(rq));
-+		steal -= rq->prev_steal_time_rq;
-+
-+		if (unlikely(steal > delta))
-+			steal = delta;
-+
-+		rq->prev_steal_time_rq += steal;
-+		delta -= steal;
-+	}
-+#endif
-+
-+	rq->clock_task += delta;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	if ((irq_delta + steal))
-+		update_irq_load_avg(rq, irq_delta + steal);
-+#endif
-+}
-+
-+static inline void update_rq_clock(struct rq *rq)
-+{
-+	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
-+
-+	if (unlikely(delta <= 0))
-+		return;
-+	rq->clock += delta;
-+	update_rq_time_edge(rq);
-+	update_rq_clock_task(rq, delta);
-+}
-+
-+/*
-+ * RQ Load update routine
-+ */
-+#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
-+#define RQ_UTIL_SHIFT			(8)
-+#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
-+
-+#define LOAD_BLOCK(t)		((t) >> 17)
-+#define LOAD_HALF_BLOCK(t)	((t) >> 16)
-+#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
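-+
-+/*
-+ * Worked example of the fixed-point scheme above (sketch): time is chopped
-+ * into 2^17 ns (~131 us) blocks and load_history keeps one busy/idle bit
-+ * per block, newest in the MSB; RQ_LOAD_HISTORY_TO_UTIL() then reads the
-+ * eight bits just below the MSB as a 0..255 utilization sample.
-+ */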
-+
-+static inline void rq_load_update(struct rq *rq)
-+{
-+	u64 time = rq->clock;
-+	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
-+			RQ_LOAD_HISTORY_BITS - 1);
-+	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
-+	u64 curr = !!cpu_rq(rq->cpu)->nr_running;
-+
-+	if (delta) {
-+		rq->load_history = rq->load_history >> delta;
-+
-+		if (delta < RQ_UTIL_SHIFT) {
-+			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
-+			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
-+				rq->load_history ^= LOAD_BLOCK_BIT(delta);
-+		}
-+
-+		rq->load_block = BLOCK_MASK(time) * prev;
-+	} else {
-+		rq->load_block += (time - rq->load_stamp) * prev;
-+	}
-+	if (prev ^ curr)
-+		rq->load_history ^= CURRENT_LOAD_BIT;
-+	rq->load_stamp = time;
-+}
-+
-+unsigned long rq_load_util(struct rq *rq, unsigned long max)
-+{
-+	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
-+}
-+
-+#ifdef CONFIG_SMP
-+unsigned long sched_cpu_util(int cpu, unsigned long max)
-+{
-+	return rq_load_util(cpu_rq(cpu), max);
-+}
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_CPU_FREQ
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because they may not be coming in if only RT tasks are
-+ * active all the time (or there are RT tasks only).
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid.  Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+	struct update_util_data *data;
-+
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
-+						  cpu_of(rq)));
-+	if (data)
-+		data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+}
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+/*
-+ * Tick may be needed by tasks in the runqueue depending on their policy and
-+ * requirements. If the tick is needed, send the target an IPI to kick it out
-+ * of nohz mode.
-+ */
-+static inline void sched_update_tick_dependency(struct rq *rq)
-+{
-+	int cpu = cpu_of(rq);
-+
-+	if (!tick_nohz_full_cpu(cpu))
-+		return;
-+
-+	if (rq->nr_running < 2)
-+		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
-+	else
-+		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
-+}
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_update_tick_dependency(struct rq *rq) { }
-+#endif
-+
-+bool sched_task_on_rq(struct task_struct *p)
-+{
-+	return task_on_rq_queued(p);
-+}
-+
-+/*
-+ * Add/Remove/Requeue task to/from the runqueue routines
-+ * Context: rq->lock
-+ */
-+#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)		\
-+	psi_dequeue(p, flags & DEQUEUE_SLEEP);			\
-+	sched_info_dequeue(rq, p);				\
-+								\
-+	list_del(&p->sq_node);					\
-+	if (list_empty(&rq->queue.heads[p->sq_idx])) {		\
-+		clear_bit(sched_idx2prio(p->sq_idx, rq),	\
-+			  rq->queue.bitmap);			\
-+		func;						\
-+	}
-+
-+#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
-+	sched_info_enqueue(rq, p);					\
-+	psi_enqueue(p, flags);						\
-+									\
-+	p->sq_idx = task_sched_prio_idx(p, rq);				\
-+	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
-+	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
-+
-+static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+
-+	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_rq_watermark(rq));
-+	--rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (1 == rq->nr_running)
-+		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: enqueue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+
-+	__SCHED_ENQUEUE_TASK(p, rq, flags);
-+	update_sched_rq_watermark(rq);
-+	++rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (2 == rq->nr_running)
-+		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq)
-+{
-+	int idx;
-+
-+	lockdep_assert_held(&rq->lock);
-+	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
-+		  cpu_of(rq), task_cpu(p));
-+
-+	idx = task_sched_prio_idx(p, rq);
-+
-+	list_del(&p->sq_node);
-+	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
-+	if (idx != p->sq_idx) {
-+		if (list_empty(&rq->queue.heads[p->sq_idx]))
-+			clear_bit(sched_idx2prio(p->sq_idx, rq),
-+				  rq->queue.bitmap);
-+		p->sq_idx = idx;
-+		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
-+		update_sched_rq_watermark(rq);
-+	}
-+}
-+
-+/*
-+ * cmpxchg based fetch_or, macro so it works for different integer types
-+ */
-+#define fetch_or(ptr, mask)						\
-+	({								\
-+		typeof(ptr) _ptr = (ptr);				\
-+		typeof(mask) _mask = (mask);				\
-+		typeof(*_ptr) _old, _val = *_ptr;			\
-+									\
-+		for (;;) {						\
-+			_old = cmpxchg(_ptr, _val, _val | _mask);	\
-+			if (_old == _val)				\
-+				break;					\
-+			_val = _old;					\
-+		}							\
-+	_old;								\
-+})
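-+
-+/*
-+ * Usage sketch: fetch_or(&ti->flags, _TIF_NEED_RESCHED) atomically ORs the
-+ * mask in and returns the prior value, so the caller can tell whether the
-+ * bit was already set (see set_nr_and_not_polling() below).
-+ */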
-+
-+#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
-+/*
-+ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
-+ * this avoids any races wrt polling state changes and thereby avoids
-+ * spurious IPIs.
-+ */
-+static bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
-+}
-+
-+/*
-+ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
-+ *
-+ * If this returns true, then the idle task promises to call
-+ * sched_ttwu_pending() and reschedule soon.
-+ */
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	typeof(ti->flags) old, val = READ_ONCE(ti->flags);
-+
-+	for (;;) {
-+		if (!(val & _TIF_POLLING_NRFLAG))
-+			return false;
-+		if (val & _TIF_NEED_RESCHED)
-+			return true;
-+		old = cmpxchg(&ti->flags, val, val | _TIF_NEED_RESCHED);
-+		if (old == val)
-+			break;
-+		val = old;
-+	}
-+	return true;
-+}
-+
-+#else
-+static bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	set_tsk_need_resched(p);
-+	return true;
-+}
-+
-+#ifdef CONFIG_SMP
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+	return false;
-+}
-+#endif
-+#endif
-+
-+static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	struct wake_q_node *node = &task->wake_q;
-+
-+	/*
-+	 * Atomically grab the task, if ->wake_q is !nil already it means
-+	 * it's already queued (either by us or someone else) and will get the
-+	 * wakeup due to that.
-+	 *
-+	 * In order to ensure that a pending wakeup will observe our pending
-+	 * state, even in the failed case, an explicit smp_mb() must be used.
-+	 */
-+	smp_mb__before_atomic();
-+	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
-+		return false;
-+
-+	/*
-+	 * The head is context local, there can be no concurrency.
-+	 */
-+	*head->lastp = node;
-+	head->lastp = &node->next;
-+	return true;
-+}
-+
-+/**
-+ * wake_q_add() - queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ */
-+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (__wake_q_add(head, task))
-+		get_task_struct(task);
-+}
-+
-+/**
-+ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ *
-+ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
-+ * that already hold reference to @task can call the 'safe' version and trust
-+ * wake_q to do the right thing depending on whether or not the @task is already
-+ * queued for wakeup.
-+ */
-+void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (!__wake_q_add(head, task))
-+		put_task_struct(task);
-+}
-+
-+void wake_up_q(struct wake_q_head *head)
-+{
-+	struct wake_q_node *node = head->first;
-+
-+	while (node != WAKE_Q_TAIL) {
-+		struct task_struct *task;
-+
-+		task = container_of(node, struct task_struct, wake_q);
-+		/* task can safely be re-inserted now: */
-+		node = node->next;
-+		task->wake_q.next = NULL;
-+
-+		/*
-+		 * wake_up_process() executes a full barrier, which pairs with
-+		 * the queueing in wake_q_add() so as not to miss wakeups.
-+		 */
-+		wake_up_process(task);
-+		put_task_struct(task);
-+	}
-+}
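-+
-+/*
-+ * Typical wake_q pattern (sketch; DEFINE_WAKE_Q() comes from
-+ * include/linux/sched/wake_q.h):
-+ *
-+ *	DEFINE_WAKE_Q(wq);
-+ *	... while holding some lock: wake_q_add(&wq, p); ...
-+ *	... after dropping the lock: wake_up_q(&wq); ...
-+ */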
-+
-+/*
-+ * resched_curr - mark rq's current task 'to be rescheduled now'.
-+ *
-+ * On UP this means the setting of the need_resched flag, on SMP it
-+ * might also involve a cross-CPU call to trigger the scheduler on
-+ * the target CPU.
-+ */
-+void resched_curr(struct rq *rq)
-+{
-+	struct task_struct *curr = rq->curr;
-+	int cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	if (test_tsk_need_resched(curr))
-+		return;
-+
-+	cpu = cpu_of(rq);
-+	if (cpu == smp_processor_id()) {
-+		set_tsk_need_resched(curr);
-+		set_preempt_need_resched();
-+		return;
-+	}
-+
-+	if (set_nr_and_not_polling(curr))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+void resched_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (cpu_online(cpu) || cpu == smp_processor_id())
-+		resched_curr(cpu_rq(cpu));
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_NO_HZ_COMMON
-+void nohz_balance_enter_idle(int cpu) {}
-+
-+void select_nohz_load_balancer(int stop_tick) {}
-+
-+void set_cpu_sd_state_idle(void) {}
-+
-+/*
-+ * In the semi idle case, use the nearest busy CPU for migrating timers
-+ * from an idle CPU.  This is good for power-savings.
-+ *
-+ * We don't do a similar optimization for a completely idle system, as
-+ * selecting an idle CPU will add more delays to the timers than intended
-+ * (as that CPU's timer base may not be up to date wrt jiffies etc).
-+ */
-+int get_nohz_timer_target(void)
-+{
-+	int i, cpu = smp_processor_id(), default_cpu = -1;
-+	struct cpumask *mask;
-+
-+	if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
-+		if (!idle_cpu(cpu))
-+			return cpu;
-+		default_cpu = cpu;
-+	}
-+
-+	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
-+		for_each_cpu_and(i, mask, housekeeping_cpumask(HK_FLAG_TIMER))
-+			if (!idle_cpu(i))
-+				return i;
-+
-+	if (default_cpu == -1)
-+		default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
-+	cpu = default_cpu;
-+
-+	return cpu;
-+}
-+
-+/*
-+ * When add_timer_on() enqueues a timer into the timer wheel of an
-+ * idle CPU then this timer might expire before the next timer event
-+ * which is scheduled to wake up that CPU. In case of a completely
-+ * idle system the next event might even be infinite time into the
-+ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
-+ * leaves the inner idle loop so the newly added timer is taken into
-+ * account when the CPU goes back to idle and evaluates the timer
-+ * wheel for the next timer event.
-+ */
-+static inline void wake_up_idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (cpu == smp_processor_id())
-+		return;
-+
-+	if (set_nr_and_not_polling(rq->idle))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+static inline bool wake_up_full_nohz_cpu(int cpu)
-+{
-+	/*
-+	 * We just need the target to call irq_exit() and re-evaluate
-+	 * the next tick. The nohz full kick at least implies that.
-+	 * If needed we can still optimize that later with an
-+	 * empty IRQ.
-+	 */
-+	if (cpu_is_offline(cpu))
-+		return true;  /* Don't try to wake offline CPUs. */
-+	if (tick_nohz_full_cpu(cpu)) {
-+		if (cpu != smp_processor_id() ||
-+		    tick_nohz_tick_stopped())
-+			tick_nohz_full_kick_cpu(cpu);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_nohz_cpu(int cpu)
-+{
-+	if (!wake_up_full_nohz_cpu(cpu))
-+		wake_up_idle_cpu(cpu);
-+}
-+
-+static void nohz_csd_func(void *info)
-+{
-+	struct rq *rq = info;
-+	int cpu = cpu_of(rq);
-+	unsigned int flags;
-+
-+	/*
-+	 * Release the rq::nohz_csd.
-+	 */
-+	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
-+	WARN_ON(!(flags & NOHZ_KICK_MASK));
-+
-+	rq->idle_balance = idle_cpu(cpu);
-+	if (rq->idle_balance && !need_resched()) {
-+		rq->nohz_idle_balance = flags;
-+		raise_softirq_irqoff(SCHED_SOFTIRQ);
-+	}
-+}
-+
-+#endif /* CONFIG_NO_HZ_COMMON */
-+#endif /* CONFIG_SMP */
-+
-+static inline void check_preempt_curr(struct rq *rq)
-+{
-+	if (sched_rq_first_task(rq) != rq->curr)
-+		resched_curr(rq);
-+}
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+/*
-+ * Use HR-timers to deliver accurate preemption points.
-+ */
-+
-+static void hrtick_clear(struct rq *rq)
-+{
-+	if (hrtimer_active(&rq->hrtick_timer))
-+		hrtimer_cancel(&rq->hrtick_timer);
-+}
-+
-+/*
-+ * High-resolution timer tick.
-+ * Runs from hardirq context with interrupts disabled.
-+ */
-+static enum hrtimer_restart hrtick(struct hrtimer *timer)
-+{
-+	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
-+
-+	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
-+
-+	raw_spin_lock(&rq->lock);
-+	resched_curr(rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	return HRTIMER_NORESTART;
-+}
-+
-+/*
-+ * Use hrtick when:
-+ *  - enabled by features
-+ *  - hrtimer is actually high res
-+ */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	/**
-+	 * Alt schedule FW doesn't support sched_feat yet
-+	if (!sched_feat(HRTICK))
-+		return 0;
-+	*/
-+	if (!cpu_active(cpu_of(rq)))
-+		return 0;
-+	return hrtimer_is_hres_active(&rq->hrtick_timer);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void __hrtick_restart(struct rq *rq)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	ktime_t time = rq->hrtick_time;
-+
-+	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
-+}
-+
-+/*
-+ * called from hardirq (IPI) context
-+ */
-+static void __hrtick_start(void *arg)
-+{
-+	struct rq *rq = arg;
-+
-+	raw_spin_lock(&rq->lock);
-+	__hrtick_restart(rq);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	s64 delta;
-+
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense and can cause timer DoS.
-+	 */
-+	delta = max_t(s64, delay, 10000LL);
-+
-+	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
-+
-+	if (rq == this_rq())
-+		__hrtick_restart(rq);
-+	else
-+		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-+}
-+
-+#else
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense. Rely on vruntime for fairness.
-+	 */
-+	delay = max_t(u64, delay, 10000LL);
-+	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
-+		      HRTIMER_MODE_REL_PINNED_HARD);
-+}
-+#endif /* CONFIG_SMP */
-+
-+static void hrtick_rq_init(struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
-+#endif
-+
-+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
-+	rq->hrtick_timer.function = hrtick;
-+}
-+#else	/* CONFIG_SCHED_HRTICK */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	return 0;
-+}
-+
-+static inline void hrtick_clear(struct rq *rq)
-+{
-+}
-+
-+static inline void hrtick_rq_init(struct rq *rq)
-+{
-+}
-+#endif	/* CONFIG_SCHED_HRTICK */
-+
-+static inline int __normal_prio(int policy, int rt_prio, int static_prio)
-+{
-+	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
-+		static_prio + MAX_PRIORITY_ADJ;
-+}
-+
-+/*
-+ * Calculate the expected normal priority: i.e. priority
-+ * without taking RT-inheritance into account. Might be
-+ * boosted by interactivity modifiers. Changes upon fork,
-+ * setprio syscalls, and whenever the interactivity
-+ * estimator recalculates.
-+ */
-+static inline int normal_prio(struct task_struct *p)
-+{
-+	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
-+}
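-+
-+/*
-+ * Example (sketch; MAX_PRIORITY_ADJ is the BMQ/PDS boost range from the
-+ * headers above): an RT task with rt_priority 50 maps to
-+ * MAX_RT_PRIO - 1 - 50 = 49, while a nice-0 SCHED_NORMAL task maps to
-+ * 120 + MAX_PRIORITY_ADJ (the static_prio of nice 0 is 120).
-+ */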
-+
-+/*
-+ * Calculate the current priority, i.e. the priority
-+ * taken into account by the scheduler. This value might
-+ * be boosted by RT tasks as it will be RT if the task got
-+ * RT-boosted. If not then it returns p->normal_prio.
-+ */
-+static int effective_prio(struct task_struct *p)
-+{
-+	p->normal_prio = normal_prio(p);
-+	/*
-+	 * If we are RT tasks or we were boosted to RT priority,
-+	 * keep the priority unchanged. Otherwise, update priority
-+	 * to the normal priority:
-+	 */
-+	if (!rt_prio(p->prio))
-+		return p->normal_prio;
-+	return p->prio;
-+}
-+
-+/*
-+ * activate_task - move a task to the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static void activate_task(struct task_struct *p, struct rq *rq)
-+{
-+	enqueue_task(p, rq, ENQUEUE_WAKEUP);
-+	p->on_rq = TASK_ON_RQ_QUEUED;
-+
-+	/*
-+	 * If in_iowait is set, the code below may not trigger any cpufreq
-+	 * utilization updates, so do it here explicitly with the IOWAIT flag
-+	 * passed.
-+	 */
-+	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
-+}
-+
-+/*
-+ * deactivate_task - remove a task from the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static inline void deactivate_task(struct task_struct *p, struct rq *rq)
-+{
-+	dequeue_task(p, rq, DEQUEUE_SLEEP);
-+	p->on_rq = 0;
-+	cpufreq_update_util(rq, 0);
-+}
-+
-+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
-+	 * successfully executed on another CPU. We must ensure that updates of
-+	 * per-task data have been completed by this moment.
-+	 */
-+	smp_wmb();
-+
-+#ifdef CONFIG_THREAD_INFO_IN_TASK
-+	WRITE_ONCE(p->cpu, cpu);
-+#else
-+	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
-+#endif
-+#endif
-+}
-+
-+static inline bool is_migration_disabled(struct task_struct *p)
-+{
-+#ifdef CONFIG_SMP
-+	return p->migration_disabled;
-+#else
-+	return false;
-+#endif
-+}
-+
-+#define SCA_CHECK		0x01
-+
-+#ifdef CONFIG_SMP
-+
-+void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
-+{
-+#ifdef CONFIG_SCHED_DEBUG
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * We should never call set_task_cpu() on a blocked task,
-+	 * ttwu() will sort out the placement.
-+	 */
-+	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
-+
-+#ifdef CONFIG_LOCKDEP
-+	/*
-+	 * The caller should hold either p->pi_lock or rq->lock, when changing
-+	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
-+	 *
-+	 * sched_move_task() holds both and thus holding either pins the cgroup,
-+	 * see task_group().
-+	 */
-+	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
-+				      lockdep_is_held(&task_rq(p)->lock)));
-+#endif
-+	/*
-+	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
-+	 */
-+	WARN_ON_ONCE(!cpu_online(new_cpu));
-+
-+	WARN_ON_ONCE(is_migration_disabled(p));
-+#endif
-+	if (task_cpu(p) == new_cpu)
-+		return;
-+	trace_sched_migrate_task(p, new_cpu);
-+	rseq_migrate(p);
-+	perf_event_task_migrate(p);
-+
-+	__set_task_cpu(p, new_cpu);
-+}
-+
-+#define MDF_FORCE_ENABLED	0x80
-+
-+static void
-+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	/*
-+	 * This here violates the locking rules for affinity, since we're only
-+	 * supposed to change these variables while holding both rq->lock and
-+	 * p->pi_lock.
-+	 *
-+	 * HOWEVER, it magically works, because ttwu() is the only code that
-+	 * accesses these variables under p->pi_lock and only does so after
-+	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
-+	 * before finish_task().
-+	 *
-+	 * XXX do further audits, this smells like something putrid.
-+	 */
-+	SCHED_WARN_ON(!p->on_cpu);
-+	p->cpus_ptr = new_mask;
-+}
-+
-+void migrate_disable(void)
-+{
-+	struct task_struct *p = current;
-+	int cpu;
-+
-+	if (p->migration_disabled) {
-+		p->migration_disabled++;
-+		return;
-+	}
-+
-+	preempt_disable();
-+	cpu = smp_processor_id();
-+	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
-+		cpu_rq(cpu)->nr_pinned++;
-+		p->migration_disabled = 1;
-+		p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
-+		/*
-+		 * Violates locking rules! see comment in __do_set_cpus_ptr().
-+		 */
-+		if (p->cpus_ptr == &p->cpus_mask)
-+			__do_set_cpus_ptr(p, cpumask_of(cpu));
-+	}
-+	preempt_enable();
-+}
-+EXPORT_SYMBOL_GPL(migrate_disable);
-+
-+void migrate_enable(void)
-+{
-+	struct task_struct *p = current;
-+
-+	if (0 == p->migration_disabled)
-+		return;
-+
-+	if (p->migration_disabled > 1) {
-+		p->migration_disabled--;
-+		return;
-+	}
-+
-+	/*
-+	 * Ensure stop_task runs either before or after this, and that
-+	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
-+	 */
-+	preempt_disable();
-+	/*
-+	 * Assumption: current should be running on allowed cpu
-+	 */
-+	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
-+	if (p->cpus_ptr != &p->cpus_mask)
-+		__do_set_cpus_ptr(p, &p->cpus_mask);
-+	/*
-+	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
-+	 * regular cpus_mask, otherwise things that race (eg.
-+	 * select_fallback_rq) get confused.
-+	 */
-+	barrier();
-+	p->migration_disabled = 0;
-+	this_rq()->nr_pinned--;
-+	preempt_enable();
-+}
-+EXPORT_SYMBOL_GPL(migrate_enable);
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return rq->nr_pinned;
-+}
-+
-+/*
-+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
-+ * __set_cpus_allowed_ptr() and select_fallback_rq().
-+ */
-+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-+{
-+	/* When not in the task's cpumask, no point in looking further. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/* migrate_disabled() must be allowed to finish. */
-+	if (is_migration_disabled(p))
-+		return cpu_online(cpu);
-+
-+	/* Non-kernel threads are not allowed during either online or offline transitions. */
-+	if (!(p->flags & PF_KTHREAD))
-+		return cpu_active(cpu);
-+
-+	/* KTHREAD_IS_PER_CPU is always allowed. */
-+	if (kthread_is_per_cpu(p))
-+		return cpu_online(cpu);
-+
-+	/* Regular kernel threads don't get to stay during offline. */
-+	if (cpu_dying(cpu))
-+		return false;
-+
-+	/* But are allowed during online. */
-+	return cpu_online(cpu);
-+}
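-+
-+/*
-+ * Decision summary of the checks above (sketch): user tasks need
-+ * cpu_active(); migration-disabled tasks and KTHREAD_IS_PER_CPU kthreads
-+ * only need cpu_online(); any other kthread may use an online CPU unless
-+ * that CPU is already dying.
-+ */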
-+
-+/*
-+ * This is how migration works:
-+ *
-+ * 1) we invoke migration_cpu_stop() on the target CPU using
-+ *    stop_one_cpu().
-+ * 2) stopper starts to run (implicitly forcing the migrated thread
-+ *    off the CPU)
-+ * 3) it checks whether the migrated task is still in the wrong runqueue.
-+ * 4) if it's in the wrong runqueue then the migration thread removes
-+ *    it and puts it into the right queue.
-+ * 5) stopper completes and stop_one_cpu() returns and the migration
-+ *    is done.
-+ */
-+
-+/*
-+ * move_queued_task - move a queued task to new rq.
-+ *
-+ * Returns (locked) new rq. Old rq's lock is released.
-+ */
-+static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
-+				   new_cpu)
-+{
-+	lockdep_assert_held(&rq->lock);
-+
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
-+	dequeue_task(p, rq, 0);
-+	set_task_cpu(p, new_cpu);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rq = cpu_rq(new_cpu);
-+
-+	raw_spin_lock(&rq->lock);
-+	BUG_ON(task_cpu(p) != new_cpu);
-+	sched_task_sanity_check(p, rq);
-+	enqueue_task(p, rq, 0);
-+	p->on_rq = TASK_ON_RQ_QUEUED;
-+	check_preempt_curr(rq);
-+
-+	return rq;
-+}
-+
-+struct migration_arg {
-+	struct task_struct *task;
-+	int dest_cpu;
-+};
-+
-+/*
-+ * Move a (non-current) task off this CPU, onto the destination CPU. We're doing
-+ * this because either it can't run here any more (set_cpus_allowed()
-+ * away from this CPU, or CPU going down), or because we're
-+ * attempting to rebalance this task on exec (sched_exec).
-+ *
-+ * So we race with normal scheduler movements, but that's OK, as long
-+ * as the task is no longer on this CPU.
-+ */
-+static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
-+				 dest_cpu)
-+{
-+	/* Affinity changed (again). */
-+	if (!is_cpu_allowed(p, dest_cpu))
-+		return rq;
-+
-+	update_rq_clock(rq);
-+	return move_queued_task(rq, p, dest_cpu);
-+}
-+
-+/*
-+ * migration_cpu_stop - this will be executed by a highprio stopper thread
-+ * and performs thread migration by bumping the thread off the CPU and
-+ * then 'pushing' it onto another runqueue.
-+ */
-+static int migration_cpu_stop(void *data)
-+{
-+	struct migration_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+
-+	/*
-+	 * The original target CPU might have gone down and we might
-+	 * be on another CPU but it doesn't matter.
-+	 */
-+	local_irq_save(flags);
-+	/*
-+	 * We need to explicitly wake pending tasks before running
-+	 * __migrate_task() such that we will not miss enforcing cpus_ptr
-+	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
-+	 */
-+	flush_smp_call_function_from_idle();
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+	/*
-+	 * If task_rq(p) != rq, it cannot be migrated here, because we're
-+	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
-+	 * we're holding p->pi_lock.
-+	 */
-+	if (task_rq(p) == rq && task_on_rq_queued(p))
-+		rq = __migrate_task(rq, p, arg->dest_cpu);
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+static inline void
-+set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	cpumask_copy(&p->cpus_mask, new_mask);
-+	p->nr_cpus_allowed = cpumask_weight(new_mask);
-+}
-+
-+static void
-+__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	lockdep_assert_held(&p->pi_lock);
-+	set_cpus_allowed_common(p, new_mask);
-+}
-+
-+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	__do_set_cpus_allowed(p, new_mask);
-+}
-+
-+#endif
-+
-+/**
-+ * task_curr - is this task currently executing on a CPU?
-+ * @p: the task in question.
-+ *
-+ * Return: 1 if the task is currently executing. 0 otherwise.
-+ */
-+inline int task_curr(const struct task_struct *p)
-+{
-+	return cpu_curr(task_cpu(p)) == p;
-+}
-+
-+#ifdef CONFIG_SMP
-+/*
-+ * wait_task_inactive - wait for a thread to unschedule.
-+ *
-+ * If @match_state is nonzero, it's the @p->state value just checked and
-+ * not expected to change.  If it changes, i.e. @p might have woken up,
-+ * then return zero.  When we succeed in waiting for @p to be off its CPU,
-+ * we return a positive number (its total switch count).  If a second call
-+ * a short while later returns the same number, the caller can be sure that
-+ * @p has remained unscheduled the whole time.
-+ *
-+ * The caller must ensure that the task *will* unschedule sometime soon,
-+ * else this function might spin for a *long* time. This function can't
-+ * be called with interrupts off, or it may introduce deadlock with
-+ * smp_call_function() if an IPI is sent by the same process we are
-+ * waiting to become inactive.
-+ */
-+unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
-+{
-+	unsigned long flags;
-+	bool running, on_rq;
-+	unsigned long ncsw;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+
-+		/*
-+		 * If the task is actively running on another CPU
-+		 * still, just relax and busy-wait without holding
-+		 * any locks.
-+		 *
-+		 * NOTE! Since we don't hold any locks, it's not
-+		 * even sure that "rq" stays as the right runqueue!
-+		 * But we don't care, since this will return false
-+		 * if the runqueue has changed and p is actually now
-+		 * running somewhere else!
-+		 */
-+		while (task_running(p) && p == rq->curr) {
-+			if (match_state && unlikely(READ_ONCE(p->__state) != match_state))
-+				return 0;
-+			cpu_relax();
-+		}
-+
-+		/*
-+		 * Ok, time to look more closely! We need the rq
-+		 * lock now, to be *sure*. If we're wrong, we'll
-+		 * just go back and repeat.
-+		 */
-+		task_access_lock_irqsave(p, &lock, &flags);
-+		trace_sched_wait_task(p);
-+		running = task_running(p);
-+		on_rq = p->on_rq;
-+		ncsw = 0;
-+		if (!match_state || READ_ONCE(p->__state) == match_state)
-+			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
-+		task_access_unlock_irqrestore(p, lock, &flags);
-+
-+		/*
-+		 * If it changed from the expected state, bail out now.
-+		 */
-+		if (unlikely(!ncsw))
-+			break;
-+
-+		/*
-+		 * Was it really running after all now that we
-+		 * checked with the proper locks actually held?
-+		 *
-+		 * Oops. Go back and try again..
-+		 */
-+		if (unlikely(running)) {
-+			cpu_relax();
-+			continue;
-+		}
-+
-+		/*
-+		 * It's not enough that it's not actively running,
-+		 * it must be off the runqueue _entirely_, and not
-+		 * preempted!
-+		 *
-+		 * So if it was still runnable (but just not actively
-+		 * running right now), it's preempted, and we should
-+		 * yield - it could be a while.
-+		 */
-+		if (unlikely(on_rq)) {
-+			ktime_t to = NSEC_PER_SEC / HZ;
-+
-+			set_current_state(TASK_UNINTERRUPTIBLE);
-+			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
-+			continue;
-+		}
-+
-+		/*
-+		 * Ahh, all good. It wasn't running, and it wasn't
-+		 * runnable, which means that it will never become
-+		 * running in the future either. We're all done!
-+		 */
-+		break;
-+	}
-+
-+	return ncsw;
-+}
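-+
-+/*
-+ * Caller-side sketch of the switch-count contract documented above
-+ * (illustrative only):
-+ *
-+ *	unsigned long ncsw = wait_task_inactive(p, __TASK_TRACED);
-+ *	...
-+ *	if (ncsw && wait_task_inactive(p, __TASK_TRACED) == ncsw)
-+ *		;	// p never ran in between
-+ */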
-+
-+/**
-+ * kick_process - kick a running thread to enter/exit the kernel
-+ * @p: the to-be-kicked thread
-+ *
-+ * Cause a process which is running on another CPU to enter
-+ * kernel-mode, without any delay. (to get signals handled.)
-+ *
-+ * NOTE: this function doesn't have to take the runqueue lock,
-+ * because all it wants to ensure is that the remote task enters
-+ * the kernel. If the IPI races and the task has been migrated
-+ * to another CPU then no harm is done and the purpose has been
-+ * achieved as well.
-+ */
-+void kick_process(struct task_struct *p)
-+{
-+	int cpu;
-+
-+	preempt_disable();
-+	cpu = task_cpu(p);
-+	if ((cpu != smp_processor_id()) && task_curr(p))
-+		smp_send_reschedule(cpu);
-+	preempt_enable();
-+}
-+EXPORT_SYMBOL_GPL(kick_process);
-+
-+/*
-+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
-+ *
-+ * A few notes on cpu_active vs cpu_online:
-+ *
-+ *  - cpu_active must be a subset of cpu_online
-+ *
-+ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
-+ *    see __set_cpus_allowed_ptr(). At this point the newly online
-+ *    CPU isn't yet part of the sched domains, and balancing will not
-+ *    see it.
-+ *
-+ *  - on cpu-down we clear cpu_active() to mask the sched domains and
-+ *    keep the load balancer from placing new tasks on the to-be-removed
-+ *    CPU. Existing tasks will remain running there and will be taken
-+ *    off.
-+ *
-+ * This means that fallback selection must not select !active CPUs.
-+ * And can assume that any active CPU must be online. Conversely
-+ * select_task_rq() below may allow selection of !active CPUs in order
-+ * to satisfy the above rules.
-+ */
-+static int select_fallback_rq(int cpu, struct task_struct *p)
-+{
-+	int nid = cpu_to_node(cpu);
-+	const struct cpumask *nodemask = NULL;
-+	enum { cpuset, possible, fail } state = cpuset;
-+	int dest_cpu;
-+
-+	/*
-+	 * If the node that the CPU is on has been offlined, cpu_to_node()
-+	 * will return -1. There is no CPU on the node, and we should
-+	 * select a CPU on another node.
-+	 */
-+	if (nid != -1) {
-+		nodemask = cpumask_of_node(nid);
-+
-+		/* Look for allowed, online CPU in same node. */
-+		for_each_cpu(dest_cpu, nodemask) {
-+			if (!cpu_active(dest_cpu))
-+				continue;
-+			if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
-+				return dest_cpu;
-+		}
-+	}
-+
-+	for (;;) {
-+		/* Any allowed, online CPU? */
-+		for_each_cpu(dest_cpu, p->cpus_ptr) {
-+			if (!is_cpu_allowed(p, dest_cpu))
-+				continue;
-+			goto out;
-+		}
-+
-+		/* No more Mr. Nice Guy. */
-+		switch (state) {
-+		case cpuset:
-+			if (IS_ENABLED(CONFIG_CPUSETS)) {
-+				cpuset_cpus_allowed_fallback(p);
-+				state = possible;
-+				break;
-+			}
-+			fallthrough;
-+		case possible:
-+			/*
-+			 * XXX When called from select_task_rq() we only
-+			 * hold p->pi_lock and again violate locking order.
-+			 *
-+			 * More yuck to audit.
-+			 */
-+			do_set_cpus_allowed(p, cpu_possible_mask);
-+			state = fail;
-+			break;
-+
-+		case fail:
-+			BUG();
-+			break;
-+		}
-+	}
-+
-+out:
-+	if (state != cpuset) {
-+		/*
-+		 * Don't tell them about moving exiting tasks or
-+		 * kernel threads (both mm NULL), since they never
-+		 * leave kernel.
-+		 */
-+		if (p->mm && printk_ratelimit()) {
-+			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
-+					task_pid_nr(p), p->comm, cpu);
-+		}
-+	}
-+
-+	return dest_cpu;
-+}
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	cpumask_t chk_mask, tmp;
-+
-+	if (unlikely(!cpumask_and(&chk_mask, p->cpus_ptr, cpu_active_mask)))
-+		return select_fallback_rq(task_cpu(p), p);
-+
-+	if (
-+#ifdef CONFIG_SCHED_SMT
-+	    cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
-+#endif
-+	    cpumask_and(&tmp, &chk_mask, sched_rq_watermark) ||
-+	    cpumask_and(&tmp, &chk_mask,
-+			sched_rq_watermark + SCHED_BITS - task_sched_prio(p)))
-+		return best_mask_cpu(task_cpu(p), &tmp);
-+
-+	return best_mask_cpu(task_cpu(p), &chk_mask);
-+}
-+
-+void sched_set_stop_task(int cpu, struct task_struct *stop)
-+{
-+	static struct lock_class_key stop_pi_lock;
-+	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
-+	struct sched_param start_param = { .sched_priority = 0 };
-+	struct task_struct *old_stop = cpu_rq(cpu)->stop;
-+
-+	if (stop) {
-+		/*
-+		 * Make it appear like a SCHED_FIFO task; it's something
-+		 * userspace knows about and won't get confused about.
-+		 *
-+		 * Also, it will make PI more or less work without too
-+		 * much confusion -- but then, stop work should not
-+		 * rely on PI working anyway.
-+		 */
-+		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
-+
-+		/*
-+		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
-+		 * adjust the effective priority of a task. As a result,
-+		 * rt_mutex_setprio() can trigger (RT) balancing operations,
-+		 * which can then trigger wakeups of the stop thread to push
-+		 * around the current task.
-+		 *
-+		 * The stop task itself will never be part of the PI-chain, it
-+		 * never blocks, therefore that ->pi_lock recursion is safe.
-+		 * Tell lockdep about this by placing the stop->pi_lock in its
-+		 * own class.
-+		 */
-+		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
-+	}
-+
-+	cpu_rq(cpu)->stop = stop;
-+
-+	if (old_stop) {
-+		/*
-+		 * Reset it back to a normal scheduling policy so that
-+		 * it can die in pieces.
-+		 */
-+		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
-+	}
-+}
-+
-+/*
-+ * Change a given task's CPU affinity. Migrate the thread to a
-+ * proper CPU and schedule it away if the CPU it's executing on
-+ * is removed from the allowed bitmask.
-+ *
-+ * NOTE: the caller must have a valid reference to the task, the
-+ * task must not exit() & deallocate itself prematurely. The
-+ * call is not atomic; no spinlocks may be held.
-+ */
-+static int __set_cpus_allowed_ptr(struct task_struct *p,
-+				  const struct cpumask *new_mask,
-+				  u32 flags)
-+{
-+	const struct cpumask *cpu_valid_mask = cpu_active_mask;
-+	int dest_cpu;
-+	unsigned long irq_flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	int ret = 0;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
-+		/*
-+		 * Kernel threads are allowed on online && !active CPUs,
-+		 * however, during cpu-hot-unplug, even these might get pushed
-+		 * away if not KTHREAD_IS_PER_CPU.
-+		 *
-+		 * Specifically, migration_disabled() tasks must not fail the
-+		 * cpumask_any_and_distribute() pick below, esp. so on
-+		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
-+		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
-+		 */
-+		cpu_valid_mask = cpu_online_mask;
-+	}
-+
-+	/*
-+	 * Must re-check here, to close a race against __kthread_bind(),
-+	 * sched_setaffinity() is not guaranteed to observe the flag.
-+	 */
-+	if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	if (cpumask_equal(&p->cpus_mask, new_mask))
-+		goto out;
-+
-+	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
-+	if (dest_cpu >= nr_cpu_ids) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	__do_set_cpus_allowed(p, new_mask);
-+
-+	/* Can the task run on the task's current CPU? If so, we're done */
-+	if (cpumask_test_cpu(task_cpu(p), new_mask))
-+		goto out;
-+
-+	if (p->migration_disabled) {
-+		if (likely(p->cpus_ptr != &p->cpus_mask))
-+			__do_set_cpus_ptr(p, &p->cpus_mask);
-+		p->migration_disabled = 0;
-+		p->migration_flags |= MDF_FORCE_ENABLED;
-+		/* When p is migrate_disabled, rq->lock should be held */
-+		rq->nr_pinned--;
-+	}
-+
-+	if (task_running(p) || READ_ONCE(p->__state) == TASK_WAKING) {
-+		struct migration_arg arg = { p, dest_cpu };
-+
-+		/* Need help from migration thread: drop lock and wait. */
-+		__task_access_unlock(p, lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+		return 0;
-+	}
-+	if (task_on_rq_queued(p)) {
-+		/*
-+		 * OK, since we're going to drop the lock immediately
-+		 * afterwards anyway.
-+		 */
-+		update_rq_clock(rq);
-+		rq = move_queued_task(rq, p, dest_cpu);
-+		lock = &rq->lock;
-+	}
-+
-+out:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+
-+	return ret;
-+}
-+
-+int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	return __set_cpus_allowed_ptr(p, new_mask, 0);
-+}
-+EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
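-+
-+/*
-+ * Usage sketch (illustrative): pin a task to a single CPU with the
-+ * exported helper; cpumask_of() is from include/linux/cpumask.h:
-+ *
-+ *	int ret = set_cpus_allowed_ptr(p, cpumask_of(2));
-+ */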
-+
-+#else /* CONFIG_SMP */
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+static inline int
-+__set_cpus_allowed_ptr(struct task_struct *p,
-+		       const struct cpumask *new_mask,
-+		       u32 flags)
-+{
-+	return set_cpus_allowed_ptr(p, new_mask);
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return false;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+static void
-+ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq;
-+
-+	if (!schedstat_enabled())
-+		return;
-+
-+	rq = this_rq();
-+
-+#ifdef CONFIG_SMP
-+	if (cpu == rq->cpu)
-+		__schedstat_inc(rq->ttwu_local);
-+	else {
-+		/** Alt schedule FW ToDo:
-+		 * How to do ttwu_wake_remote
-+		 */
-+	}
-+#endif /* CONFIG_SMP */
-+
-+	__schedstat_inc(rq->ttwu_count);
-+}
-+
-+/*
-+ * Mark the task runnable and perform wakeup-preemption.
-+ */
-+static inline void
-+ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+	check_preempt_curr(rq);
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	trace_sched_wakeup(p);
-+}
-+
-+static inline void
-+ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+	if (p->sched_contributes_to_load)
-+		rq->nr_uninterruptible--;
-+
-+	if (
-+#ifdef CONFIG_SMP
-+	    !(wake_flags & WF_MIGRATED) &&
-+#endif
-+	    p->in_iowait) {
-+		delayacct_blkio_end(p);
-+		atomic_dec(&task_rq(p)->nr_iowait);
-+	}
-+
-+	activate_task(p, rq);
-+	ttwu_do_wakeup(rq, p, 0);
-+}
-+
-+/*
-+ * Consider @p being inside a wait loop:
-+ *
-+ *   for (;;) {
-+ *      set_current_state(TASK_UNINTERRUPTIBLE);
-+ *
-+ *      if (CONDITION)
-+ *         break;
-+ *
-+ *      schedule();
-+ *   }
-+ *   __set_current_state(TASK_RUNNING);
-+ *
-+ * between set_current_state() and schedule(). In this case @p is still
-+ * runnable, so all that needs doing is to change p->state back to TASK_RUNNING in
-+ * an atomic manner.
-+ *
-+ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
-+ * then schedule() must still happen and p->state can be changed to
-+ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
-+ * need to do a full wakeup with enqueue.
-+ *
-+ * Returns: %true when the wakeup is done,
-+ *          %false otherwise.
-+ */
-+static int ttwu_runnable(struct task_struct *p, int wake_flags)
-+{
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	int ret = 0;
-+
-+	rq = __task_access_lock(p, &lock);
-+	if (task_on_rq_queued(p)) {
-+		/* check_preempt_curr() may use rq clock */
-+		update_rq_clock(rq);
-+		ttwu_do_wakeup(rq, p, wake_flags);
-+		ret = 1;
-+	}
-+	__task_access_unlock(p, lock);
-+
-+	return ret;
-+}
-+
-+#ifdef CONFIG_SMP
-+void sched_ttwu_pending(void *arg)
-+{
-+	struct llist_node *llist = arg;
-+	struct rq *rq = this_rq();
-+	struct task_struct *p, *t;
-+	struct rq_flags rf;
-+
-+	if (!llist)
-+		return;
-+
-+	/*
-+	 * rq::ttwu_pending is a racy indication of outstanding wakeups.
-+	 * Races such that false-negatives are possible, since they
-+	 * are shorter lived than false-positives would be.
-+	 */
-+	WRITE_ONCE(rq->ttwu_pending, 0);
-+
-+	rq_lock_irqsave(rq, &rf);
-+	update_rq_clock(rq);
-+
-+	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
-+		if (WARN_ON_ONCE(p->on_cpu))
-+			smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
-+			set_task_cpu(p, cpu_of(rq));
-+
-+		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
-+	}
-+
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+void send_call_function_single_ipi(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (!set_nr_if_polling(rq->idle))
-+		arch_send_call_function_single_ipi(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+/*
-+ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
-+ * necessary. The wakee CPU, on receipt of the IPI, will queue the task
-+ * via sched_ttwu_pending() for activation so the wakee incurs the cost
-+ * of the wakeup instead of the waker.
-+ */
-+static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
-+
-+	WRITE_ONCE(rq->ttwu_pending, 1);
-+	__smp_call_single_queue(cpu, &p->wake_entry.llist);
-+}
-+
-+static inline bool ttwu_queue_cond(int cpu, int wake_flags)
-+{
-+	/*
-+	 * Do not complicate things with the async wake_list while the CPU is
-+	 * in hotplug state.
-+	 */
-+	if (!cpu_active(cpu))
-+		return false;
-+
-+	/*
-+	 * If the CPU does not share cache, then queue the task on the
-+	 * remote rq's wakelist to avoid accessing remote data.
-+	 */
-+	if (!cpus_share_cache(smp_processor_id(), cpu))
-+		return true;
-+
-+	/*
-+	 * If the task is descheduling and the only running task on the
-+	 * CPU then use the wakelist to offload the task activation to
-+	 * the soon-to-be-idle CPU as the current CPU is likely busy.
-+	 * nr_running is checked to avoid unnecessary task stacking.
-+	 */
-+	if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
-+		return true;
-+
-+	return false;
-+}
-+
-+static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
-+		if (WARN_ON_ONCE(cpu == smp_processor_id()))
-+			return false;
-+
-+		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
-+		__ttwu_queue_wakelist(p, cpu, wake_flags);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_if_idle(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	rcu_read_lock();
-+
-+	if (!is_idle_task(rcu_dereference(rq->curr)))
-+		goto out;
-+
-+	if (set_nr_if_polling(rq->idle)) {
-+		trace_sched_wake_idle_without_ipi(cpu);
-+	} else {
-+		raw_spin_lock_irqsave(&rq->lock, flags);
-+		if (is_idle_task(rq->curr))
-+			smp_send_reschedule(cpu);
-+		/* Else CPU is not idle, do nothing here */
-+		raw_spin_unlock_irqrestore(&rq->lock, flags);
-+	}
-+
-+out:
-+	rcu_read_unlock();
-+}
-+
-+bool cpus_share_cache(int this_cpu, int that_cpu)
-+{
-+	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-+}
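-+
-+/*
-+ * Example: two SMT siblings, or two cores under one shared L3, report
-+ * the same sd_llc_id and thus share cache, so a wakeup between them
-+ * skips the remote wakelist path in ttwu_queue_cond() above.
-+ */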
-+#else /* !CONFIG_SMP */
-+
-+static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	return false;
-+}
-+
-+#endif /* CONFIG_SMP */
-+
-+static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (ttwu_queue_wakelist(p, cpu, wake_flags))
-+		return;
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+	ttwu_do_activate(rq, p, wake_flags);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Notes on Program-Order guarantees on SMP systems.
-+ *
-+ *  MIGRATION
-+ *
-+ * The basic program-order guarantee on SMP systems is that when a task [t]
-+ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
-+ * execution on its new CPU [c1].
-+ *
-+ * For migration (of runnable tasks) this is provided by the following means:
-+ *
-+ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
-+ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
-+ *     rq(c1)->lock (if not at the same time, then in that order).
-+ *  C) LOCK of the rq(c1)->lock scheduling in task
-+ *
-+ * Transitivity guarantees that B happens after A and C after B.
-+ * Note: we only require RCpc transitivity.
-+ * Note: the CPU doing B need not be c0 or c1
-+ *
-+ * Example:
-+ *
-+ *   CPU0            CPU1            CPU2
-+ *
-+ *   LOCK rq(0)->lock
-+ *   sched-out X
-+ *   sched-in Y
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(0)->lock // orders against CPU0
-+ *                                   dequeue X
-+ *                                   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(1)->lock
-+ *                                   enqueue X
-+ *                                   UNLOCK rq(1)->lock
-+ *
-+ *                   LOCK rq(1)->lock // orders against CPU2
-+ *                   sched-out Z
-+ *                   sched-in X
-+ *                   UNLOCK rq(1)->lock
-+ *
-+ *
-+ *  BLOCKING -- aka. SLEEP + WAKEUP
-+ *
-+ * For blocking we (obviously) need to provide the same guarantee as for
-+ * migration. However the means are completely different as there is no lock
-+ * chain to provide order. Instead we do:
-+ *
-+ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
-+ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
-+ *
-+ * Example:
-+ *
-+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
-+ *
-+ *   LOCK rq(0)->lock LOCK X->pi_lock
-+ *   dequeue X
-+ *   sched-out X
-+ *   smp_store_release(X->on_cpu, 0);
-+ *
-+ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
-+ *                    X->state = WAKING
-+ *                    set_task_cpu(X,2)
-+ *
-+ *                    LOCK rq(2)->lock
-+ *                    enqueue X
-+ *                    X->state = RUNNING
-+ *                    UNLOCK rq(2)->lock
-+ *
-+ *                                          LOCK rq(2)->lock // orders against CPU1
-+ *                                          sched-out Z
-+ *                                          sched-in X
-+ *                                          UNLOCK rq(2)->lock
-+ *
-+ *                    UNLOCK X->pi_lock
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *
-+ * However, for wakeups there is a second guarantee we must provide, namely we
-+ * must observe the state that led to our wakeup. That is, not only must our
-+ * task observe its own prior state, it must also observe the stores prior to
-+ * its wakeup.
-+ *
-+ * This means that any means of doing remote wakeups must order the CPU doing
-+ * the wakeup against the CPU the task is going to end up running on. This,
-+ * however, is already required for the regular Program-Order guarantee above,
-+ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
-+ *
-+ */
-+
-+/**
-+ * try_to_wake_up - wake up a thread
-+ * @p: the thread to be awakened
-+ * @state: the mask of task states that can be woken
-+ * @wake_flags: wake modifier flags (WF_*)
-+ *
-+ * Conceptually does:
-+ *
-+ *   If (@state & @p->state) @p->state = TASK_RUNNING.
-+ *
-+ * If the task was not queued/runnable, also place it back on a runqueue.
-+ *
-+ * This function is atomic against schedule() which would dequeue the task.
-+ *
-+ * It issues a full memory barrier before accessing @p->state, see the comment
-+ * with set_current_state().
-+ *
-+ * Uses p->pi_lock to serialize against concurrent wake-ups.
-+ *
-+ * Relies on p->pi_lock stabilizing:
-+ *  - p->sched_class
-+ *  - p->cpus_ptr
-+ *  - p->sched_task_group
-+ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
-+ *
-+ * Tries really hard to only take one task_rq(p)->lock for performance.
-+ * Takes rq->lock in:
-+ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
-+ *  - ttwu_queue()       -- new rq, for enqueue of the task;
-+ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
-+ *
-+ * As a consequence we race really badly with just about everything. See the
-+ * many memory barriers and their comments for details.
-+ *
-+ * Return: %true if @p->state changes (an actual wakeup was done),
-+ *	   %false otherwise.
-+ */
-+static int try_to_wake_up(struct task_struct *p, unsigned int state,
-+			  int wake_flags)
-+{
-+	unsigned long flags;
-+	int cpu, success = 0;
-+
-+	preempt_disable();
-+	if (p == current) {
-+		/*
-+		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
-+		 * == smp_processor_id()'. Together this means we can special
-+		 * case the whole 'p->on_rq && ttwu_runnable()' case below
-+		 * without taking any locks.
-+		 *
-+		 * In particular:
-+		 *  - we rely on Program-Order guarantees for all the ordering,
-+		 *  - we're serialized against set_special_state() by virtue of
-+		 *    it disabling IRQs (this allows not taking ->pi_lock).
-+		 */
-+		if (!(READ_ONCE(p->__state) & state))
-+			goto out;
-+
-+		success = 1;
-+		trace_sched_waking(p);
-+		WRITE_ONCE(p->__state, TASK_RUNNING);
-+		trace_sched_wakeup(p);
-+		goto out;
-+	}
-+
-+	/*
-+	 * If we are going to wake up a thread waiting for CONDITION we
-+	 * need to ensure that CONDITION=1 done by the caller can not be
-+	 * reordered with p->state check below. This pairs with smp_store_mb()
-+	 * in set_current_state() that the waiting thread does.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	smp_mb__after_spinlock();
-+	if (!(READ_ONCE(p->__state) & state))
-+		goto unlock;
-+
-+	trace_sched_waking(p);
-+
-+	/* We're going to change ->state: */
-+	success = 1;
-+
-+	/*
-+	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
-+	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
-+	 * in smp_cond_load_acquire() below.
-+	 *
-+	 * sched_ttwu_pending()			try_to_wake_up()
-+	 *   STORE p->on_rq = 1			  LOAD p->state
-+	 *   UNLOCK rq->lock
-+	 *
-+	 * __schedule() (switch to task 'p')
-+	 *   LOCK rq->lock			  smp_rmb();
-+	 *   smp_mb__after_spinlock();
-+	 *   UNLOCK rq->lock
-+	 *
-+	 * [task p]
-+	 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
-+	 *
-+	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+	 * __schedule().  See the comment for smp_mb__after_spinlock().
-+	 *
-+	 * A similar smp_rmb() lives in try_invoke_on_locked_down_task().
-+	 */
-+	smp_rmb();
-+	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-+		goto unlock;
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-+	 * possible to, falsely, observe p->on_cpu == 0.
-+	 *
-+	 * One must be running (->on_cpu == 1) in order to remove oneself
-+	 * from the runqueue.
-+	 *
-+	 * __schedule() (switch to task 'p')	try_to_wake_up()
-+	 *   STORE p->on_cpu = 1		  LOAD p->on_rq
-+	 *   UNLOCK rq->lock
-+	 *
-+	 * __schedule() (put 'p' to sleep)
-+	 *   LOCK rq->lock			  smp_rmb();
-+	 *   smp_mb__after_spinlock();
-+	 *   STORE p->on_rq = 0			  LOAD p->on_cpu
-+	 *
-+	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+	 * __schedule().  See the comment for smp_mb__after_spinlock().
-+	 *
-+	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-+	 * schedule()'s deactivate_task() has 'happened' and p will no longer
-+	 * care about its own p->state. See the comment in __schedule().
-+	 */
-+	smp_acquire__after_ctrl_dep();
-+
-+	/*
-+	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-+	 * == 0), which means we need to do an enqueue, change p->state to
-+	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
-+	 * enqueue, such as ttwu_queue_wakelist().
-+	 */
-+	WRITE_ONCE(p->__state, TASK_WAKING);
-+
-+	/*
-+	 * If the owning (remote) CPU is still in the middle of schedule() with
-+	 * this task as prev, consider queueing p on the remote CPU's wake_list
-+	 * which potentially sends an IPI instead of spinning on p->on_cpu to
-+	 * let the waker make forward progress. This is safe because IRQs are
-+	 * disabled and the IPI will deliver after on_cpu is cleared.
-+	 *
-+	 * Ensure we load task_cpu(p) after p->on_cpu:
-+	 *
-+	 * set_task_cpu(p, cpu);
-+	 *   STORE p->cpu = @cpu
-+	 * __schedule() (switch to task 'p')
-+	 *   LOCK rq->lock
-+	 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
-+	 *   STORE p->on_cpu = 1                LOAD p->cpu
-+	 *
-+	 * to ensure we observe the correct CPU on which the task is currently
-+	 * scheduling.
-+	 */
-+	if (smp_load_acquire(&p->on_cpu) &&
-+	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
-+		goto unlock;
-+
-+	/*
-+	 * If the owning (remote) CPU is still in the middle of schedule() with
-+	 * this task as prev, wait until it's done referencing the task.
-+	 *
-+	 * Pairs with the smp_store_release() in finish_task().
-+	 *
-+	 * This ensures that tasks getting woken will be fully ordered against
-+	 * their previous state and preserve Program Order.
-+	 */
-+	smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+	sched_task_ttwu(p);
-+
-+	cpu = select_task_rq(p);
-+
-+	if (cpu != task_cpu(p)) {
-+		if (p->in_iowait) {
-+			delayacct_blkio_end(p);
-+			atomic_dec(&task_rq(p)->nr_iowait);
-+		}
-+
-+		wake_flags |= WF_MIGRATED;
-+		psi_ttwu_dequeue(p);
-+		set_task_cpu(p, cpu);
-+	}
-+#else
-+	cpu = task_cpu(p);
-+#endif /* CONFIG_SMP */
-+
-+	ttwu_queue(p, cpu, wake_flags);
-+unlock:
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+out:
-+	if (success)
-+		ttwu_stat(p, task_cpu(p), wake_flags);
-+	preempt_enable();
-+
-+	return success;
-+}
-+
-+/**
-+ * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
-+ * @p: Process for which the function is to be invoked, can be @current.
-+ * @func: Function to invoke.
-+ * @arg: Argument to function.
-+ *
-+ * If the specified task can be quickly locked into a definite state
-+ * (either sleeping or on a given runqueue), arrange to keep it in that
-+ * state while invoking @func(@arg).  This function can use ->on_rq and
-+ * task_curr() to work out what the state is, if required.  Given that
-+ * @func can be invoked with a runqueue lock held, it had better be quite
-+ * lightweight.
-+ *
-+ * Returns:
-+ *	@false if the task slipped out from under the locks.
-+ *	@true if the task was locked onto a runqueue or is sleeping.
-+ *		However, @func can override this by returning @false.
-+ */
-+bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
-+{
-+	struct rq_flags rf;
-+	bool ret = false;
-+	struct rq *rq;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-+	if (p->on_rq) {
-+		rq = __task_rq_lock(p, &rf);
-+		if (task_rq(p) == rq)
-+			ret = func(p, arg);
-+		__task_rq_unlock(rq, &rf);
-+	} else {
-+		switch (READ_ONCE(p->__state)) {
-+		case TASK_RUNNING:
-+		case TASK_WAKING:
-+			break;
-+		default:
-+			smp_rmb(); // See smp_rmb() comment in try_to_wake_up().
-+			if (!p->on_rq)
-+				ret = func(p, arg);
-+		}
-+	}
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
-+	return ret;
-+}
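-+
-+/*
-+ * A typical caller (RCU's stall diagnostics, for instance) passes a
-+ * lightweight @func that only samples fields of @p, since @func may run
-+ * with a runqueue lock held and IRQs disabled.
-+ */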
-+
-+/**
-+ * wake_up_process - Wake up a specific process
-+ * @p: The process to be woken up.
-+ *
-+ * Attempt to wake up the nominated process and move it to the set of runnable
-+ * processes.
-+ *
-+ * Return: 1 if the process was woken up, 0 if it was already running.
-+ *
-+ * This function executes a full memory barrier before accessing the task state.
-+ */
-+int wake_up_process(struct task_struct *p)
-+{
-+	return try_to_wake_up(p, TASK_NORMAL, 0);
-+}
-+EXPORT_SYMBOL(wake_up_process);
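-+
-+/*
-+ * Sketch of the usual pairing with the wait loop documented above
-+ * ttwu_runnable():
-+ *
-+ *   CONDITION = 1;
-+ *   wake_up_process(p);
-+ *
-+ * No barrier is needed on the waker's side here: try_to_wake_up()
-+ * issues a full memory barrier before accessing p->__state.
-+ */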
-+
-+int wake_up_state(struct task_struct *p, unsigned int state)
-+{
-+	return try_to_wake_up(p, state, 0);
-+}
-+
-+/*
-+ * Perform scheduler related setup for a newly forked process p.
-+ * p is forked by current.
-+ *
-+ * __sched_fork() is basic setup used by init_idle() too:
-+ */
-+static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	p->on_rq			= 0;
-+	p->on_cpu			= 0;
-+	p->utime			= 0;
-+	p->stime			= 0;
-+	p->sched_time			= 0;
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+	INIT_HLIST_HEAD(&p->preempt_notifiers);
-+#endif
-+
-+#ifdef CONFIG_COMPACTION
-+	p->capture_control = NULL;
-+#endif
-+#ifdef CONFIG_SMP
-+	p->wake_entry.u_flags = CSD_TYPE_TTWU;
-+#endif
-+}
-+
-+/*
-+ * fork()/clone()-time setup:
-+ */
-+int sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	__sched_fork(clone_flags, p);
-+	/*
-+	 * We mark the process as NEW here. This guarantees that
-+	 * nobody will actually run it, and a signal or other external
-+	 * event cannot wake it up and insert it on the runqueue either.
-+	 */
-+	p->__state = TASK_NEW;
-+
-+	/*
-+	 * Make sure we do not leak PI boosting priority to the child.
-+	 */
-+	p->prio = current->normal_prio;
-+
-+	/*
-+	 * Revert to default priority/policy on fork if requested.
-+	 */
-+	if (unlikely(p->sched_reset_on_fork)) {
-+		if (task_has_rt_policy(p)) {
-+			p->policy = SCHED_NORMAL;
-+			p->static_prio = NICE_TO_PRIO(0);
-+			p->rt_priority = 0;
-+		} else if (PRIO_TO_NICE(p->static_prio) < 0)
-+			p->static_prio = NICE_TO_PRIO(0);
-+
-+		p->prio = p->normal_prio = p->static_prio;
-+
-+		/*
-+		 * We don't need the reset flag anymore after the fork. It has
-+		 * fulfilled its duty:
-+		 */
-+		p->sched_reset_on_fork = 0;
-+	}
-+
-+	/*
-+	 * The child is not yet in the pid-hash so no cgroup attach races,
-+	 * and the cgroup is pinned to this child because cgroup_fork()
-+	 * runs before sched_fork().
-+	 *
-+	 * Silence PROVE_RCU.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	/*
-+	 * Share the timeslice between parent and child, thus the
-+	 * total amount of pending timeslices in the system doesn't change,
-+	 * resulting in more scheduling fairness.
-+	 */
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->curr->time_slice /= 2;
-+	p->time_slice = rq->curr->time_slice;
-+#ifdef CONFIG_SCHED_HRTICK
-+	hrtick_start(rq, rq->curr->time_slice);
-+#endif
-+
-+	if (p->time_slice < RESCHED_NS) {
-+		p->time_slice = sched_timeslice_ns;
-+		resched_curr(rq);
-+	}
-+	sched_task_fork(p, rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rseq_migrate(p);
-+	/*
-+	 * We're setting the CPU for the first time, we don't migrate,
-+	 * so use __set_task_cpu().
-+	 */
-+	__set_task_cpu(p, cpu_of(rq));
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+#ifdef CONFIG_SCHED_INFO
-+	if (unlikely(sched_info_on()))
-+		memset(&p->sched_info, 0, sizeof(p->sched_info));
-+#endif
-+	init_task_preempt_count(p);
-+
-+	return 0;
-+}
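-+
-+/*
-+ * Timeslice example: a parent with 4ms left at fork() continues with
-+ * 2ms and the child inherits the other 2ms, keeping the total pending
-+ * timeslice in the system unchanged. If the halved slice falls below
-+ * RESCHED_NS, the child instead gets a full sched_timeslice_ns and the
-+ * parent is marked for rescheduling.
-+ */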
-+
-+void sched_post_fork(struct task_struct *p) {}
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+DEFINE_STATIC_KEY_FALSE(sched_schedstats);
-+
-+static void set_schedstats(bool enabled)
-+{
-+	if (enabled)
-+		static_branch_enable(&sched_schedstats);
-+	else
-+		static_branch_disable(&sched_schedstats);
-+}
-+
-+void force_schedstat_enabled(void)
-+{
-+	if (!schedstat_enabled()) {
-+		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
-+		static_branch_enable(&sched_schedstats);
-+	}
-+}
-+
-+static int __init setup_schedstats(char *str)
-+{
-+	int ret = 0;
-+	if (!str)
-+		goto out;
-+
-+	if (!strcmp(str, "enable")) {
-+		set_schedstats(true);
-+		ret = 1;
-+	} else if (!strcmp(str, "disable")) {
-+		set_schedstats(false);
-+		ret = 1;
-+	}
-+out:
-+	if (!ret)
-+		pr_warn("Unable to parse schedstats=\n");
-+
-+	return ret;
-+}
-+__setup("schedstats=", setup_schedstats);
-+
-+#ifdef CONFIG_PROC_SYSCTL
-+int sysctl_schedstats(struct ctl_table *table, int write,
-+			 void __user *buffer, size_t *lenp, loff_t *ppos)
-+{
-+	struct ctl_table t;
-+	int err;
-+	int state = static_branch_likely(&sched_schedstats);
-+
-+	if (write && !capable(CAP_SYS_ADMIN))
-+		return -EPERM;
-+
-+	t = *table;
-+	t.data = &state;
-+	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
-+	if (err < 0)
-+		return err;
-+	if (write)
-+		set_schedstats(state);
-+	return err;
-+}
-+#endif /* CONFIG_PROC_SYSCTL */
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+/*
-+ * wake_up_new_task - wake up a newly created task for the first time.
-+ *
-+ * This function will do some initial scheduler statistics housekeeping
-+ * that must be done for every newly created context, then puts the task
-+ * on the runqueue and wakes it.
-+ */
-+void wake_up_new_task(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	rq = cpu_rq(select_task_rq(p));
-+#ifdef CONFIG_SMP
-+	rseq_migrate(p);
-+	/*
-+	 * Fork balancing, do it here and not earlier because:
-+	 * - cpus_ptr can change in the fork path
-+	 * - any previously selected CPU might disappear through hotplug
-+	 *
-+	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
-+	 * as we're not fully set-up yet.
-+	 */
-+	__set_task_cpu(p, cpu_of(rq));
-+#endif
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	activate_task(p, rq);
-+	trace_sched_wakeup_new(p);
-+	check_preempt_curr(rq);
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+
-+static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
-+
-+void preempt_notifier_inc(void)
-+{
-+	static_branch_inc(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_inc);
-+
-+void preempt_notifier_dec(void)
-+{
-+	static_branch_dec(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_dec);
-+
-+/**
-+ * preempt_notifier_register - tell me when current is being preempted & rescheduled
-+ * @notifier: notifier struct to register
-+ */
-+void preempt_notifier_register(struct preempt_notifier *notifier)
-+{
-+	if (!static_branch_unlikely(&preempt_notifier_key))
-+		WARN(1, "registering preempt_notifier while notifiers disabled\n");
-+
-+	hlist_add_head(&notifier->link, &current->preempt_notifiers);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_register);
-+
-+/**
-+ * preempt_notifier_unregister - no longer interested in preemption notifications
-+ * @notifier: notifier struct to unregister
-+ *
-+ * This is *not* safe to call from within a preemption notifier.
-+ */
-+void preempt_notifier_unregister(struct preempt_notifier *notifier)
-+{
-+	hlist_del(&notifier->link);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-+
-+static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_in(notifier, raw_smp_processor_id());
-+}
-+
-+static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_in_preempt_notifiers(curr);
-+}
-+
-+static void
-+__fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				   struct task_struct *next)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_out(notifier, next);
-+}
-+
-+static __always_inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_out_preempt_notifiers(curr, next);
-+}
-+
-+#else /* !CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+}
-+
-+static inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+}
-+
-+#endif /* CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void prepare_task(struct task_struct *next)
-+{
-+	/*
-+	 * Claim the task as running, we do this before switching to it
-+	 * such that any running task will have this set.
-+	 *
-+	 * See the ttwu() WF_ON_CPU case and its ordering comment.
-+	 */
-+	WRITE_ONCE(next->on_cpu, 1);
-+}
-+
-+static inline void finish_task(struct task_struct *prev)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * This must be the very last reference to @prev from this CPU. After
-+	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
-+	 * must ensure this doesn't happen until the switch is completely
-+	 * finished.
-+	 *
-+	 * In particular, the load of prev->state in finish_task_switch() must
-+	 * happen before this.
-+	 *
-+	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
-+	 */
-+	smp_store_release(&prev->on_cpu, 0);
-+#else
-+	prev->on_cpu = 0;
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
-+{
-+	void (*func)(struct rq *rq);
-+	struct callback_head *next;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	while (head) {
-+		func = (void (*)(struct rq *))head->func;
-+		next = head->next;
-+		head->next = NULL;
-+		head = next;
-+
-+		func(rq);
-+	}
-+}
-+
-+static void balance_push(struct rq *rq);
-+
-+struct callback_head balance_push_callback = {
-+	.next = NULL,
-+	.func = (void (*)(struct callback_head *))balance_push,
-+};
-+
-+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
-+{
-+	struct callback_head *head = rq->balance_callback;
-+
-+	if (head) {
-+		lockdep_assert_held(&rq->lock);
-+		rq->balance_callback = NULL;
-+	}
-+
-+	return head;
-+}
-+
-+static void __balance_callbacks(struct rq *rq)
-+{
-+	do_balance_callbacks(rq, splice_balance_callbacks(rq));
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
-+{
-+	unsigned long flags;
-+
-+	if (unlikely(head)) {
-+		raw_spin_lock_irqsave(&rq->lock, flags);
-+		do_balance_callbacks(rq, head);
-+		raw_spin_unlock_irqrestore(&rq->lock, flags);
-+	}
-+}
-+
-+#else
-+
-+static inline void __balance_callbacks(struct rq *rq)
-+{
-+}
-+
-+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
-+{
-+	return NULL;
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
-+{
-+}
-+
-+#endif
-+
-+static inline void
-+prepare_lock_switch(struct rq *rq, struct task_struct *next)
-+{
-+	/*
-+	 * The runqueue lock will be released by the next
-+	 * task (which is an invalid locking op but in the case
-+	 * of the scheduler it's an obvious special-case), so we
-+	 * do an early lockdep release here:
-+	 */
-+	spin_release(&rq->lock.dep_map, _THIS_IP_);
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+	/* this is a valid case when another task releases the spinlock */
-+	rq->lock.owner = next;
-+#endif
-+}
-+
-+static inline void finish_lock_switch(struct rq *rq)
-+{
-+	/*
-+	 * If we are tracking spinlock dependencies then we have to
-+	 * fix up the runqueue lock - which gets 'carried over' from
-+	 * prev into current:
-+	 */
-+	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-+	__balance_callbacks(rq);
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+/*
-+ * NOP if the arch has not defined these:
-+ */
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static inline void kmap_local_sched_out(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_out();
-+#endif
-+}
-+
-+static inline void kmap_local_sched_in(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_in();
-+#endif
-+}
-+
-+/**
-+ * prepare_task_switch - prepare to switch tasks
-+ * @rq: the runqueue preparing to switch
-+ * @next: the task we are going to switch to.
-+ *
-+ * This is called with the rq lock held and interrupts off. It must
-+ * be paired with a subsequent finish_task_switch after the context
-+ * switch.
-+ *
-+ * prepare_task_switch sets up locking and calls architecture specific
-+ * hooks.
-+ */
-+static inline void
-+prepare_task_switch(struct rq *rq, struct task_struct *prev,
-+		    struct task_struct *next)
-+{
-+	kcov_prepare_switch(prev);
-+	sched_info_switch(rq, prev, next);
-+	perf_event_task_sched_out(prev, next);
-+	rseq_preempt(prev);
-+	fire_sched_out_preempt_notifiers(prev, next);
-+	kmap_local_sched_out();
-+	prepare_task(next);
-+	prepare_arch_switch(next);
-+}
-+
-+/**
-+ * finish_task_switch - clean up after a task-switch
-+ * @rq: runqueue associated with task-switch
-+ * @prev: the thread we just switched away from.
-+ *
-+ * finish_task_switch must be called after the context switch, paired
-+ * with a prepare_task_switch call before the context switch.
-+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
-+ * and do any other architecture-specific cleanup actions.
-+ *
-+ * Note that we may have delayed dropping an mm in context_switch(). If
-+ * so, we finish that here outside of the runqueue lock.  (Doing it
-+ * with the lock held can cause deadlocks; see schedule() for
-+ * details.)
-+ *
-+ * The context switch has flipped the stack from under us and restored the
-+ * local variables which were saved when this task called schedule() in the
-+ * past. prev == current is still correct but we need to recalculate this_rq
-+ * because prev may have moved to another CPU.
-+ */
-+static struct rq *finish_task_switch(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	struct rq *rq = this_rq();
-+	struct mm_struct *mm = rq->prev_mm;
-+	long prev_state;
-+
-+	/*
-+	 * The previous task will have left us with a preempt_count of 2
-+	 * because it left us after:
-+	 *
-+	 *	schedule()
-+	 *	  preempt_disable();			// 1
-+	 *	  __schedule()
-+	 *	    raw_spin_lock_irq(&rq->lock)	// 2
-+	 *
-+	 * Also, see FORK_PREEMPT_COUNT.
-+	 */
-+	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
-+		      "corrupted preempt_count: %s/%d/0x%x\n",
-+		      current->comm, current->pid, preempt_count()))
-+		preempt_count_set(FORK_PREEMPT_COUNT);
-+
-+	rq->prev_mm = NULL;
-+
-+	/*
-+	 * A task struct has one reference for the use as "current".
-+	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
-+	 * schedule one last time. The schedule call will never return, and
-+	 * the scheduled task must drop that reference.
-+	 *
-+	 * We must observe prev->state before clearing prev->on_cpu (in
-+	 * finish_task), otherwise a concurrent wakeup can get prev
-+	 * running on another CPU and we could race with its RUNNING -> DEAD
-+	 * transition, resulting in a double drop.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	vtime_task_switch(prev);
-+	perf_event_task_sched_in(prev, current);
-+	finish_task(prev);
-+	tick_nohz_task_switch();
-+	finish_lock_switch(rq);
-+	finish_arch_post_lock_switch();
-+	kcov_finish_switch(current);
-+	/*
-+	 * kmap_local_sched_out() is invoked with rq::lock held and
-+	 * interrupts disabled. There is no requirement for that, but the
-+	 * sched out code does not have an interrupt enabled section.
-+	 * Restoring the maps on sched in does not require interrupts being
-+	 * disabled either.
-+	 */
-+	kmap_local_sched_in();
-+
-+	fire_sched_in_preempt_notifiers(current);
-+	/*
-+	 * When switching through a kernel thread, the loop in
-+	 * membarrier_{private,global}_expedited() may have observed that
-+	 * kernel thread and not issued an IPI. It is therefore possible to
-+	 * schedule between user->kernel->user threads without passing through
-+	 * switch_mm(). Membarrier requires a barrier after storing to
-+	 * rq->curr, before returning to userspace, so provide them here:
-+	 *
-+	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-+	 *   provided by mmdrop(),
-+	 * - a sync_core for SYNC_CORE.
-+	 */
-+	if (mm) {
-+		membarrier_mm_sync_core_before_usermode(mm);
-+		mmdrop(mm);
-+	}
-+	if (unlikely(prev_state == TASK_DEAD)) {
-+		/*
-+		 * Remove function-return probe instances associated with this
-+		 * task and put them back on the free list.
-+		 */
-+		kprobe_flush_task(prev);
-+
-+		/* Task is done with its stack. */
-+		put_task_stack(prev);
-+
-+		put_task_struct_rcu_user(prev);
-+	}
-+
-+	return rq;
-+}
-+
-+/**
-+ * schedule_tail - first thing a freshly forked thread must call.
-+ * @prev: the thread we just switched away from.
-+ */
-+asmlinkage __visible void schedule_tail(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	/*
-+	 * New tasks start with FORK_PREEMPT_COUNT, see there and
-+	 * finish_task_switch() for details.
-+	 *
-+	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-+	 * and the preempt_enable() will end up enabling preemption (on
-+	 * PREEMPT_COUNT kernels).
-+	 */
-+
-+	finish_task_switch(prev);
-+	preempt_enable();
-+
-+	if (current->set_child_tid)
-+		put_user(task_pid_vnr(current), current->set_child_tid);
-+
-+	calculate_sigpending();
-+}
-+
-+/*
-+ * context_switch - switch to the new MM and the new thread's register state.
-+ */
-+static __always_inline struct rq *
-+context_switch(struct rq *rq, struct task_struct *prev,
-+	       struct task_struct *next)
-+{
-+	prepare_task_switch(rq, prev, next);
-+
-+	/*
-+	 * For paravirt, this is coupled with an exit in switch_to to
-+	 * combine the page table reload and the switch backend into
-+	 * one hypercall.
-+	 */
-+	arch_start_context_switch(prev);
-+
-+	/*
-+	 * kernel -> kernel   lazy + transfer active
-+	 *   user -> kernel   lazy + mmgrab() active
-+	 *
-+	 * kernel ->   user   switch + mmdrop() active
-+	 *   user ->   user   switch
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		enter_lazy_tlb(prev->active_mm, next);
-+
-+		next->active_mm = prev->active_mm;
-+		if (prev->mm)                           // from user
-+			mmgrab(prev->active_mm);
-+		else
-+			prev->active_mm = NULL;
-+	} else {                                        // to user
-+		membarrier_switch_mm(rq, prev->active_mm, next->mm);
-+		/*
-+		 * sys_membarrier() requires an smp_mb() between setting
-+		 * rq->curr / membarrier_switch_mm() and returning to userspace.
-+		 *
-+		 * The below provides this either through switch_mm(), or in
-+		 * case 'prev->active_mm == next->mm' through
-+		 * finish_task_switch()'s mmdrop().
-+		 */
-+		switch_mm_irqs_off(prev->active_mm, next->mm, next);
-+
-+		if (!prev->mm) {                        // from kernel
-+			/* will mmdrop() in finish_task_switch(). */
-+			rq->prev_mm = prev->active_mm;
-+			prev->active_mm = NULL;
-+		}
-+	}
-+
-+	prepare_lock_switch(rq, next);
-+
-+	/* Here we just switch the register state and the stack. */
-+	switch_to(prev, next, prev);
-+	barrier();
-+
-+	return finish_task_switch(prev);
-+}
-+
-+/*
-+ * nr_running, nr_uninterruptible and nr_context_switches:
-+ *
-+ * externally visible scheduler statistics: current number of runnable
-+ * threads, total number of context switches performed since bootup.
-+ */
-+unsigned int nr_running(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_online_cpu(i)
-+		sum += cpu_rq(i)->nr_running;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Check if only the current task is running on the CPU.
-+ *
-+ * Caution: this function does not check that the caller has disabled
-+ * preemption, thus the result might have a time-of-check-to-time-of-use
-+ * race.  The caller is responsible to use it correctly, for example:
-+ *
-+ * - from a non-preemptible section (of course)
-+ *
-+ * - from a thread that is bound to a single CPU
-+ *
-+ * - in a loop with very short iterations (e.g. a polling loop)
-+ */
-+bool single_task_running(void)
-+{
-+	return raw_rq()->nr_running == 1;
-+}
-+EXPORT_SYMBOL(single_task_running);
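-+
-+/*
-+ * Example of the short polling loop mentioned above, from a thread
-+ * bound to a single CPU:
-+ *
-+ *   while (!done && single_task_running())
-+ *       cpu_relax();
-+ */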
-+
-+unsigned long long nr_context_switches(void)
-+{
-+	int i;
-+	unsigned long long sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += cpu_rq(i)->nr_switches;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Consumers of these two interfaces, like for example the cpuidle menu
-+ * governor, are using nonsensical data: they prefer shallow idle state
-+ * selection for a CPU that has IO-wait, even though the blocked task might
-+ * not even end up running on that CPU when it does become runnable.
-+ */
-+
-+unsigned int nr_iowait_cpu(int cpu)
-+{
-+	return atomic_read(&cpu_rq(cpu)->nr_iowait);
-+}
-+
-+/*
-+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
-+ *
-+ * The idea behind IO-wait accounting is to account the idle time that we could
-+ * have spent running if it were not for IO. That is, if we were to improve the
-+ * storage performance, we'd have a proportional reduction in IO-wait time.
-+ *
-+ * This all works nicely on UP, where, when a task blocks on IO, we account
-+ * idle time as IO-wait, because if the storage were faster, it could've been
-+ * running and we'd not be idle.
-+ *
-+ * This has been extended to SMP, by doing the same for each CPU. This however
-+ * is broken.
-+ *
-+ * Imagine for instance the case where two tasks block on one CPU, only the one
-+ * CPU will have IO-wait accounted, while the other has regular idle. Even
-+ * though, if the storage were faster, both could've run at the same time,
-+ * utilising both CPUs.
-+ *
-+ * This means that, when looking globally, the current IO-wait accounting on
-+ * SMP is a lower bound, by reason of under-accounting.
-+ *
-+ * Worse, since the numbers are provided per CPU, they are sometimes
-+ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
-+ * associated with any one particular CPU; it can wake on a different CPU than
-+ * the one it blocked on. This means the per-CPU IO-wait number is meaningless.
-+ *
-+ * Task CPU affinities can make all that even more 'interesting'.
-+ */
-+
-+unsigned int nr_iowait(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += nr_iowait_cpu(i);
-+
-+	return sum;
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+/*
-+ * sched_exec - execve() is a valuable balancing opportunity, because at
-+ * this point the task has the smallest effective memory and cache
-+ * footprint.
-+ */
-+void sched_exec(void)
-+{
-+	struct task_struct *p = current;
-+	unsigned long flags;
-+	int dest_cpu;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	dest_cpu = cpumask_any(p->cpus_ptr);
-+	if (dest_cpu == smp_processor_id())
-+		goto unlock;
-+
-+	if (likely(cpu_active(dest_cpu))) {
-+		struct migration_arg arg = { p, dest_cpu };
-+
-+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
-+		return;
-+	}
-+unlock:
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#endif
-+
-+DEFINE_PER_CPU(struct kernel_stat, kstat);
-+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
-+
-+EXPORT_PER_CPU_SYMBOL(kstat);
-+EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
-+
-+static inline void update_curr(struct rq *rq, struct task_struct *p)
-+{
-+	s64 ns = rq->clock_task - p->last_ran;
-+
-+	p->sched_time += ns;
-+	cgroup_account_cputime(p, ns);
-+	account_group_exec_runtime(p, ns);
-+
-+	p->time_slice -= ns;
-+	p->last_ran = rq->clock_task;
-+}
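-+
-+/*
-+ * Example: a task that last ran 2ms ago gets 2ms added to its accounted
-+ * sched_time (and cgroup/group runtime) and 2ms taken off its remaining
-+ * time_slice in a single update_curr() call.
-+ */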
-+
-+/*
-+ * Return accounted runtime for the task.
-+ * Return separately the current's pending runtime that has not been
-+ * accounted yet.
-+ */
-+unsigned long long task_sched_runtime(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	u64 ns;
-+
-+#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
-+	/*
-+	 * 64-bit doesn't need locks to atomically read a 64-bit value.
-+	 * So we have an optimization chance when the task's delta_exec is 0.
-+	 * Reading ->on_cpu is racy, but this is ok.
-+	 *
-+	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
-+	 * If we race with it entering CPU, unaccounted time is 0. This is
-+	 * indistinguishable from the read occurring a few cycles earlier.
-+	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
-+	 * been accounted, so we're correct here as well.
-+	 */
-+	if (!p->on_cpu || !task_on_rq_queued(p))
-+		return tsk_seruntime(p);
-+#endif
-+
-+	rq = task_access_lock_irqsave(p, &lock, &flags);
-+	/*
-+	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
-+	 * project cycles that may never be accounted to this
-+	 * thread, breaking clock_gettime().
-+	 */
-+	if (p == rq->curr && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		update_curr(rq, p);
-+	}
-+	ns = tsk_seruntime(p);
-+	task_access_unlock_irqrestore(p, lock, &flags);
-+
-+	return ns;
-+}
-+
-+/* This manages tasks that have run out of timeslice during a scheduler_tick */
-+static inline void scheduler_task_tick(struct rq *rq)
-+{
-+	struct task_struct *p = rq->curr;
-+
-+	if (is_idle_task(p))
-+		return;
-+
-+	update_curr(rq, p);
-+	cpufreq_update_util(rq, 0);
-+
-+	/*
-+	 * Tasks that have less than RESCHED_NS of time slice left will be
-+	 * rescheduled.
-+	 */
-+	if (p->time_slice >= RESCHED_NS)
-+		return;
-+	set_tsk_need_resched(p);
-+	set_preempt_need_resched();
-+}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+static u64 cpu_resched_latency(struct rq *rq)
-+{
-+	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
-+	u64 resched_latency, now = rq_clock(rq);
-+	static bool warned_once;
-+
-+	if (sysctl_resched_latency_warn_once && warned_once)
-+		return 0;
-+
-+	if (!need_resched() || !latency_warn_ms)
-+		return 0;
-+
-+	if (system_state == SYSTEM_BOOTING)
-+		return 0;
-+
-+	if (!rq->last_seen_need_resched_ns) {
-+		rq->last_seen_need_resched_ns = now;
-+		rq->ticks_without_resched = 0;
-+		return 0;
-+	}
-+
-+	rq->ticks_without_resched++;
-+	resched_latency = now - rq->last_seen_need_resched_ns;
-+	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
-+		return 0;
-+
-+	warned_once = true;
-+
-+	return resched_latency;
-+}
-+
-+static int __init setup_resched_latency_warn_ms(char *str)
-+{
-+	long val;
-+
-+	if ((kstrtol(str, 0, &val))) {
-+		pr_warn("Unable to set resched_latency_warn_ms\n");
-+		return 1;
-+	}
-+
-+	sysctl_resched_latency_warn_ms = val;
-+	return 1;
-+}
-+__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
-+#else
-+static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+/*
-+ * This function gets called by the timer code, with HZ frequency.
-+ * We call it with interrupts disabled.
-+ */
-+void scheduler_tick(void)
-+{
-+	int cpu __maybe_unused = smp_processor_id();
-+	struct rq *rq = cpu_rq(cpu);
-+	u64 resched_latency;
-+
-+	arch_scale_freq_tick();
-+	sched_clock_tick();
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	scheduler_task_tick(rq);
-+	if (sched_feat(LATENCY_WARN))
-+		resched_latency = cpu_resched_latency(rq);
-+	calc_global_load_tick(rq);
-+
-+	rq->last_tick = rq->clock;
-+	raw_spin_unlock(&rq->lock);
-+
-+	if (sched_feat(LATENCY_WARN) && resched_latency)
-+		resched_latency_warn(cpu, resched_latency);
-+
-+	perf_event_task_tick();
-+}
-+
-+#ifdef CONFIG_SCHED_SMT
-+static inline int active_load_balance_cpu_stop(void *data)
-+{
-+	struct rq *rq = this_rq();
-+	struct task_struct *p = data;
-+	cpumask_t tmp;
-+	unsigned long flags;
-+
-+	local_irq_save(flags);
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->active_balance = 0;
-+	/* _something_ may have changed the task, double check again */
-+	if (task_on_rq_queued(p) && task_rq(p) == rq &&
-+	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
-+	    !is_migration_disabled(p)) {
-+		int cpu = cpu_of(rq);
-+		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
-+		rq = move_queued_task(rq, p, dcpu);
-+	}
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock(&p->pi_lock);
-+
-+	local_irq_restore(flags);
-+
-+	return 0;
-+}
-+
-+/* sg_balance_trigger - trigger sibling group balance for @cpu */
-+static inline int sg_balance_trigger(const int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	struct task_struct *curr;
-+	int res;
-+
-+	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-+		return 0;
-+	curr = rq->curr;
-+	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
-+	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
-+	      !is_migration_disabled(curr) && (!rq->active_balance);
-+
-+	if (res)
-+		rq->active_balance = 1;
-+
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	if (res)
-+		stop_one_cpu_nowait(cpu, active_load_balance_cpu_stop,
-+				    curr, &rq->active_balance_work);
-+	return res;
-+}
-+
-+/*
-+ * sg_balance_check - sibling group balance check for run queue @rq
-+ */
-+static inline void sg_balance_check(struct rq *rq)
-+{
-+	cpumask_t chk;
-+	int cpu = cpu_of(rq);
-+
-+	/* exit when cpu is offline */
-+	if (unlikely(!rq->online))
-+		return;
-+
-+	/*
-+	 * Only a cpu in the sibling idle group will do the checking, and
-+	 * then find potential cpus to which the current running task can migrate
-+	 */
-+	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
-+	    cpumask_andnot(&chk, cpu_online_mask, sched_rq_watermark) &&
-+	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
-+		int i;
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			if (cpumask_subset(cpu_smt_mask(i), &chk) &&
-+			    sg_balance_trigger(i))
-+				return;
-+		}
-+	}
-+}
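-+
-+/*
-+ * Example: on two 2-thread cores, if each sibling of core0 runs exactly
-+ * one task while core1 is fully idle, a core1 cpu in sched_sg_idle_mask
-+ * sees core0's whole SMT mask busy and triggers sg_balance_trigger() to
-+ * move one of those running tasks over to the idle core.
-+ */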
-+#endif /* CONFIG_SCHED_SMT */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+
-+struct tick_work {
-+	int			cpu;
-+	atomic_t		state;
-+	struct delayed_work	work;
-+};
-+/* Values for ->state, see diagram below. */
-+#define TICK_SCHED_REMOTE_OFFLINE	0
-+#define TICK_SCHED_REMOTE_OFFLINING	1
-+#define TICK_SCHED_REMOTE_RUNNING	2
-+
-+/*
-+ * State diagram for ->state:
-+ *
-+ *
-+ *          TICK_SCHED_REMOTE_OFFLINE
-+ *                    |   ^
-+ *                    |   |
-+ *                    |   | sched_tick_remote()
-+ *                    |   |
-+ *                    |   |
-+ *                    +--TICK_SCHED_REMOTE_OFFLINING
-+ *                    |   ^
-+ *                    |   |
-+ * sched_tick_start() |   | sched_tick_stop()
-+ *                    |   |
-+ *                    V   |
-+ *          TICK_SCHED_REMOTE_RUNNING
-+ *
-+ *
-+ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
-+ * and sched_tick_start() are happy to leave the state in RUNNING.
-+ */
-+
-+static struct tick_work __percpu *tick_work_cpu;
-+
-+static void sched_tick_remote(struct work_struct *work)
-+{
-+	struct delayed_work *dwork = to_delayed_work(work);
-+	struct tick_work *twork = container_of(dwork, struct tick_work, work);
-+	int cpu = twork->cpu;
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *curr;
-+	unsigned long flags;
-+	u64 delta;
-+	int os;
-+
-+	/*
-+	 * Handle the tick only if it appears the remote CPU is running in full
-+	 * dynticks mode. The check is racy by nature, but missing a tick or
-+	 * having one too many is no big deal because the scheduler tick updates
-+	 * statistics and checks timeslices in a time-independent way, regardless
-+	 * of when exactly it is running.
-+	 */
-+	if (!tick_nohz_tick_stopped_cpu(cpu))
-+		goto out_requeue;
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	curr = rq->curr;
-+	if (cpu_is_offline(cpu))
-+		goto out_unlock;
-+
-+	update_rq_clock(rq);
-+	if (!is_idle_task(curr)) {
-+		/*
-+		 * Make sure the next tick runs within a reasonable
-+		 * amount of time.
-+		 */
-+		delta = rq_clock_task(rq) - curr->last_ran;
-+		WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
-+	}
-+	scheduler_task_tick(rq);
-+
-+	calc_load_nohz_remote(rq);
-+out_unlock:
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+out_requeue:
-+	/*
-+	 * Run the remote tick once per second (1Hz). This arbitrary
-+	 * frequency is low enough to avoid overload but high enough
-+	 * to keep scheduler internal stats reasonably up to date.  But
-+	 * first update state to reflect hotplug activity if required.
-+	 */
-+	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
-+	if (os == TICK_SCHED_REMOTE_RUNNING)
-+		queue_delayed_work(system_unbound_wq, dwork, HZ);
-+}
-+
-+static void sched_tick_start(int cpu)
-+{
-+	int os;
-+	struct tick_work *twork;
-+
-+	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
-+	if (os == TICK_SCHED_REMOTE_OFFLINE) {
-+		twork->cpu = cpu;
-+		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-+		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
-+	}
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+static void sched_tick_stop(int cpu)
-+{
-+	struct tick_work *twork;
-+
-+	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	cancel_delayed_work_sync(&twork->work);
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+int __init sched_tick_offload_init(void)
-+{
-+	tick_work_cpu = alloc_percpu(struct tick_work);
-+	BUG_ON(!tick_work_cpu);
-+	return 0;
-+}
-+
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_tick_start(int cpu) { }
-+static inline void sched_tick_stop(int cpu) { }
-+#endif
-+
-+#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
-+				defined(CONFIG_PREEMPT_TRACER))
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just disabled preemption. Start timing the latency.
-+ */
-+static inline void preempt_latency_start(int val)
-+{
-+	if (preempt_count() == val) {
-+		unsigned long ip = get_lock_parent_ip();
-+#ifdef CONFIG_DEBUG_PREEMPT
-+		current->preempt_disable_ip = ip;
-+#endif
-+		trace_preempt_off(CALLER_ADDR0, ip);
-+	}
-+}
-+
-+void preempt_count_add(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
-+		return;
-+#endif
-+	__preempt_count_add(val);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Spinlock count overflowing soon?
-+	 */
-+	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
-+				PREEMPT_MASK - 10);
-+#endif
-+	preempt_latency_start(val);
-+}
-+EXPORT_SYMBOL(preempt_count_add);
-+NOKPROBE_SYMBOL(preempt_count_add);
-+
-+/*
-+ * If the value passed in equals the current preempt count
-+ * then we just enabled preemption. Stop timing the latency.
-+ */
-+static inline void preempt_latency_stop(int val)
-+{
-+	if (preempt_count() == val)
-+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-+}
-+
-+void preempt_count_sub(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
-+		return;
-+	/*
-+	 * Is the spinlock portion underflowing?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
-+			!(preempt_count() & PREEMPT_MASK)))
-+		return;
-+#endif
-+
-+	preempt_latency_stop(val);
-+	__preempt_count_sub(val);
-+}
-+EXPORT_SYMBOL(preempt_count_sub);
-+NOKPROBE_SYMBOL(preempt_count_sub);
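-+
-+/*
-+ * Example: under these configs a preempt_disable()/preempt_enable() pair
-+ * ends up in preempt_count_add(1)/preempt_count_sub(1); the outermost
-+ * add starts the latency timing and the matching sub stops it.
-+ */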
-+
-+#else
-+static inline void preempt_latency_start(int val) { }
-+static inline void preempt_latency_stop(int val) { }
-+#endif
-+
-+static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	return p->preempt_disable_ip;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+/*
-+ * Print scheduling while atomic bug:
-+ */
-+static noinline void __schedule_bug(struct task_struct *prev)
-+{
-+	/* Save this before calling printk(), since that will clobber it */
-+	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	if (oops_in_progress)
-+		return;
-+
-+	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
-+		prev->comm, prev->pid, preempt_count());
-+
-+	debug_show_held_locks(prev);
-+	print_modules();
-+	if (irqs_disabled())
-+		print_irqtrace_events(prev);
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)
-+	    && in_atomic_preempt_off()) {
-+		pr_err("Preemption disabled at:");
-+		print_ip_sym(KERN_ERR, preempt_disable_ip);
-+	}
-+	if (panic_on_warn)
-+		panic("scheduling while atomic\n");
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+
-+/*
-+ * Various schedule()-time debugging checks and statistics:
-+ */
-+static inline void schedule_debug(struct task_struct *prev, bool preempt)
-+{
-+#ifdef CONFIG_SCHED_STACK_END_CHECK
-+	if (task_stack_end_corrupted(prev))
-+		panic("corrupted stack end detected inside scheduler\n");
-+
-+	if (task_scs_end_corrupted(prev))
-+		panic("corrupted shadow stack detected inside scheduler\n");
-+#endif
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
-+		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
-+			prev->comm, prev->pid, prev->non_block_count);
-+		dump_stack();
-+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+	}
-+#endif
-+
-+	if (unlikely(in_atomic_preempt_off())) {
-+		__schedule_bug(prev);
-+		preempt_count_set(PREEMPT_DISABLED);
-+	}
-+	rcu_sleep_check();
-+	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
-+
-+	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
-+
-+	schedstat_inc(this_rq()->sched_count);
-+}
-+
-+/*
-+ * Compile time debug macro
-+ * #define ALT_SCHED_DEBUG
-+ */
-+
-+#ifdef ALT_SCHED_DEBUG
-+void alt_sched_debug(void)
-+{
-+	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
-+	       sched_rq_pending_mask.bits[0],
-+	       sched_rq_watermark[0].bits[0],
-+	       sched_sg_idle_mask.bits[0]);
-+}
-+#else
-+inline void alt_sched_debug(void) {}
-+#endif
-+
-+#ifdef	CONFIG_SMP
-+
-+#define SCHED_RQ_NR_MIGRATION (32U)
-+/*
-+ * Migrate pending tasks in @rq to @dest_cpu
-+ * Will try to migrate at most the minimum of half of @rq's nr_running
-+ * tasks and SCHED_RQ_NR_MIGRATION to @dest_cpu
-+ */
-+static inline int
-+migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
-+{
-+	struct task_struct *p, *skip = rq->curr;
-+	int nr_migrated = 0;
-+	int nr_tries = min(rq->nr_running / 2, SCHED_RQ_NR_MIGRATION);
-+
-+	while (skip != rq->idle && nr_tries &&
-+	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
-+		skip = sched_rq_next_task(p, rq);
-+		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
-+			__SCHED_DEQUEUE_TASK(p, rq, 0, );
-+			set_task_cpu(p, dest_cpu);
-+			sched_task_sanity_check(p, dest_rq);
-+			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
-+			nr_migrated++;
-+		}
-+		nr_tries--;
-+	}
-+
-+	return nr_migrated;
-+}
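-+
-+/*
-+ * Example: pulling from a source rq with 10 runnable tasks makes at most
-+ * min(10 / 2, SCHED_RQ_NR_MIGRATION) = 5 attempts; each task actually
-+ * moved must allow @dest_cpu in its cpus_ptr.
-+ */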
-+
-+static inline int take_other_rq_tasks(struct rq *rq, int cpu)
-+{
-+	struct cpumask *topo_mask, *end_mask;
-+
-+	if (unlikely(!rq->online))
-+		return 0;
-+
-+	if (cpumask_empty(&sched_rq_pending_mask))
-+		return 0;
-+
-+	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
-+	do {
-+		int i;
-+		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
-+			int nr_migrated;
-+			struct rq *src_rq;
-+
-+			src_rq = cpu_rq(i);
-+			if (!do_raw_spin_trylock(&src_rq->lock))
-+				continue;
-+			spin_acquire(&src_rq->lock.dep_map,
-+				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
-+
-+			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
-+				src_rq->nr_running -= nr_migrated;
-+				if (src_rq->nr_running < 2)
-+					cpumask_clear_cpu(i, &sched_rq_pending_mask);
-+
-+				rq->nr_running += nr_migrated;
-+				if (rq->nr_running > 1)
-+					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
-+
-+				update_sched_rq_watermark(rq);
-+				cpufreq_update_util(rq, 0);
-+
-+				spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+				do_raw_spin_unlock(&src_rq->lock);
-+
-+				return 1;
-+			}
-+
-+			spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+			do_raw_spin_unlock(&src_rq->lock);
-+		}
-+	} while (++topo_mask < end_mask);
-+
-+	return 0;
-+}
-+#endif
-+
-+/*
-+ * Timeslices below RESCHED_NS are considered as good as expired as there's no
-+ * point rescheduling when there's so little time left.
-+ */
-+static inline void check_curr(struct task_struct *p, struct rq *rq)
-+{
-+	if (unlikely(rq->idle == p))
-+		return;
-+
-+	update_curr(rq, p);
-+
-+	if (p->time_slice < RESCHED_NS)
-+		time_slice_expired(p, rq);
-+}
-+
-+static inline struct task_struct *
-+choose_next_task(struct rq *rq, int cpu, struct task_struct *prev)
-+{
-+	struct task_struct *next;
-+
-+	if (unlikely(rq->skip)) {
-+		next = rq_runnable_task(rq);
-+		if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+			if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+				rq->skip = NULL;
-+				schedstat_inc(rq->sched_goidle);
-+				return next;
-+#ifdef	CONFIG_SMP
-+			}
-+			next = rq_runnable_task(rq);
-+#endif
-+		}
-+		rq->skip = NULL;
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+		hrtick_start(rq, next->time_slice);
-+#endif
-+		return next;
-+	}
-+
-+	next = sched_rq_first_task(rq);
-+	if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+		if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+			schedstat_inc(rq->sched_goidle);
-+			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
-+			return next;
-+#ifdef	CONFIG_SMP
-+		}
-+		next = sched_rq_first_task(rq);
-+#endif
-+	}
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+	hrtick_start(rq, next->time_slice);
-+#endif
-+	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu,
-+	 * next);*/
-+	return next;
-+}
-+
-+/*
-+ * schedule() is the main scheduler function.
-+ *
-+ * The main means of driving the scheduler and thus entering this function are:
-+ *
-+ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
-+ *
-+ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
-+ *      paths. For example, see arch/x86/entry_64.S.
-+ *
-+ *      To drive preemption between tasks, the scheduler sets the flag in timer
-+ *      interrupt handler scheduler_tick().
-+ *
-+ *   3. Wakeups don't really cause entry into schedule(). They add a
-+ *      task to the run-queue and that's it.
-+ *
-+ *      Now, if the new task added to the run-queue preempts the current
-+ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
-+ *      called on the nearest possible occasion:
-+ *
-+ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
-+ *
-+ *         - in syscall or exception context, at the next outmost
-+ *           preempt_enable(). (this might be as soon as the wake_up()'s
-+ *           spin_unlock()!)
-+ *
-+ *         - in IRQ context, return from interrupt-handler to
-+ *           preemptible context
-+ *
-+ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
-+ *         then at the next:
-+ *
-+ *          - cond_resched() call
-+ *          - explicit schedule() call
-+ *          - return from syscall or exception to user-space
-+ *          - return from interrupt-handler to user-space
-+ *
-+ * WARNING: must be called with preemption disabled!
-+ */
-+static void __sched notrace __schedule(bool preempt)
-+{
-+	struct task_struct *prev, *next;
-+	unsigned long *switch_count;
-+	unsigned long prev_state;
-+	struct rq *rq;
-+	int cpu;
-+
-+	cpu = smp_processor_id();
-+	rq = cpu_rq(cpu);
-+	prev = rq->curr;
-+
-+	schedule_debug(prev, preempt);
-+
-+	/* bypassing the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
-+	hrtick_clear(rq);
-+
-+	local_irq_disable();
-+	rcu_note_context_switch(preempt);
-+
-+	/*
-+	 * Make sure that signal_pending_state()->signal_pending() below
-+	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-+	 * done by the caller to avoid the race with signal_wake_up():
-+	 *
-+	 * __set_current_state(@state)		signal_wake_up()
-+	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
-+	 *					  wake_up_state(p, state)
-+	 *   LOCK rq->lock			    LOCK p->pi_state
-+	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
-+	 *     if (signal_pending_state())	    if (p->state & @state)
-+	 *
-+	 * Also, the membarrier system call requires a full memory barrier
-+	 * after coming from user-space, before storing to rq->curr.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+	smp_mb__after_spinlock();
-+
-+	update_rq_clock(rq);
-+
-+	switch_count = &prev->nivcsw;
-+	/*
-+	 * We must load prev->state once (task_struct::state is volatile), such
-+	 * that:
-+	 *
-+	 *  - we form a control dependency vs deactivate_task() below.
-+	 *  - ptrace_{,un}freeze_traced() can change ->state underneath us.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	if (!preempt && prev_state) {
-+		if (signal_pending_state(prev_state, prev)) {
-+			WRITE_ONCE(prev->__state, TASK_RUNNING);
-+		} else {
-+			prev->sched_contributes_to_load =
-+				(prev_state & TASK_UNINTERRUPTIBLE) &&
-+				!(prev_state & TASK_NOLOAD) &&
-+				!(prev->flags & PF_FROZEN);
-+
-+			if (prev->sched_contributes_to_load)
-+				rq->nr_uninterruptible++;
-+
-+			/*
-+			 * __schedule()			ttwu()
-+			 *   prev_state = prev->state;    if (p->on_rq && ...)
-+			 *   if (prev_state)		    goto out;
-+			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-+			 *				  p->state = TASK_WAKING
-+			 *
-+			 * Where __schedule() and ttwu() have matching control dependencies.
-+			 *
-+			 * After this, schedule() must not care about p->state any more.
-+			 */
-+			sched_task_deactivate(prev, rq);
-+			deactivate_task(prev, rq);
-+
-+			if (prev->in_iowait) {
-+				atomic_inc(&rq->nr_iowait);
-+				delayacct_blkio_start();
-+			}
-+		}
-+		switch_count = &prev->nvcsw;
-+	}
-+
-+	check_curr(prev, rq);
-+
-+	next = choose_next_task(rq, cpu, prev);
-+	clear_tsk_need_resched(prev);
-+	clear_preempt_need_resched();
-+#ifdef CONFIG_SCHED_DEBUG
-+	rq->last_seen_need_resched_ns = 0;
-+#endif
-+
-+	if (likely(prev != next)) {
-+		next->last_ran = rq->clock_task;
-+		rq->last_ts_switch = rq->clock;
-+
-+		rq->nr_switches++;
-+		/*
-+		 * RCU users of rcu_dereference(rq->curr) may not see
-+		 * changes to task_struct made by pick_next_task().
-+		 */
-+		RCU_INIT_POINTER(rq->curr, next);
-+		/*
-+		 * The membarrier system call requires each architecture
-+		 * to have a full memory barrier after updating
-+		 * rq->curr, before returning to user-space.
-+		 *
-+		 * Here are the schemes providing that barrier on the
-+		 * various architectures:
-+		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
-+		 *   switch_mm() relies on membarrier_arch_switch_mm() on PowerPC.
-+		 * - finish_lock_switch() for weakly-ordered
-+		 *   architectures where spin_unlock is a full barrier,
-+		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
-+		 *   is a RELEASE barrier),
-+		 */
-+		++*switch_count;
-+
-+		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
-+
-+		trace_sched_switch(preempt, prev, next);
-+
-+		/* Also unlocks the rq: */
-+		rq = context_switch(rq, prev, next);
-+	} else {
-+		__balance_callbacks(rq);
-+		raw_spin_unlock_irq(&rq->lock);
-+	}
-+
-+#ifdef CONFIG_SCHED_SMT
-+	sg_balance_check(rq);
-+#endif
-+}
-+
-+void __noreturn do_task_dead(void)
-+{
-+	/* Causes final put_task_struct in finish_task_switch(): */
-+	set_special_state(TASK_DEAD);
-+
-+	/* Tell freezer to ignore us: */
-+	current->flags |= PF_NOFREEZE;
-+
-+	__schedule(false);
-+	BUG();
-+
-+	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
-+	for (;;)
-+		cpu_relax();
-+}
-+
-+static inline void sched_submit_work(struct task_struct *tsk)
-+{
-+	unsigned int task_flags;
-+
-+	if (task_is_running(tsk))
-+		return;
-+
-+	task_flags = tsk->flags;
-+	/*
-+	 * If a worker went to sleep, notify and ask workqueue whether
-+	 * it wants to wake up a task to maintain concurrency.
-+	 * As this function is called inside the schedule() context,
-+	 * we disable preemption to avoid it calling schedule() again
-+	 * in the possible wakeup of a kworker and because wq_worker_sleeping()
-+	 * requires it.
-+	 */
-+	if (task_flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
-+		preempt_disable();
-+		if (task_flags & PF_WQ_WORKER)
-+			wq_worker_sleeping(tsk);
-+		else
-+			io_wq_worker_sleeping(tsk);
-+		preempt_enable_no_resched();
-+	}
-+
-+	if (tsk_is_pi_blocked(tsk))
-+		return;
-+
-+	/*
-+	 * If we are going to sleep and we have plugged IO queued,
-+	 * make sure to submit it to avoid deadlocks.
-+	 */
-+	if (blk_needs_flush_plug(tsk))
-+		blk_schedule_flush_plug(tsk);
-+}
-+
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
-+		if (tsk->flags & PF_WQ_WORKER)
-+			wq_worker_running(tsk);
-+		else
-+			io_wq_worker_running(tsk);
-+	}
-+}
-+
-+asmlinkage __visible void __sched schedule(void)
-+{
-+	struct task_struct *tsk = current;
-+
-+	sched_submit_work(tsk);
-+	do {
-+		preempt_disable();
-+		__schedule(false);
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+	sched_update_worker(tsk);
-+}
-+EXPORT_SYMBOL(schedule);
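-+
-+/*
-+ * Illustrative (not part of this patch): the classic sleep/wake pattern
-+ * that depends on the signal_pending_state() ordering documented in
-+ * __schedule() above; 'condition' stands in for any wake condition:
-+ *
-+ *	set_current_state(TASK_INTERRUPTIBLE);
-+ *	if (!condition)
-+ *		schedule();
-+ *	__set_current_state(TASK_RUNNING);
-+ */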
-+
-+/*
-+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
-+ * state (have scheduled out non-voluntarily) by making sure that all
-+ * tasks have either left the run queue or have gone into user space.
-+ * As idle tasks do not do either, they must not ever be preempted
-+ * (schedule out non-voluntarily).
-+ *
-+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
-+ * never enables preemption because it does not call sched_submit_work().
-+ */
-+void __sched schedule_idle(void)
-+{
-+	/*
-+	 * As this skips calling sched_submit_work(), which the idle task does
-+	 * regardless because that function is a nop when the task is in a
-+	 * TASK_RUNNING state, make sure this isn't used someplace that the
-+	 * current task can be in any other state. Note, idle is always in the
-+	 * TASK_RUNNING state.
-+	 */
-+	WARN_ON_ONCE(current->__state);
-+	do {
-+		__schedule(false);
-+	} while (need_resched());
-+}
-+
-+#if defined(CONFIG_CONTEXT_TRACKING) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK)
-+asmlinkage __visible void __sched schedule_user(void)
-+{
-+	/*
-+	 * If we come here after a random call to set_need_resched(),
-+	 * or we have been woken up remotely but the IPI has not yet arrived,
-+	 * we haven't yet exited the RCU idle mode. Do it here manually until
-+	 * we find a better solution.
-+	 *
-+	 * NB: There are buggy callers of this function.  Ideally we
-+	 * should warn if prev_state != CONTEXT_USER, but that will trigger
-+	 * too frequently to make sense yet.
-+	 */
-+	enum ctx_state prev_state = exception_enter();
-+	schedule();
-+	exception_exit(prev_state);
-+}
-+#endif
-+
-+/**
-+ * schedule_preempt_disabled - called with preemption disabled
-+ *
-+ * Returns with preemption disabled. Note: preempt_count must be 1
-+ */
-+void __sched schedule_preempt_disabled(void)
-+{
-+	sched_preempt_enable_no_resched();
-+	schedule();
-+	preempt_disable();
-+}
-+
-+static void __sched notrace preempt_schedule_common(void)
-+{
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		__schedule(true);
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+
-+		/*
-+		 * Check again in case we missed a preemption opportunity
-+		 * between schedule and now.
-+		 */
-+	} while (need_resched());
-+}
-+
-+#ifdef CONFIG_PREEMPTION
-+/*
-+ * This is the entry point to schedule() from in-kernel preemption
-+ * off of preempt_enable.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule(void)
-+{
-+	/*
-+	 * If there is a non-zero preempt_count or interrupts are disabled,
-+	 * we do not want to preempt the current task. Just return..
-+	 */
-+	if (likely(!preemptible()))
-+		return;
-+
-+	preempt_schedule_common();
-+}
-+NOKPROBE_SYMBOL(preempt_schedule);
-+EXPORT_SYMBOL(preempt_schedule);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
-+#endif
-+
-+
-+/**
-+ * preempt_schedule_notrace - preempt_schedule called by tracing
-+ *
-+ * The tracing infrastructure uses preempt_enable_notrace to prevent
-+ * recursion and tracing preempt enabling caused by the tracing
-+ * infrastructure itself. But as tracing can happen in areas coming
-+ * from userspace or just about to enter userspace, a preempt enable
-+ * can occur before user_exit() is called. This will cause the scheduler
-+ * to be called when the system is still in usermode.
-+ *
-+ * To prevent this, the preempt_enable_notrace will use this function
-+ * instead of preempt_schedule() to exit user context if needed before
-+ * calling the scheduler.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
-+{
-+	enum ctx_state prev_ctx;
-+
-+	if (likely(!preemptible()))
-+		return;
-+
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		/*
-+		 * Needs preempt disabled in case user_exit() is traced
-+		 * and the tracer calls preempt_enable_notrace() causing
-+		 * an infinite recursion.
-+		 */
-+		prev_ctx = exception_enter();
-+		__schedule(true);
-+		exception_exit(prev_ctx);
-+
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+	} while (need_resched());
-+}
-+EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
-+#endif
-+
-+#endif /* CONFIG_PREEMPTION */
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+
-+#include <linux/entry-common.h>
-+
-+/*
-+ * SC:cond_resched
-+ * SC:might_resched
-+ * SC:preempt_schedule
-+ * SC:preempt_schedule_notrace
-+ * SC:irqentry_exit_cond_resched
-+ *
-+ *
-+ * NONE:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * VOLUNTARY:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- __cond_resched
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * FULL:
-+ *   cond_resched               <- RET0
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- preempt_schedule
-+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
-+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
-+ */
-+
-+enum {
-+	preempt_dynamic_none = 0,
-+	preempt_dynamic_voluntary,
-+	preempt_dynamic_full,
-+};
-+
-+int preempt_dynamic_mode = preempt_dynamic_full;
-+
-+int sched_dynamic_mode(const char *str)
-+{
-+	if (!strcmp(str, "none"))
-+		return preempt_dynamic_none;
-+
-+	if (!strcmp(str, "voluntary"))
-+		return preempt_dynamic_voluntary;
-+
-+	if (!strcmp(str, "full"))
-+		return preempt_dynamic_full;
-+
-+	return -EINVAL;
-+}
-+
-+void sched_dynamic_update(int mode)
-+{
-+	/*
-+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-+	 * the ZERO state, which is invalid.
-+	 */
-+	static_call_update(cond_resched, __cond_resched);
-+	static_call_update(might_resched, __cond_resched);
-+	static_call_update(preempt_schedule, __preempt_schedule_func);
-+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-+
-+	switch (mode) {
-+	case preempt_dynamic_none:
-+		static_call_update(cond_resched, __cond_resched);
-+		static_call_update(might_resched, (void *)&__static_call_return0);
-+		static_call_update(preempt_schedule, NULL);
-+		static_call_update(preempt_schedule_notrace, NULL);
-+		static_call_update(irqentry_exit_cond_resched, NULL);
-+		pr_info("Dynamic Preempt: none\n");
-+		break;
-+
-+	case preempt_dynamic_voluntary:
-+		static_call_update(cond_resched, __cond_resched);
-+		static_call_update(might_resched, __cond_resched);
-+		static_call_update(preempt_schedule, NULL);
-+		static_call_update(preempt_schedule_notrace, NULL);
-+		static_call_update(irqentry_exit_cond_resched, NULL);
-+		pr_info("Dynamic Preempt: voluntary\n");
-+		break;
-+
-+	case preempt_dynamic_full:
-+		static_call_update(cond_resched, (void *)&__static_call_return0);
-+		static_call_update(might_resched, (void *)&__static_call_return0);
-+		static_call_update(preempt_schedule, __preempt_schedule_func);
-+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-+		pr_info("Dynamic Preempt: full\n");
-+		break;
-+	}
-+
-+	preempt_dynamic_mode = mode;
-+}
-+
-+static int __init setup_preempt_mode(char *str)
-+{
-+	int mode = sched_dynamic_mode(str);
-+	if (mode < 0) {
-+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-+		return 1;
-+	}
-+
-+	sched_dynamic_update(mode);
-+	return 0;
-+}
-+__setup("preempt=", setup_preempt_mode);
-+
-+#endif /* CONFIG_PREEMPT_DYNAMIC */
-+
-+/*
-+ * This is the entry point to schedule() from kernel preemption
-+ * off of irq context.
-+ * Note, that this is called and return with irqs disabled. This will
-+ * protect us against recursive calling from irq.
-+ */
-+asmlinkage __visible void __sched preempt_schedule_irq(void)
-+{
-+	enum ctx_state prev_state;
-+
-+	/* Catch callers which need to be fixed */
-+	BUG_ON(preempt_count() || !irqs_disabled());
-+
-+	prev_state = exception_enter();
-+
-+	do {
-+		preempt_disable();
-+		local_irq_enable();
-+		__schedule(true);
-+		local_irq_disable();
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+
-+	exception_exit(prev_state);
-+}
-+
-+int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
-+			  void *key)
-+{
-+	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
-+	return try_to_wake_up(curr->private, mode, wake_flags);
-+}
-+EXPORT_SYMBOL(default_wake_function);
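-+
-+/*
-+ * Illustrative ('head' is a hypothetical wait queue): default_wake_function()
-+ * is the callback that DECLARE_WAITQUEUE()/init_waitqueue_entry() install, so
-+ * a plain wait-queue entry reaches try_to_wake_up() via this function when
-+ * its queue is woken:
-+ *
-+ *	DECLARE_WAITQUEUE(wait, current);
-+ *
-+ *	add_wait_queue(&head, &wait);
-+ *	...
-+ *	remove_wait_queue(&head, &wait);
-+ */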
-+
-+static inline void check_task_changed(struct task_struct *p, struct rq *rq)
-+{
-+	/* Trigger resched if task sched_prio has been modified. */
-+	if (task_on_rq_queued(p) && task_sched_prio_idx(p, rq) != p->sq_idx) {
-+		requeue_task(p, rq);
-+		check_preempt_curr(rq);
-+	}
-+}
-+
-+static void __setscheduler_prio(struct task_struct *p, int prio)
-+{
-+	p->prio = prio;
-+}
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
-+{
-+	if (pi_task)
-+		prio = min(prio, pi_task->prio);
-+
-+	return prio;
-+}
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	struct task_struct *pi_task = rt_mutex_get_top_task(p);
-+
-+	return __rt_effective_prio(pi_task, prio);
-+}
-+
-+/*
-+ * rt_mutex_setprio - set the current priority of a task
-+ * @p: task to boost
-+ * @pi_task: donor task
-+ *
-+ * This function changes the 'effective' priority of a task. It does
-+ * not touch ->normal_prio like __setscheduler().
-+ *
-+ * Used by the rt_mutex code to implement priority inheritance
-+ * logic. Call site only calls if the priority of the task changed.
-+ */
-+void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
-+{
-+	int prio;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	/* XXX used to be waiter->prio, not waiter->task->prio */
-+	prio = __rt_effective_prio(pi_task, p->normal_prio);
-+
-+	/*
-+	 * If nothing changed; bail early.
-+	 */
-+	if (p->pi_top_task == pi_task && prio == p->prio)
-+		return;
-+
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Set under pi_lock && rq->lock, such that the value can be used under
-+	 * either lock.
-+	 *
-+	 * Note that there is loads of trickiness in making this pointer cache
-+	 * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
-+	 * ensure a task is de-boosted (pi_task is set to NULL) before the
-+	 * task is allowed to run again (and can exit). This ensures the pointer
-+	 * points to a blocked task -- which guarantees the task is present.
-+	 */
-+	p->pi_top_task = pi_task;
-+
-+	/*
-+	 * For FIFO/RR we only need to set prio, if that matches we're done.
-+	 */
-+	if (prio == p->prio)
-+		goto out_unlock;
-+
-+	/*
-+	 * Idle task boosting is a no-no in general. There is one
-+	 * exception, when PREEMPT_RT and NOHZ is active:
-+	 *
-+	 * The idle task calls get_next_timer_interrupt() and holds
-+	 * the timer wheel base->lock on the CPU and another CPU wants
-+	 * to access the timer (probably to cancel it). We can safely
-+	 * ignore the boosting request, as the idle CPU runs this code
-+	 * with interrupts disabled and will complete the lock
-+	 * protected section without being interrupted. So there is no
-+	 * real need to boost.
-+	 */
-+	if (unlikely(p == rq->idle)) {
-+		WARN_ON(p != rq->curr);
-+		WARN_ON(p->pi_blocked_on);
-+		goto out_unlock;
-+	}
-+
-+	trace_sched_pi_setprio(p, pi_task);
-+
-+	__setscheduler_prio(p, prio);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+
-+	__balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+
-+	preempt_enable();
-+}
-+#else
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	return prio;
-+}
-+#endif
-+
-+void set_user_nice(struct task_struct *p, long nice)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
-+		return;
-+	/*
-+	 * We have to be careful, if called from sys_setpriority(),
-+	 * the task might be in the middle of scheduling on another CPU.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	p->static_prio = NICE_TO_PRIO(nice);
-+	/*
-+	 * The RT priorities are set via sched_setscheduler(), but we still
-+	 * allow the 'normal' nice value to be set - but as expected
-+	 * it won't have any effect on scheduling until the task is
-+	 * again SCHED_NORMAL/SCHED_BATCH:
-+	 */
-+	if (task_has_rt_policy(p))
-+		goto out_unlock;
-+
-+	p->prio = effective_prio(p);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+EXPORT_SYMBOL(set_user_nice);
-+
-+/*
-+ * can_nice - check if a task can reduce its nice value
-+ * @p: task
-+ * @nice: nice value
-+ */
-+int can_nice(const struct task_struct *p, const int nice)
-+{
-+	/* Convert nice value [19,-20] to rlimit style value [1,40] */
-+	int nice_rlim = nice_to_rlimit(nice);
-+
-+	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) ||
-+		capable(CAP_SYS_NICE));
-+}
-+
-+#ifdef __ARCH_WANT_SYS_NICE
-+
-+/*
-+ * sys_nice - change the priority of the current process.
-+ * @increment: priority increment
-+ *
-+ * sys_setpriority is a more generic, but much slower function that
-+ * does similar things.
-+ */
-+SYSCALL_DEFINE1(nice, int, increment)
-+{
-+	long nice, retval;
-+
-+	/*
-+	 * Setpriority might change our priority at the same moment.
-+	 * We don't have to worry. Conceptually one call occurs first
-+	 * and we have a single winner.
-+	 */
-+
-+	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
-+	nice = task_nice(current) + increment;
-+
-+	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
-+	if (increment < 0 && !can_nice(current, nice))
-+		return -EPERM;
-+
-+	retval = security_task_setnice(current, nice);
-+	if (retval)
-+		return retval;
-+
-+	set_user_nice(current, nice);
-+	return 0;
-+}
-+
-+#endif
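-+
-+/*
-+ * Illustrative userspace counterpart (glibc wrapper around the syscall
-+ * above); nice(2) can legitimately return -1, hence the errno dance:
-+ *
-+ *	errno = 0;
-+ *	if (nice(5) == -1 && errno != 0)
-+ *		perror("nice");
-+ */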
-+
-+/**
-+ * task_prio - return the priority value of a given task.
-+ * @p: the task in question.
-+ *
-+ * Return: The priority value as seen by users in /proc.
-+ *
-+ * sched policy              return value    kernel prio     user prio/nice
-+ *
-+ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
-+ * (PDS)normal, batch, idle  [0 ... 39]      100             0/[-20 ... 19]
-+ * fifo, rr                  [-1 ... -100]   [99 ... 0]      [0 ... 99]
-+ */
-+int task_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
-+		task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+/**
-+ * idle_cpu - is a given CPU idle currently?
-+ * @cpu: the processor in question.
-+ *
-+ * Return: 1 if the CPU is currently idle. 0 otherwise.
-+ */
-+int idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (rq->curr != rq->idle)
-+		return 0;
-+
-+	if (rq->nr_running)
-+		return 0;
-+
-+#ifdef CONFIG_SMP
-+	if (rq->ttwu_pending)
-+		return 0;
-+#endif
-+
-+	return 1;
-+}
-+
-+/**
-+ * idle_task - return the idle task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * Return: The idle task for the cpu @cpu.
-+ */
-+struct task_struct *idle_task(int cpu)
-+{
-+	return cpu_rq(cpu)->idle;
-+}
-+
-+/**
-+ * find_process_by_pid - find a process with a matching PID value.
-+ * @pid: the pid in question.
-+ *
-+ * The task of @pid, if found. %NULL otherwise.
-+ */
-+static inline struct task_struct *find_process_by_pid(pid_t pid)
-+{
-+	return pid ? find_task_by_vpid(pid) : current;
-+}
-+
-+/*
-+ * sched_setparam() passes in -1 for its policy, to let the functions
-+ * it calls know not to change it.
-+ */
-+#define SETPARAM_POLICY -1
-+
-+static void __setscheduler_params(struct task_struct *p,
-+		const struct sched_attr *attr)
-+{
-+	int policy = attr->sched_policy;
-+
-+	if (policy == SETPARAM_POLICY)
-+		policy = p->policy;
-+
-+	p->policy = policy;
-+
-+	/*
-+	 * allow the normal nice value to be set, but it will not have any
-+	 * effect on scheduling until the task is again SCHED_NORMAL/
-+	 * SCHED_BATCH
-+	 */
-+	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
-+
-+	/*
-+	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
-+	 * !rt_policy. Always setting this ensures that things like
-+	 * getparam()/getattr() don't report silly values for !rt tasks.
-+	 */
-+	p->rt_priority = attr->sched_priority;
-+	p->normal_prio = normal_prio(p);
-+}
-+
-+/*
-+ * check the target process has a UID that matches the current process's
-+ */
-+static bool check_same_owner(struct task_struct *p)
-+{
-+	const struct cred *cred = current_cred(), *pcred;
-+	bool match;
-+
-+	rcu_read_lock();
-+	pcred = __task_cred(p);
-+	match = (uid_eq(cred->euid, pcred->euid) ||
-+		 uid_eq(cred->euid, pcred->uid));
-+	rcu_read_unlock();
-+	return match;
-+}
-+
-+static int __sched_setscheduler(struct task_struct *p,
-+				const struct sched_attr *attr,
-+				bool user, bool pi)
-+{
-+	const struct sched_attr dl_squash_attr = {
-+		.size		= sizeof(struct sched_attr),
-+		.sched_policy	= SCHED_FIFO,
-+		.sched_nice	= 0,
-+		.sched_priority = 99,
-+	};
-+	int oldpolicy = -1, policy = attr->sched_policy;
-+	int retval, newprio;
-+	struct callback_head *head;
-+	unsigned long flags;
-+	struct rq *rq;
-+	int reset_on_fork;
-+	raw_spinlock_t *lock;
-+
-+	/* The pi code expects interrupts enabled */
-+	BUG_ON(pi && in_interrupt());
-+
-+	/*
-+	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into SCHED_FIFO (see dl_squash_attr)
-+	 */
-+	if (unlikely(SCHED_DEADLINE == policy)) {
-+		attr = &dl_squash_attr;
-+		policy = attr->sched_policy;
-+	}
-+recheck:
-+	/* Double check policy once rq lock held */
-+	if (policy < 0) {
-+		reset_on_fork = p->sched_reset_on_fork;
-+		policy = oldpolicy = p->policy;
-+	} else {
-+		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
-+
-+		if (policy > SCHED_IDLE)
-+			return -EINVAL;
-+	}
-+
-+	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
-+		return -EINVAL;
-+
-+	/*
-+	 * Valid priorities for SCHED_FIFO and SCHED_RR are
-+	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
-+	 * SCHED_BATCH and SCHED_IDLE is 0.
-+	 */
-+	if (attr->sched_priority < 0 ||
-+	    attr->sched_priority > MAX_RT_PRIO - 1)
-+		return -EINVAL;
-+	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
-+	    (attr->sched_priority != 0))
-+		return -EINVAL;
-+
-+	/*
-+	 * Allow unprivileged RT tasks to decrease priority:
-+	 */
-+	if (user && !capable(CAP_SYS_NICE)) {
-+		if (SCHED_FIFO == policy || SCHED_RR == policy) {
-+			unsigned long rlim_rtprio =
-+					task_rlimit(p, RLIMIT_RTPRIO);
-+
-+			/* Can't set/change the rt policy */
-+			if (policy != p->policy && !rlim_rtprio)
-+				return -EPERM;
-+
-+			/* Can't increase priority */
-+			if (attr->sched_priority > p->rt_priority &&
-+			    attr->sched_priority > rlim_rtprio)
-+				return -EPERM;
-+		}
-+
-+		/* Can't change other user's priorities */
-+		if (!check_same_owner(p))
-+			return -EPERM;
-+
-+		/* Normal users shall not reset the sched_reset_on_fork flag */
-+		if (p->sched_reset_on_fork && !reset_on_fork)
-+			return -EPERM;
-+	}
-+
-+	if (user) {
-+		retval = security_task_setscheduler(p);
-+		if (retval)
-+			return retval;
-+	}
-+
-+	if (pi)
-+		cpuset_read_lock();
-+
-+	/*
-+	 * Make sure no PI-waiters arrive (or leave) while we are
-+	 * changing the priority of the task:
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+
-+	/*
-+	 * To be able to change p->policy safely, task_access_lock()
-+	 * must be called.
-+	 * If task_access_lock() is used here: for a task p which is
-+	 * not running, reading rq->stop is racy but acceptable, as
-+	 * ->stop doesn't change much. An enhancement could be made
-+	 * to read rq->stop safely.
-+	 */
-+	rq = __task_access_lock(p, &lock);
-+
-+	/*
-+	 * Changing the policy of the stop thread is a very bad idea
-+	 */
-+	if (p == rq->stop) {
-+		retval = -EINVAL;
-+		goto unlock;
-+	}
-+
-+	/*
-+	 * If not changing anything there's no need to proceed further:
-+	 */
-+	if (unlikely(policy == p->policy)) {
-+		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
-+			goto change;
-+		if (!rt_policy(policy) &&
-+		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
-+			goto change;
-+
-+		p->sched_reset_on_fork = reset_on_fork;
-+		retval = 0;
-+		goto unlock;
-+	}
-+change:
-+
-+	/* Re-check policy now with rq lock held */
-+	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
-+		policy = oldpolicy = -1;
-+		__task_access_unlock(p, lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+		if (pi)
-+			cpuset_read_unlock();
-+		goto recheck;
-+	}
-+
-+	p->sched_reset_on_fork = reset_on_fork;
-+
-+	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
-+	if (pi) {
-+		/*
-+		 * Take priority boosted tasks into account. If the new
-+		 * effective priority is unchanged, we just store the new
-+		 * normal parameters and do not touch the scheduler class and
-+		 * the runqueue. This will be done when the task deboosts
-+		 * itself.
-+		 */
-+		newprio = rt_effective_prio(p, newprio);
-+	}
-+
-+	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
-+		__setscheduler_params(p, attr);
-+		__setscheduler_prio(p, newprio);
-+	}
-+
-+	check_task_changed(p, rq);
-+
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+	head = splice_balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	if (pi) {
-+		cpuset_read_unlock();
-+		rt_mutex_adjust_pi(p);
-+	}
-+
-+	/* Run balance callbacks after we've adjusted the PI chain: */
-+	balance_callbacks(rq, head);
-+	preempt_enable();
-+
-+	return 0;
-+
-+unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+	if (pi)
-+		cpuset_read_unlock();
-+	return retval;
-+}
-+
-+static int _sched_setscheduler(struct task_struct *p, int policy,
-+			       const struct sched_param *param, bool check)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy   = policy,
-+		.sched_priority = param->sched_priority,
-+		.sched_nice     = PRIO_TO_NICE(p->static_prio),
-+	};
-+
-+	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
-+	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
-+		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		policy &= ~SCHED_RESET_ON_FORK;
-+		attr.sched_policy = policy;
-+	}
-+
-+	return __sched_setscheduler(p, &attr, check, true);
-+}
-+
-+/**
-+ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Use sched_set_fifo(), read its comment.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ *
-+ * NOTE that the task may already be dead.
-+ */
-+int sched_setscheduler(struct task_struct *p, int policy,
-+		       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, true);
-+}
-+
-+int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, true, true);
-+}
-+
-+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, false, true);
-+}
-+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
-+
-+/**
-+ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Just like sched_setscheduler, only don't bother checking if the
-+ * current context has permission.  For example, this is needed in
-+ * stop_machine(): we create temporary high priority worker threads,
-+ * but our caller might not have that capability.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+int sched_setscheduler_nocheck(struct task_struct *p, int policy,
-+			       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, false);
-+}
-+
-+/*
-+ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
-+ * incapable of resource management, which is the one thing an OS really should
-+ * be doing.
-+ *
-+ * This is of course the reason it is limited to privileged users only.
-+ *
-+ * Worse still; it is fundamentally impossible to compose static priority
-+ * workloads. You cannot take two correctly working static prio workloads
-+ * and smash them together and still expect them to work.
-+ *
-+ * For this reason 'all' FIFO tasks the kernel creates are basically at:
-+ *
-+ *   MAX_RT_PRIO / 2
-+ *
-+ * The administrator _MUST_ configure the system, the kernel simply doesn't
-+ * know enough information to make a sensible choice.
-+ */
-+void sched_set_fifo(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo);
-+
-+/*
-+ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
-+ */
-+void sched_set_fifo_low(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = 1 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo_low);
-+
-+void sched_set_normal(struct task_struct *p, int nice)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+		.sched_nice = nice,
-+	};
-+	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_normal);
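-+
-+/*
-+ * Illustrative in-kernel usage ('worker' is a hypothetical kthread): give a
-+ * helper thread modest RT priority, then drop it back to SCHED_NORMAL:
-+ *
-+ *	sched_set_fifo_low(worker);
-+ *	...
-+ *	sched_set_normal(worker, 0);
-+ */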
-+
-+static int
-+do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
-+{
-+	struct sched_param lparam;
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
-+		return -EFAULT;
-+
-+	rcu_read_lock();
-+	retval = -ESRCH;
-+	p = find_process_by_pid(pid);
-+	if (likely(p))
-+		get_task_struct(p);
-+	rcu_read_unlock();
-+
-+	if (likely(p)) {
-+		retval = sched_setscheduler(p, policy, &lparam);
-+		put_task_struct(p);
-+	}
-+
-+	return retval;
-+}
-+
-+/*
-+ * Mimics kernel/events/core.c perf_copy_attr().
-+ */
-+static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
-+{
-+	u32 size;
-+	int ret;
-+
-+	/* Zero the full structure, so that a short copy will be nice: */
-+	memset(attr, 0, sizeof(*attr));
-+
-+	ret = get_user(size, &uattr->size);
-+	if (ret)
-+		return ret;
-+
-+	/* ABI compatibility quirk: */
-+	if (!size)
-+		size = SCHED_ATTR_SIZE_VER0;
-+
-+	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
-+		goto err_size;
-+
-+	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
-+	if (ret) {
-+		if (ret == -E2BIG)
-+			goto err_size;
-+		return ret;
-+	}
-+
-+	/*
-+	 * XXX: Do we want to be lenient like existing syscalls; or do we want
-+	 * to be strict and return an error on out-of-bounds values?
-+	 */
-+	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
-+
-+	/* sched/core.c uses zero here but we already know ret is zero */
-+	return 0;
-+
-+err_size:
-+	put_user(sizeof(*attr), &uattr->size);
-+	return -E2BIG;
-+}
-+
-+/**
-+ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
-+ * @pid: the pid in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
-+{
-+	if (policy < 0)
-+		return -EINVAL;
-+
-+	return do_sched_setscheduler(pid, policy, param);
-+}
-+
-+/**
-+ * sys_sched_setparam - set/change the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
-+}
-+
-+/**
-+ * sys_sched_setattr - same as above, but with extended sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
-+			       unsigned int, flags)
-+{
-+	struct sched_attr attr;
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || flags)
-+		return -EINVAL;
-+
-+	retval = sched_copy_attr(uattr, &attr);
-+	if (retval)
-+		return retval;
-+
-+	if ((int)attr.sched_policy < 0)
-+		return -EINVAL;
-+
-+	rcu_read_lock();
-+	retval = -ESRCH;
-+	p = find_process_by_pid(pid);
-+	if (likely(p))
-+		get_task_struct(p);
-+	rcu_read_unlock();
-+
-+	if (likely(p)) {
-+		retval = sched_setattr(p, &attr);
-+		put_task_struct(p);
-+	}
-+
-+	return retval;
-+}
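-+
-+/*
-+ * Illustrative userspace caller; glibc provides no wrapper, so the raw
-+ * syscall is used (struct sched_attr layout per
-+ * include/uapi/linux/sched/types.h):
-+ *
-+ *	struct sched_attr attr = {
-+ *		.size		= sizeof(attr),
-+ *		.sched_policy	= SCHED_FIFO,
-+ *		.sched_priority	= 10,
-+ *	};
-+ *
-+ *	if (syscall(SYS_sched_setattr, 0, &attr, 0))
-+ *		perror("sched_setattr");
-+ */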
-+
-+/**
-+ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
-+ * @pid: the pid in question.
-+ *
-+ * Return: On success, the policy of the thread. Otherwise, a negative error
-+ * code.
-+ */
-+SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
-+{
-+	struct task_struct *p;
-+	int retval = -EINVAL;
-+
-+	if (pid < 0)
-+		goto out_nounlock;
-+
-+	retval = -ESRCH;
-+	rcu_read_lock();
-+	p = find_process_by_pid(pid);
-+	if (p) {
-+		retval = security_task_getscheduler(p);
-+		if (!retval)
-+			retval = p->policy;
-+	}
-+	rcu_read_unlock();
-+
-+out_nounlock:
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getparam - get the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the RT priority.
-+ *
-+ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
-+ * code.
-+ */
-+SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	struct sched_param lp = { .sched_priority = 0 };
-+	struct task_struct *p;
-+	int retval = -EINVAL;
-+
-+	if (!param || pid < 0)
-+		goto out_nounlock;
-+
-+	rcu_read_lock();
-+	p = find_process_by_pid(pid);
-+	retval = -ESRCH;
-+	if (!p)
-+		goto out_unlock;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		goto out_unlock;
-+
-+	if (task_has_rt_policy(p))
-+		lp.sched_priority = p->rt_priority;
-+	rcu_read_unlock();
-+
-+	/*
-+	 * This one might sleep, we cannot do it with a spinlock held ...
-+	 */
-+	retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
-+
-+out_nounlock:
-+	return retval;
-+
-+out_unlock:
-+	rcu_read_unlock();
-+	return retval;
-+}
-+
-+/*
-+ * Copy the kernel-sized attribute structure (which might be larger
-+ * than what user-space knows about) to user-space.
-+ *
-+ * Note that all cases are valid: user-space buffer can be larger or
-+ * smaller than the kernel-space buffer. The usual case is that both
-+ * have the same size.
-+ */
-+static int
-+sched_attr_copy_to_user(struct sched_attr __user *uattr,
-+			struct sched_attr *kattr,
-+			unsigned int usize)
-+{
-+	unsigned int ksize = sizeof(*kattr);
-+
-+	if (!access_ok(uattr, usize))
-+		return -EFAULT;
-+
-+	/*
-+	 * sched_getattr() ABI forwards and backwards compatibility:
-+	 *
-+	 * If usize == ksize then we just copy everything to user-space and all is good.
-+	 *
-+	 * If usize < ksize then we only copy as much as user-space has space for,
-+	 * this keeps ABI compatibility as well. We skip the rest.
-+	 *
-+	 * If usize > ksize then user-space is using a newer version of the ABI,
-+	 * parts of which the kernel doesn't know about. Just ignore it - tooling can
-+	 * detect the kernel's knowledge of attributes from the attr->size value
-+	 * which is set to ksize in this case.
-+	 */
-+	kattr->size = min(usize, ksize);
-+
-+	if (copy_to_user(uattr, kattr, kattr->size))
-+		return -EFAULT;
-+
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @usize: sizeof(attr) for forward/backward compatibility.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
-+		unsigned int, usize, unsigned int, flags)
-+{
-+	struct sched_attr kattr = { };
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
-+	    usize < SCHED_ATTR_SIZE_VER0 || flags)
-+		return -EINVAL;
-+
-+	rcu_read_lock();
-+	p = find_process_by_pid(pid);
-+	retval = -ESRCH;
-+	if (!p)
-+		goto out_unlock;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		goto out_unlock;
-+
-+	kattr.sched_policy = p->policy;
-+	if (p->sched_reset_on_fork)
-+		kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+	if (task_has_rt_policy(p))
-+		kattr.sched_priority = p->rt_priority;
-+	else
-+		kattr.sched_nice = task_nice(p);
-+
-+#ifdef CONFIG_UCLAMP_TASK
-+	kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
-+	kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
-+#endif
-+
-+	rcu_read_unlock();
-+
-+	return sched_attr_copy_to_user(uattr, &kattr, usize);
-+
-+out_unlock:
-+	rcu_read_unlock();
-+	return retval;
-+}
-+
-+long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
-+{
-+	cpumask_var_t cpus_allowed, new_mask;
-+	struct task_struct *p;
-+	int retval;
-+
-+	rcu_read_lock();
-+
-+	p = find_process_by_pid(pid);
-+	if (!p) {
-+		rcu_read_unlock();
-+		return -ESRCH;
-+	}
-+
-+	/* Prevent p going away */
-+	get_task_struct(p);
-+	rcu_read_unlock();
-+
-+	if (p->flags & PF_NO_SETAFFINITY) {
-+		retval = -EINVAL;
-+		goto out_put_task;
-+	}
-+	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-+		retval = -ENOMEM;
-+		goto out_put_task;
-+	}
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
-+		retval = -ENOMEM;
-+		goto out_free_cpus_allowed;
-+	}
-+	retval = -EPERM;
-+	if (!check_same_owner(p)) {
-+		rcu_read_lock();
-+		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
-+			rcu_read_unlock();
-+			goto out_free_new_mask;
-+		}
-+		rcu_read_unlock();
-+	}
-+
-+	retval = security_task_setscheduler(p);
-+	if (retval)
-+		goto out_free_new_mask;
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	cpumask_and(new_mask, in_mask, cpus_allowed);
-+
-+again:
-+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
-+
-+	if (!retval) {
-+		cpuset_cpus_allowed(p, cpus_allowed);
-+		if (!cpumask_subset(new_mask, cpus_allowed)) {
-+			/*
-+			 * We must have raced with a concurrent cpuset
-+			 * update. Just reset the cpus_allowed to the
-+			 * cpuset's cpus_allowed
-+			 */
-+			cpumask_copy(new_mask, cpus_allowed);
-+			goto again;
-+		}
-+	}
-+out_free_new_mask:
-+	free_cpumask_var(new_mask);
-+out_free_cpus_allowed:
-+	free_cpumask_var(cpus_allowed);
-+out_put_task:
-+	put_task_struct(p);
-+	return retval;
-+}
-+
-+static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
-+			     struct cpumask *new_mask)
-+{
-+	if (len < cpumask_size())
-+		cpumask_clear(new_mask);
-+	else if (len > cpumask_size())
-+		len = cpumask_size();
-+
-+	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
-+}
-+
-+/**
-+ * sys_sched_setaffinity - set the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to the new CPU mask
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	cpumask_var_t new_mask;
-+	int retval;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
-+	if (retval == 0)
-+		retval = sched_setaffinity(pid, new_mask);
-+	free_cpumask_var(new_mask);
-+	return retval;
-+}
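-+
-+/*
-+ * Illustrative userspace counterpart (glibc): pin the calling thread to
-+ * CPU 2:
-+ *
-+ *	cpu_set_t set;
-+ *
-+ *	CPU_ZERO(&set);
-+ *	CPU_SET(2, &set);
-+ *	if (sched_setaffinity(0, sizeof(set), &set))
-+ *		perror("sched_setaffinity");
-+ */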
-+
-+long sched_getaffinity(pid_t pid, cpumask_t *mask)
-+{
-+	struct task_struct *p;
-+	raw_spinlock_t *lock;
-+	unsigned long flags;
-+	int retval;
-+
-+	rcu_read_lock();
-+
-+	retval = -ESRCH;
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		goto out_unlock;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		goto out_unlock;
-+
-+	task_access_lock_irqsave(p, &lock, &flags);
-+	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
-+	task_access_unlock_irqrestore(p, lock, &flags);
-+
-+out_unlock:
-+	rcu_read_unlock();
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getaffinity - get the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to hold the current CPU mask
-+ *
-+ * Return: size of CPU mask copied to user_mask_ptr on success. An
-+ * error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	int ret;
-+	cpumask_var_t mask;
-+
-+	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
-+		return -EINVAL;
-+	if (len & (sizeof(unsigned long)-1))
-+		return -EINVAL;
-+
-+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	ret = sched_getaffinity(pid, mask);
-+	if (ret == 0) {
-+		unsigned int retlen = min_t(size_t, len, cpumask_size());
-+
-+		if (copy_to_user(user_mask_ptr, mask, retlen))
-+			ret = -EFAULT;
-+		else
-+			ret = retlen;
-+	}
-+	free_cpumask_var(mask);
-+
-+	return ret;
-+}
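-+
-+/*
-+ * Illustrative userspace counterpart: read the mask back and count the
-+ * CPUs the caller may run on:
-+ *
-+ *	cpu_set_t set;
-+ *
-+ *	if (sched_getaffinity(0, sizeof(set), &set) == 0)
-+ *		printf("%d usable CPUs\n", CPU_COUNT(&set));
-+ */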
-+
-+static void do_sched_yield(void)
-+{
-+	struct rq *rq;
-+	struct rq_flags rf;
-+
-+	if (!sched_yield_type)
-+		return;
-+
-+	rq = this_rq_lock_irq(&rf);
-+
-+	schedstat_inc(rq->yld_count);
-+
-+	if (1 == sched_yield_type) {
-+		if (!rt_task(current))
-+			do_sched_yield_type_1(current, rq);
-+	} else if (2 == sched_yield_type) {
-+		if (rq->nr_running > 1)
-+			rq->skip = current;
-+	}
-+
-+	preempt_disable();
-+	raw_spin_unlock_irq(&rq->lock);
-+	sched_preempt_enable_no_resched();
-+
-+	schedule();
-+}
-+
-+/**
-+ * sys_sched_yield - yield the current processor to other threads.
-+ *
-+ * This function yields the current CPU to other tasks. If there are no
-+ * other threads running on this CPU then this function will return.
-+ *
-+ * Return: 0.
-+ */
-+SYSCALL_DEFINE0(sched_yield)
-+{
-+	do_sched_yield();
-+	return 0;
-+}
-+
-+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
-+int __sched __cond_resched(void)
-+{
-+	if (should_resched(0)) {
-+		preempt_schedule_common();
-+		return 1;
-+	}
-+#ifndef CONFIG_PREEMPT_RCU
-+	rcu_all_qs();
-+#endif
-+	return 0;
-+}
-+EXPORT_SYMBOL(__cond_resched);
-+#endif
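-+
-+/*
-+ * Illustrative ('process' and 'item' are hypothetical): the canonical
-+ * cond_resched() pattern that keeps latency bounded in long kernel loops
-+ * on non-preemptible kernels:
-+ *
-+ *	for (i = 0; i < nr_items; i++) {
-+ *		process(item[i]);
-+ *		cond_resched();
-+ *	}
-+ */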
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(cond_resched);
-+
-+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(might_resched);
-+#endif
-+
-+/*
-+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
-+ * call schedule, and on return reacquire the lock.
-+ *
-+ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
-+ * operations here to prevent schedule() from being called twice (once via
-+ * spin_unlock(), once by hand).
-+ */
-+int __cond_resched_lock(spinlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held(lock);
-+
-+	if (spin_needbreak(lock) || resched) {
-+		spin_unlock(lock);
-+		if (resched)
-+			preempt_schedule_common();
-+		else
-+			cpu_relax();
-+		ret = 1;
-+		spin_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_lock);
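-+
-+/*
-+ * Illustrative ('foo_lock' and helpers hypothetical): breaking out of a long
-+ * scan under a contended spinlock; cond_resched_lock() may drop and retake
-+ * the lock, so any state derived under it must be revalidated afterwards:
-+ *
-+ *	spin_lock(&foo_lock);
-+ *	while (more_work()) {
-+ *		do_one_step();
-+ *		cond_resched_lock(&foo_lock);
-+ *	}
-+ *	spin_unlock(&foo_lock);
-+ */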
-+
-+int __cond_resched_rwlock_read(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_read(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		read_unlock(lock);
-+		if (resched)
-+			preempt_schedule_common();
-+		else
-+			cpu_relax();
-+		ret = 1;
-+		read_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_read);
-+
-+int __cond_resched_rwlock_write(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_write(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		write_unlock(lock);
-+		if (resched)
-+			preempt_schedule_common();
-+		else
-+			cpu_relax();
-+		ret = 1;
-+		write_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_write);
-+
-+/**
-+ * yield - yield the current processor to other threads.
-+ *
-+ * Do not ever use this function, there's a 99% chance you're doing it wrong.
-+ *
-+ * The scheduler is at all times free to pick the calling task as the most
-+ * eligible task to run, if removing the yield() call from your code breaks
-+ * it, it's already broken.
-+ *
-+ * Typical broken usage is:
-+ *
-+ * while (!event)
-+ * 	yield();
-+ *
-+ * where one assumes that yield() will let 'the other' process run that will
-+ * make event true. If the current task is a SCHED_FIFO task that will never
-+ * happen. Never use yield() as a progress guarantee!!
-+ *
-+ * If you want to use yield() to wait for something, use wait_event().
-+ * If you want to use yield() to be 'nice' for others, use cond_resched().
-+ * If you still want to use yield(), do not!
-+ */
-+void __sched yield(void)
-+{
-+	set_current_state(TASK_RUNNING);
-+	do_sched_yield();
-+}
-+EXPORT_SYMBOL(yield);
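-+
-+/*
-+ * Illustrative replacements for the broken busy-yield loop shown above:
-+ *
-+ *	wait_event(wq, event);
-+ *
-+ * sleeps until 'event' becomes true; a task that merely wants to be polite
-+ * to other runnable tasks can instead call:
-+ *
-+ *	cond_resched();
-+ */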
-+
-+/**
-+ * yield_to - yield the current processor to another thread in
-+ * your thread group, or accelerate that thread toward the
-+ * processor it's on.
-+ * @p: target task
-+ * @preempt: whether task preemption is allowed or not
-+ *
-+ * It's the caller's job to ensure that the target task struct
-+ * can't go away on us before we can do any checks.
-+ *
-+ * In Alt schedule FW, yield_to is not supported.
-+ *
-+ * Return:
-+ *	true (>0) if we indeed boosted the target task.
-+ *	false (0) if we failed to boost the target.
-+ *	-ESRCH if there's no task to yield to.
-+ */
-+int __sched yield_to(struct task_struct *p, bool preempt)
-+{
-+	return 0;
-+}
-+EXPORT_SYMBOL_GPL(yield_to);
-+
-+int io_schedule_prepare(void)
-+{
-+	int old_iowait = current->in_iowait;
-+
-+	current->in_iowait = 1;
-+	blk_schedule_flush_plug(current);
-+
-+	return old_iowait;
-+}
-+
-+void io_schedule_finish(int token)
-+{
-+	current->in_iowait = token;
-+}
-+
-+/*
-+ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
-+ * that process accounting knows that this is a task in IO wait state.
-+ *
-+ * But don't do that if it is a deliberate, throttling IO wait (this task
-+ * has set its backing_dev_info: the queue against which it should throttle)
-+ */
-+
-+long __sched io_schedule_timeout(long timeout)
-+{
-+	int token;
-+	long ret;
-+
-+	token = io_schedule_prepare();
-+	ret = schedule_timeout(timeout);
-+	io_schedule_finish(token);
-+
-+	return ret;
-+}
-+EXPORT_SYMBOL(io_schedule_timeout);
-+
-+void __sched io_schedule(void)
-+{
-+	int token;
-+
-+	token = io_schedule_prepare();
-+	schedule();
-+	io_schedule_finish(token);
-+}
-+EXPORT_SYMBOL(io_schedule);
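-+
-+/*
-+ * Illustrative: sleeping with iowait accounting, e.g. waiting up to 100ms
-+ * for I/O; as with any schedule_timeout()-style helper, the task state must
-+ * be set first:
-+ *
-+ *	set_current_state(TASK_UNINTERRUPTIBLE);
-+ *	remaining = io_schedule_timeout(HZ / 10);
-+ */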
-+
-+/**
-+ * sys_sched_get_priority_max - return maximum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the maximum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = MAX_RT_PRIO - 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
-+/**
-+ * sys_sched_get_priority_min - return minimum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the minimum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
-+static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	alt_sched_debug();
-+
-+	if (pid < 0)
-+		return -EINVAL;
-+
-+	retval = -ESRCH;
-+	rcu_read_lock();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		goto out_unlock;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		goto out_unlock;
-+	rcu_read_unlock();
-+
-+	*t = ns_to_timespec64(sched_timeslice_ns);
-+	return 0;
-+
-+out_unlock:
-+	rcu_read_unlock();
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_rr_get_interval - return the default timeslice of a process.
-+ * @pid: pid of the process.
-+ * @interval: userspace pointer to the timeslice value.
-+ *
-+ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
-+ * an error code.
-+ */
-+SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
-+		struct __kernel_timespec __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_timespec64(&t, interval);
-+
-+	return retval;
-+}
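-+
-+/*
-+ * Illustrative userspace query (glibc) of the timeslice reported above;
-+ * under Alt schedule FW this is the fixed sched_timeslice_ns for every
-+ * policy:
-+ *
-+ *	struct timespec ts;
-+ *
-+ *	if (sched_rr_get_interval(0, &ts) == 0)
-+ *		printf("%ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
-+ */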
-+
-+#ifdef CONFIG_COMPAT_32BIT_TIME
-+SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
-+		struct old_timespec32 __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_old_timespec32(&t, interval);
-+	return retval;
-+}
-+#endif
-+
-+void sched_show_task(struct task_struct *p)
-+{
-+	unsigned long free = 0;
-+	int ppid;
-+
-+	if (!try_get_task_stack(p))
-+		return;
-+
-+	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
-+
-+	if (task_is_running(p))
-+		pr_cont("  running task    ");
-+#ifdef CONFIG_DEBUG_STACK_USAGE
-+	free = stack_not_used(p);
-+#endif
-+	ppid = 0;
-+	rcu_read_lock();
-+	if (pid_alive(p))
-+		ppid = task_pid_nr(rcu_dereference(p->real_parent));
-+	rcu_read_unlock();
-+	pr_cont(" stack:%5lu pid:%5d ppid:%6d flags:0x%08lx\n",
-+		free, task_pid_nr(p), ppid,
-+		(unsigned long)task_thread_info(p)->flags);
-+
-+	print_worker_info(KERN_INFO, p);
-+	print_stop_info(KERN_INFO, p);
-+	show_stack(p, NULL, KERN_INFO);
-+	put_task_stack(p);
-+}
-+EXPORT_SYMBOL_GPL(sched_show_task);
-+
-+static inline bool
-+state_filter_match(unsigned long state_filter, struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/* no filter, everything matches */
-+	if (!state_filter)
-+		return true;
-+
-+	/* filter, but doesn't match */
-+	if (!(state & state_filter))
-+		return false;
-+
-+	/*
-+	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
-+	 * TASK_KILLABLE).
-+	 */
-+	if (state_filter == TASK_UNINTERRUPTIBLE && state == TASK_IDLE)
-+		return false;
-+
-+	return true;
-+}
-+
-+
-+void show_state_filter(unsigned int state_filter)
-+{
-+	struct task_struct *g, *p;
-+
-+	rcu_read_lock();
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * reset the NMI-timeout, listing all tasks on a slow
-+		 * console might take a lot of time:
-+		 * Also, reset softlockup watchdogs on all CPUs, because
-+		 * another CPU might be blocked waiting for us to process
-+		 * an IPI.
-+		 */
-+		touch_nmi_watchdog();
-+		touch_all_softlockup_watchdogs();
-+		if (state_filter_match(state_filter, p))
-+			sched_show_task(p);
-+	}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	/* TODO: Alt schedule FW should support this
-+	if (!state_filter)
-+		sysrq_sched_debug_show();
-+	*/
-+#endif
-+	rcu_read_unlock();
-+	/*
-+	 * Only show locks if all tasks are dumped:
-+	 */
-+	if (!state_filter)
-+		debug_show_all_locks();
-+}
-+
-+void dump_cpu_task(int cpu)
-+{
-+	pr_info("Task dump for CPU %d:\n", cpu);
-+	sched_show_task(cpu_curr(cpu));
-+}
-+
-+/**
-+ * init_idle - set up an idle thread for a given CPU
-+ * @idle: task in question
-+ * @cpu: CPU the idle task belongs to
-+ *
-+ * NOTE: this function does not set the idle thread's NEED_RESCHED
-+ * flag, to make booting more robust.
-+ */
-+void __init init_idle(struct task_struct *idle, int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	__sched_fork(0, idle);
-+
-+	/*
-+	 * The idle task doesn't need the kthread struct to function, but it
-+	 * is dressed up as a per-CPU kthread and thus needs to play the part
-+	 * if we want to avoid special-casing it in code that deals with per-CPU
-+	 * kthreads.
-+	 */
-+	set_kthread_struct(idle);
-+
-+	raw_spin_lock_irqsave(&idle->pi_lock, flags);
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	idle->last_ran = rq->clock_task;
-+	idle->__state = TASK_RUNNING;
-+	/*
-+	 * PF_KTHREAD should already be set at this point; regardless, make it
-+	 * look like a proper per-CPU kthread.
-+	 */
-+	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
-+	kthread_set_per_cpu(idle, cpu);
-+
-+	sched_queue_init_idle(&rq->queue, idle);
-+
-+	scs_task_reset(idle);
-+	kasan_unpoison_task_stack(idle);
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * It's possible that init_idle() gets called multiple times on a task,
-+	 * in that case do_set_cpus_allowed() will not do the right thing.
-+	 *
-+	 * And since this is boot we can forgo the serialisation.
-+	 */
-+	set_cpus_allowed_common(idle, cpumask_of(cpu));
-+#endif
-+
-+	/* Silence PROVE_RCU */
-+	rcu_read_lock();
-+	__set_task_cpu(idle, cpu);
-+	rcu_read_unlock();
-+
-+	rq->idle = idle;
-+	rcu_assign_pointer(rq->curr, idle);
-+	idle->on_cpu = 1;
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
-+
-+	/* Set the preempt count _outside_ the spinlocks! */
-+	init_idle_preempt_count(idle, cpu);
-+
-+	ftrace_graph_init_idle_task(idle, cpu);
-+	vtime_init_idle(idle, cpu);
-+#ifdef CONFIG_SMP
-+	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
-+			      const struct cpumask __maybe_unused *trial)
-+{
-+	return 1;
-+}
-+
-+int task_can_attach(struct task_struct *p,
-+		    const struct cpumask *cs_cpus_allowed)
-+{
-+	int ret = 0;
-+
-+	/*
-+	 * Kthreads which disallow setaffinity shouldn't be moved
-+	 * to a new cpuset; we don't want to change their CPU
-+	 * affinity and isolating such threads by their set of
-+	 * allowed nodes is unnecessary.  Thus, cpusets are not
-+	 * applicable for such threads.  This prevents checking for
-+	 * success of set_cpus_allowed_ptr() on all attached tasks
-+	 * before cpus_mask may be changed.
-+	 */
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		ret = -EINVAL;
-+
-+	return ret;
-+}
-+
-+bool sched_smp_initialized __read_mostly;
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+/*
-+ * Ensures that the idle task is using init_mm right before its CPU goes
-+ * offline.
-+ */
-+void idle_task_exit(void)
-+{
-+	struct mm_struct *mm = current->active_mm;
-+
-+	BUG_ON(current != this_rq()->idle);
-+
-+	if (mm != &init_mm) {
-+		switch_mm(mm, &init_mm, current);
-+		finish_arch_post_lock_switch();
-+	}
-+
-+	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
-+}
-+
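-+/*
-+ * Stopper callback used by balance_push(): migrate the task passed
-+ * in @arg off this dying CPU to a fallback runqueue, then drop the
-+ * reference taken before the stopper work was queued.
-+ */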
-+static int __balance_push_cpu_stop(void *arg)
-+{
-+	struct task_struct *p = arg;
-+	struct rq *rq = this_rq();
-+	struct rq_flags rf;
-+	int cpu;
-+
-+	raw_spin_lock_irq(&p->pi_lock);
-+	rq_lock(rq, &rf);
-+
-+	update_rq_clock(rq);
-+
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		cpu = select_fallback_rq(rq->cpu, p);
-+		rq = __migrate_task(rq, p, cpu);
-+	}
-+
-+	rq_unlock(rq, &rf);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	put_task_struct(p);
-+
-+	return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
-+
-+/*
-+ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
-+ * effective when the hotplug motion is down.
-+ */
-+static void balance_push(struct rq *rq)
-+{
-+	struct task_struct *push_task = rq->curr;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*
-+	 * Ensure the thing is persistent until balance_push_set(.on = false);
-+	 */
-+	rq->balance_callback = &balance_push_callback;
-+
-+	/*
-+	 * Only active while going offline and when invoked on the outgoing
-+	 * CPU.
-+	 */
-+	if (!cpu_dying(rq->cpu) || rq != this_rq())
-+		return;
-+
-+	/*
-+	 * Both the cpu-hotplug and stop task are in this case and are
-+	 * required to complete the hotplug process.
-+	 */
-+	if (kthread_is_per_cpu(push_task) ||
-+	    is_migration_disabled(push_task)) {
-+
-+		/*
-+		 * If this is the idle task on the outgoing CPU try to wake
-+		 * up the hotplug control thread which might wait for the
-+		 * last task to vanish. The rcuwait_active() check is
-+		 * accurate here because the waiter is pinned on this CPU
-+		 * and can't obviously be running in parallel.
-+		 *
-+		 * On RT kernels this also has to check whether there are
-+		 * pinned and scheduled out tasks on the runqueue. They
-+		 * need to leave the migrate disabled section first.
-+		 */
-+		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
-+		    rcuwait_active(&rq->hotplug_wait)) {
-+			raw_spin_unlock(&rq->lock);
-+			rcuwait_wake_up(&rq->hotplug_wait);
-+			raw_spin_lock(&rq->lock);
-+		}
-+		return;
-+	}
-+
-+	get_task_struct(push_task);
-+	/*
-+	 * Temporarily drop rq->lock such that we can wake-up the stop task.
-+	 * Both preemption and IRQs are still disabled.
-+	 */
-+	raw_spin_unlock(&rq->lock);
-+	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
-+			    this_cpu_ptr(&push_work));
-+	/*
-+	 * At this point need_resched() is true and we'll take the loop in
-+	 * schedule(). The next pick is obviously going to be the stop task
-+	 * which kthread_is_per_cpu() and will push this task away.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct rq_flags rf;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	if (on) {
-+		WARN_ON_ONCE(rq->balance_callback);
-+		rq->balance_callback = &balance_push_callback;
-+	} else if (rq->balance_callback == &balance_push_callback) {
-+		rq->balance_callback = NULL;
-+	}
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Invoked from a CPU's hotplug control thread after the CPU has been marked
-+ * inactive. All tasks which are not per CPU kernel threads are either
-+ * pushed off this CPU now via balance_push() or placed on a different CPU
-+ * during wakeup. Wait until the CPU is quiescent.
-+ */
-+static void balance_hotplug_wait(void)
-+{
-+	struct rq *rq = this_rq();
-+
-+	rcuwait_wait_event(&rq->hotplug_wait,
-+			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
-+			   TASK_UNINTERRUPTIBLE);
-+}
-+
-+#else
-+
-+static void balance_push(struct rq *rq)
-+{
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+}
-+
-+static inline void balance_hotplug_wait(void)
-+{
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+static void set_rq_offline(struct rq *rq)
-+{
-+	if (rq->online)
-+		rq->online = false;
-+}
-+
-+static void set_rq_online(struct rq *rq)
-+{
-+	if (!rq->online)
-+		rq->online = true;
-+}
-+
-+/*
-+ * used to mark begin/end of suspend/resume:
-+ */
-+static int num_cpus_frozen;
-+
-+/*
-+ * Update cpusets according to cpu_active mask.  If cpusets are
-+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
-+ * around partition_sched_domains().
-+ *
-+ * If we come here as part of a suspend/resume, don't touch cpusets because we
-+ * want to restore it back to its original state upon resume anyway.
-+ */
-+static void cpuset_cpu_active(void)
-+{
-+	if (cpuhp_tasks_frozen) {
-+		/*
-+		 * num_cpus_frozen tracks how many CPUs are involved in suspend
-+		 * resume sequence. As long as this is not the last online
-+		 * operation in the resume sequence, just build a single sched
-+		 * domain, ignoring cpusets.
-+		 */
-+		partition_sched_domains(1, NULL, NULL);
-+		if (--num_cpus_frozen)
-+			return;
-+		/*
-+		 * This is the last CPU online operation. So fall through and
-+		 * restore the original sched domains by considering the
-+		 * cpuset configurations.
-+		 */
-+		cpuset_force_rebuild();
-+	}
-+
-+	cpuset_update_active_cpus();
-+}
-+
-+static int cpuset_cpu_inactive(unsigned int cpu)
-+{
-+	if (!cpuhp_tasks_frozen) {
-+		cpuset_update_active_cpus();
-+	} else {
-+		num_cpus_frozen++;
-+		partition_sched_domains(1, NULL, NULL);
-+	}
-+	return 0;
-+}
-+
-+int sched_cpu_activate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/*
-+	 * Clear the balance_push callback and prepare to schedule
-+	 * regular tasks.
-+	 */
-+	balance_push_set(cpu, false);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going up, increment the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
-+		static_branch_inc_cpuslocked(&sched_smt_present);
-+#endif
-+	set_cpu_active(cpu, true);
-+
-+	if (sched_smp_initialized)
-+		cpuset_cpu_active();
-+
-+	/*
-+	 * Put the rq online, if not already. This happens:
-+	 *
-+	 * 1) In the early boot process, because we build the real domains
-+	 *    after all cpus have been brought up.
-+	 *
-+	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
-+	 *    domains.
-+	 */
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_online(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	return 0;
-+}
-+
-+int sched_cpu_deactivate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	int ret;
-+
-+	set_cpu_active(cpu, false);
-+
-+	/*
-+	 * From this point forward, this CPU will refuse to run any task that
-+	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
-+	 * push those tasks away until this gets cleared, see
-+	 * sched_cpu_dying().
-+	 */
-+	balance_push_set(cpu, true);
-+
-+	/*
-+	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
-+	 * users of this state to go away such that all new such users will
-+	 * observe it.
-+	 *
-+	 * Specifically, we rely on ttwu to no longer target this CPU, see
-+	 * ttwu_queue_cond() and is_cpu_allowed().
-+	 *
-+	 * Do the sync before parking smpboot threads to take care of the RCU boost case.
-+	 */
-+	synchronize_rcu();
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	update_rq_clock(rq);
-+	set_rq_offline(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going down, decrement the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+		static_branch_dec_cpuslocked(&sched_smt_present);
-+		if (!static_branch_likely(&sched_smt_present))
-+			cpumask_clear(&sched_sg_idle_mask);
-+	}
-+#endif
-+
-+	if (!sched_smp_initialized)
-+		return 0;
-+
-+	ret = cpuset_cpu_inactive(cpu);
-+	if (ret) {
-+		balance_push_set(cpu, false);
-+		set_cpu_active(cpu, true);
-+		return ret;
-+	}
-+
-+	return 0;
-+}
-+
-+static void sched_rq_cpu_starting(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	rq->calc_load_update = calc_load_update;
-+}
-+
-+int sched_cpu_starting(unsigned int cpu)
-+{
-+	sched_rq_cpu_starting(cpu);
-+	sched_tick_start(cpu);
-+	return 0;
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+
-+/*
-+ * Invoked immediately before the stopper thread is invoked to bring the
-+ * CPU down completely. At this point all per CPU kthreads except the
-+ * hotplug thread (current) and the stopper thread (inactive) have been
-+ * either parked or have been unbound from the outgoing CPU. Ensure that
-+ * any of those which might be on the way out are gone.
-+ *
-+ * If after this point a bound task is being woken on this CPU then the
-+ * responsible hotplug callback has failed to do its job.
-+ * sched_cpu_dying() will catch it with the appropriate fireworks.
-+ */
-+int sched_cpu_wait_empty(unsigned int cpu)
-+{
-+	balance_hotplug_wait();
-+	return 0;
-+}
-+
-+/*
-+ * Since this CPU is going 'away' for a while, fold any nr_active delta we
-+ * might have. Called from the CPU stopper task after ensuring that the
-+ * stopper is the last running task on the CPU, so nr_active count is
-+ * stable. We need to take the teardown thread which is calling this into
-+ * account, so we hand in adjust = 1 to the load calculation.
-+ *
-+ * Also see the comment "Global load-average calculations".
-+ */
-+static void calc_load_migrate(struct rq *rq)
-+{
-+	long delta = calc_load_fold_active(rq, 1);
-+
-+	if (delta)
-+		atomic_long_add(delta, &calc_load_tasks);
-+}
-+
-+static void dump_rq_tasks(struct rq *rq, const char *loglvl)
-+{
-+	struct task_struct *g, *p;
-+	int cpu = cpu_of(rq);
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
-+	for_each_process_thread(g, p) {
-+		if (task_cpu(p) != cpu)
-+			continue;
-+
-+		if (!task_on_rq_queued(p))
-+			continue;
-+
-+		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
-+	}
-+}
-+
-+int sched_cpu_dying(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/* Handle pending wakeups and then migrate everything off */
-+	sched_tick_stop(cpu);
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
-+		WARN(true, "Dying CPU not properly vacated!");
-+		dump_rq_tasks(rq, KERN_WARNING);
-+	}
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	calc_load_migrate(rq);
-+	hrtick_clear(rq);
-+	return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
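-+/*
-+ * Early-boot topology masks: each CPU gets itself as the closest
-+ * level and cpu_possible_mask as the only other level, so
-+ * best_mask_cpu() works before the real topology is known;
-+ * sched_init_topology_cpumask() refines this once CPUs are online.
-+ */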
-+static void sched_init_topology_cpumask_early(void)
-+{
-+	int cpu;
-+	cpumask_t *tmp;
-+
-+	for_each_possible_cpu(cpu) {
-+		/* init topo masks */
-+		tmp = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		cpumask_copy(tmp, cpumask_of(cpu));
-+		tmp++;
-+		cpumask_copy(tmp, cpu_possible_mask);
-+		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
-+		/*per_cpu(sd_llc_id, cpu) = cpu;*/
-+	}
-+}
-+
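-+/*
-+ * Record one topology level in the per-CPU mask array: if @mask
-+ * still covers CPUs beyond the previous level, store the full @mask
-+ * and advance to the next slot; unless this is the last level, seed
-+ * the scratch mask with the complement of @mask so the next level
-+ * only fires when it covers additional CPUs.
-+ */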
-+#define TOPOLOGY_CPUMASK(name, mask, last)\
-+	if (cpumask_and(topo, topo, mask)) {					\
-+		cpumask_copy(topo, mask);					\
-+		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
-+		       cpu, (topo++)->bits[0]);					\
-+	}									\
-+	if (!last)								\
-+		cpumask_complement(topo, mask)
-+
-+static void sched_init_topology_cpumask(void)
-+{
-+	int cpu;
-+	cpumask_t *topo;
-+
-+	for_each_online_cpu(cpu) {
-+		/* take chance to reset time slice for idle tasks */
-+		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
-+
-+		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+
-+		cpumask_complement(topo, cpumask_of(cpu));
-+#ifdef CONFIG_SCHED_SMT
-+		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
-+#endif
-+		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
-+		per_cpu(sched_cpu_llc_mask, cpu) = topo;
-+		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
-+
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
-+		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
-+		       cpu, per_cpu(sd_llc_id, cpu),
-+		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
-+			      per_cpu(sched_cpu_topo_masks, cpu)));
-+	}
-+}
-+#endif
-+
-+void __init sched_init_smp(void)
-+{
-+	/* Move init over to a non-isolated CPU */
-+	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
-+		BUG();
-+	current->flags &= ~PF_NO_SETAFFINITY;
-+
-+	sched_init_topology_cpumask();
-+
-+	sched_smp_initialized = true;
-+}
-+#else
-+void __init sched_init_smp(void)
-+{
-+	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
-+}
-+#endif /* CONFIG_SMP */
-+
-+int in_sched_functions(unsigned long addr)
-+{
-+	return in_lock_functions(addr) ||
-+		(addr >= (unsigned long)__sched_text_start
-+		&& addr < (unsigned long)__sched_text_end);
-+}
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/* task group related information */
-+struct task_group {
-+	struct cgroup_subsys_state css;
-+
-+	struct rcu_head rcu;
-+	struct list_head list;
-+
-+	struct task_group *parent;
-+	struct list_head siblings;
-+	struct list_head children;
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	unsigned long		shares;
-+#endif
-+};
-+
-+/*
-+ * Default task group.
-+ * Every task in the system belongs to this group at bootup.
-+ */
-+struct task_group root_task_group;
-+LIST_HEAD(task_groups);
-+
-+/* Cacheline aligned slab cache for task_group */
-+static struct kmem_cache *task_group_cache __read_mostly;
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+void __init sched_init(void)
-+{
-+	int i;
-+	struct rq *rq;
-+
-+	printk(KERN_INFO ALT_SCHED_VERSION_MSG);
-+
-+	wait_bit_init();
-+
-+#ifdef CONFIG_SMP
-+	for (i = 0; i < SCHED_BITS; i++)
-+		cpumask_copy(sched_rq_watermark + i, cpu_present_mask);
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+	task_group_cache = KMEM_CACHE(task_group, 0);
-+
-+	list_add(&root_task_group.list, &task_groups);
-+	INIT_LIST_HEAD(&root_task_group.children);
-+	INIT_LIST_HEAD(&root_task_group.siblings);
-+#endif /* CONFIG_CGROUP_SCHED */
-+	for_each_possible_cpu(i) {
-+		rq = cpu_rq(i);
-+
-+		sched_queue_init(&rq->queue);
-+		rq->watermark = IDLE_TASK_SCHED_PRIO;
-+		rq->skip = NULL;
-+
-+		raw_spin_lock_init(&rq->lock);
-+		rq->nr_running = rq->nr_uninterruptible = 0;
-+		rq->calc_load_active = 0;
-+		rq->calc_load_update = jiffies + LOAD_FREQ;
-+#ifdef CONFIG_SMP
-+		rq->online = false;
-+		rq->cpu = i;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		rq->active_balance = 0;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
-+#endif
-+		rq->balance_callback = &balance_push_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+		rcuwait_init(&rq->hotplug_wait);
-+#endif
-+#endif /* CONFIG_SMP */
-+		rq->nr_switches = 0;
-+
-+		hrtick_rq_init(rq);
-+		atomic_set(&rq->nr_iowait, 0);
-+	}
-+#ifdef CONFIG_SMP
-+	/* Set rq->online for cpu 0 */
-+	cpu_rq(0)->online = true;
-+#endif
-+	/*
-+	 * The boot idle thread does lazy MMU switching as well:
-+	 */
-+	mmgrab(&init_mm);
-+	enter_lazy_tlb(&init_mm, current);
-+
-+	/*
-+	 * Make us the idle thread. Technically, schedule() should not be
-+	 * called from this thread, however somewhere below it might be,
-+	 * but because we are the idle thread, we just pick up running again
-+	 * when this runqueue becomes "idle".
-+	 */
-+	init_idle(current, smp_processor_id());
-+
-+	calc_load_update = jiffies + LOAD_FREQ;
-+
-+#ifdef CONFIG_SMP
-+	idle_thread_set_boot_cpu();
-+	balance_push_set(smp_processor_id(), false);
-+
-+	sched_init_topology_cpumask_early();
-+#endif /* SMP */
-+
-+	psi_init();
-+}
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+static inline int preempt_count_equals(int preempt_offset)
-+{
-+	int nested = preempt_count() + rcu_preempt_depth();
-+
-+	return (nested == preempt_offset);
-+}
-+
-+void __might_sleep(const char *file, int line, int preempt_offset)
-+{
-+	unsigned int state = get_current_state();
-+	/*
-+	 * Blocking primitives will set (and therefore destroy) current->state;
-+	 * since we will exit with TASK_RUNNING, make sure we enter with it,
-+	 * otherwise we will destroy state.
-+	 */
-+	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
-+			"do not call blocking ops when !TASK_RUNNING; "
-+			"state=%x set at [<%p>] %pS\n", state,
-+			(void *)current->task_state_change,
-+			(void *)current->task_state_change);
-+
-+	___might_sleep(file, line, preempt_offset);
-+}
-+EXPORT_SYMBOL(__might_sleep);
-+
-+void ___might_sleep(const char *file, int line, int preempt_offset)
-+{
-+	/* Ratelimiting timestamp: */
-+	static unsigned long prev_jiffy;
-+
-+	unsigned long preempt_disable_ip;
-+
-+	/* WARN_ON_ONCE() by default, no rate limit required: */
-+	rcu_sleep_check();
-+
-+	if ((preempt_count_equals(preempt_offset) && !irqs_disabled() &&
-+	     !is_idle_task(current) && !current->non_block_count) ||
-+	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
-+	    oops_in_progress)
-+		return;
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	/* Save this before calling printk(), since that will clobber it: */
-+	preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	printk(KERN_ERR
-+		"BUG: sleeping function called from invalid context at %s:%d\n",
-+			file, line);
-+	printk(KERN_ERR
-+		"in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
-+			in_atomic(), irqs_disabled(), current->non_block_count,
-+			current->pid, current->comm);
-+
-+	if (task_stack_end_corrupted(current))
-+		printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");
-+
-+	debug_show_held_locks(current);
-+	if (irqs_disabled())
-+		print_irqtrace_events(current);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	if (!preempt_count_equals(preempt_offset)) {
-+		pr_err("Preemption disabled at:");
-+		print_ip_sym(KERN_ERR, preempt_disable_ip);
-+	}
-+#endif
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL(___might_sleep);
-+
-+void __cant_sleep(const char *file, int line, int preempt_offset)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > preempt_offset)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
-+	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
-+			in_atomic(), irqs_disabled(),
-+			current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_sleep);
-+
-+#ifdef CONFIG_SMP
-+void __cant_migrate(const char *file, int line)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (is_migration_disabled(current))
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > 0)
-+		return;
-+
-+	if (current->migration_flags & MDF_FORCE_ENABLED)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
-+	       current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_migrate);
-+#endif
-+#endif
-+
-+#ifdef CONFIG_MAGIC_SYSRQ
-+void normalize_rt_tasks(void)
-+{
-+	struct task_struct *g, *p;
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+	};
-+
-+	read_lock(&tasklist_lock);
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * Only normalize user tasks:
-+		 */
-+		if (p->flags & PF_KTHREAD)
-+			continue;
-+
-+		if (!rt_task(p)) {
-+			/*
-+			 * Renice negative nice level userspace
-+			 * tasks back to 0:
-+			 */
-+			if (task_nice(p) < 0)
-+				set_user_nice(p, 0);
-+			continue;
-+		}
-+
-+		__sched_setscheduler(p, &attr, false, false);
-+	}
-+	read_unlock(&tasklist_lock);
-+}
-+#endif /* CONFIG_MAGIC_SYSRQ */
-+
-+#if defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB)
-+/*
-+ * These functions are only useful for the IA64 MCA handling, or kdb.
-+ *
-+ * They can only be called when the whole system has been
-+ * stopped - every CPU needs to be quiescent, and no scheduling
-+ * activity can take place. Using them for anything else would
-+ * be a serious bug, and as a result, they aren't even visible
-+ * under any other configuration.
-+ */
-+
-+/**
-+ * curr_task - return the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ *
-+ * Return: The current task for @cpu.
-+ */
-+struct task_struct *curr_task(int cpu)
-+{
-+	return cpu_curr(cpu);
-+}
-+
-+#endif /* defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB) */
-+
-+#ifdef CONFIG_IA64
-+/**
-+ * ia64_set_curr_task - set the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ * @p: the task pointer to set.
-+ *
-+ * Description: This function must only be used when non-maskable interrupts
-+ * are serviced on a separate stack.  It allows the architecture to switch the
-+ * notion of the current task on a CPU in a non-blocking manner.  This function
-+ * must be called with all CPUs synchronised, and interrupts disabled; the
-+ * caller must save the original value of the current task (see
-+ * curr_task() above) and restore that value before reenabling interrupts and
-+ * re-starting the system.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ */
-+void ia64_set_curr_task(int cpu, struct task_struct *p)
-+{
-+	cpu_curr(cpu) = p;
-+}
-+
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+static void sched_free_group(struct task_group *tg)
-+{
-+	kmem_cache_free(task_group_cache, tg);
-+}
-+
-+/* allocate runqueue etc for a new task group */
-+struct task_group *sched_create_group(struct task_group *parent)
-+{
-+	struct task_group *tg;
-+
-+	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
-+	if (!tg)
-+		return ERR_PTR(-ENOMEM);
-+
-+	return tg;
-+}
-+
-+void sched_online_group(struct task_group *tg, struct task_group *parent)
-+{
-+}
-+
-+/* rcu callback to free various structures associated with a task group */
-+static void sched_free_group_rcu(struct rcu_head *rhp)
-+{
-+	/* Now it should be safe to free those cfs_rqs */
-+	sched_free_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+void sched_destroy_group(struct task_group *tg)
-+{
-+	/* Wait for possible concurrent references to cfs_rqs to complete */
-+	call_rcu(&tg->rcu, sched_free_group_rcu);
-+}
-+
-+void sched_offline_group(struct task_group *tg)
-+{
-+}
-+
-+static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
-+{
-+	return css ? container_of(css, struct task_group, css) : NULL;
-+}
-+
-+static struct cgroup_subsys_state *
-+cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
-+{
-+	struct task_group *parent = css_tg(parent_css);
-+	struct task_group *tg;
-+
-+	if (!parent) {
-+		/* This is early initialization for the top cgroup */
-+		return &root_task_group.css;
-+	}
-+
-+	tg = sched_create_group(parent);
-+	if (IS_ERR(tg))
-+		return ERR_PTR(-ENOMEM);
-+	return &tg->css;
-+}
-+
-+/* Expose task group only after completing cgroup initialization */
-+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+	struct task_group *parent = css_tg(css->parent);
-+
-+	if (parent)
-+		sched_online_group(tg, parent);
-+	return 0;
-+}
-+
-+static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	sched_offline_group(tg);
-+}
-+
-+static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	/*
-+	 * Relies on the RCU grace period between css_released() and this.
-+	 */
-+	sched_free_group(tg);
-+}
-+
-+static void cpu_cgroup_fork(struct task_struct *task)
-+{
-+}
-+
-+static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
-+{
-+	return 0;
-+}
-+
-+static void cpu_cgroup_attach(struct cgroup_taskset *tset)
-+{
-+}
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+static DEFINE_MUTEX(shares_mutex);
-+
-+int sched_group_set_shares(struct task_group *tg, unsigned long shares)
-+{
-+	/*
-+	 * We can't change the weight of the root cgroup.
-+	 */
-+	if (&root_task_group == tg)
-+		return -EINVAL;
-+
-+	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
-+
-+	mutex_lock(&shares_mutex);
-+	if (tg->shares == shares)
-+		goto done;
-+
-+	tg->shares = shares;
-+done:
-+	mutex_unlock(&shares_mutex);
-+	return 0;
-+}
-+
-+static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cftype, u64 shareval)
-+{
-+	if (shareval > scale_load_down(ULONG_MAX))
-+		shareval = MAX_SHARES;
-+	return sched_group_set_shares(css_tg(css), scale_load(shareval));
-+}
-+
-+static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	return (u64) scale_load_down(tg->shares);
-+}
-+#endif
-+
-+static struct cftype cpu_legacy_files[] = {
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	{
-+		.name = "shares",
-+		.read_u64 = cpu_shares_read_u64,
-+		.write_u64 = cpu_shares_write_u64,
-+	},
-+#endif
-+	{ }	/* Terminate */
-+};
-+
-+
-+static struct cftype cpu_files[] = {
-+	{ }	/* terminate */
-+};
-+
-+static int cpu_extra_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+struct cgroup_subsys cpu_cgrp_subsys = {
-+	.css_alloc	= cpu_cgroup_css_alloc,
-+	.css_online	= cpu_cgroup_css_online,
-+	.css_released	= cpu_cgroup_css_released,
-+	.css_free	= cpu_cgroup_css_free,
-+	.css_extra_stat_show = cpu_extra_stat_show,
-+	.fork		= cpu_cgroup_fork,
-+	.can_attach	= cpu_cgroup_can_attach,
-+	.attach		= cpu_cgroup_attach,
-+	.legacy_cftypes	= cpu_legacy_files,
-+	.dfl_cftypes	= cpu_files,
-+	.early_init	= true,
-+	.threaded	= true,
-+};
-+#endif	/* CONFIG_CGROUP_SCHED */
-+
-+#undef CREATE_TRACE_POINTS
-diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
-new file mode 100644
-index 000000000000..1212a031700e
---- /dev/null
-+++ b/kernel/sched/alt_debug.c
-@@ -0,0 +1,31 @@
-+/*
-+ * kernel/sched/alt_debug.c
-+ *
-+ * Print the alt scheduler debugging details
-+ *
-+ * Author: Alfred Chen
-+ * Date  : 2020
-+ */
-+#include "sched.h"
-+
-+/*
-+ * This allows printing both to /proc/sched_debug and
-+ * to the console
-+ */
-+#define SEQ_printf(m, x...)			\
-+ do {						\
-+	if (m)					\
-+		seq_printf(m, x);		\
-+	else					\
-+		pr_cont(x);			\
-+ } while (0)
-+
-+void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
-+			  struct seq_file *m)
-+{
-+	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
-+						get_nr_threads(p));
-+}
-+
-+void proc_sched_set_task(struct task_struct *p)
-+{}
-diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
-new file mode 100644
-index 000000000000..289058a09bd5
---- /dev/null
-+++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,672 @@
-+#ifndef ALT_SCHED_H
-+#define ALT_SCHED_H
-+
-+#include <linux/sched.h>
-+
-+#include <linux/sched/clock.h>
-+#include <linux/sched/cpufreq.h>
-+#include <linux/sched/cputime.h>
-+#include <linux/sched/debug.h>
-+#include <linux/sched/init.h>
-+#include <linux/sched/isolation.h>
-+#include <linux/sched/loadavg.h>
-+#include <linux/sched/mm.h>
-+#include <linux/sched/nohz.h>
-+#include <linux/sched/signal.h>
-+#include <linux/sched/stat.h>
-+#include <linux/sched/sysctl.h>
-+#include <linux/sched/task.h>
-+#include <linux/sched/topology.h>
-+#include <linux/sched/wake_q.h>
-+
-+#include <uapi/linux/sched/types.h>
-+
-+#include <linux/cgroup.h>
-+#include <linux/cpufreq.h>
-+#include <linux/cpuidle.h>
-+#include <linux/cpuset.h>
-+#include <linux/ctype.h>
-+#include <linux/debugfs.h>
-+#include <linux/kthread.h>
-+#include <linux/livepatch.h>
-+#include <linux/membarrier.h>
-+#include <linux/proc_fs.h>
-+#include <linux/psi.h>
-+#include <linux/slab.h>
-+#include <linux/stop_machine.h>
-+#include <linux/suspend.h>
-+#include <linux/swait.h>
-+#include <linux/syscalls.h>
-+#include <linux/tsacct_kern.h>
-+
-+#include <asm/tlb.h>
-+
-+#ifdef CONFIG_PARAVIRT
-+# include <asm/paravirt.h>
-+#endif
-+
-+#include "cpupri.h"
-+
-+#include <trace/events/sched.h>
-+
-+#ifdef CONFIG_SCHED_BMQ
-+/* bits:
-+ * RT(0-99), (Low prio adj range, nice width, high prio adj range) / 2, cpu idle task */
-+#define SCHED_BITS	(MAX_RT_PRIO + NICE_WIDTH / 2 + MAX_PRIORITY_ADJ + 1)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+/* bits: RT(0-99), reserved(100-127), NORMAL_PRIO_NUM, cpu idle task */
-+#define SCHED_BITS	(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM + 1)
-+#endif /* CONFIG_SCHED_PDS */
-+
-+#define IDLE_TASK_SCHED_PRIO	(SCHED_BITS - 1)
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
-+extern void resched_latency_warn(int cpu, u64 latency);
-+#else
-+# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
-+static inline void resched_latency_warn(int cpu, u64 latency) {}
-+#endif
-+
-+/*
-+ * Increase resolution of nice-level calculations for 64-bit architectures.
-+ * The extra resolution improves shares distribution and load balancing of
-+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
-+ * hierarchies, especially on larger systems. This is not a user-visible change
-+ * and does not change the user-interface for setting shares/weights.
-+ *
-+ * We increase resolution only if we have enough bits to allow this increased
-+ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
-+ * are pretty high and the returns do not justify the increased costs.
-+ *
-+ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
-+ * increase coverage and consistency always enable it on 64-bit platforms.
-+ */
-+#ifdef CONFIG_64BIT
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load_down(w) \
-+({ \
-+	unsigned long __w = (w); \
-+	if (__w) \
-+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
-+	__w; \
-+})
-+#else
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		(w)
-+# define scale_load_down(w)	(w)
-+#endif
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
-+
-+/*
-+ * A weight of 0 or 1 can cause arithmetic problems.
-+ * The weight of a cfs_rq is the sum of the weights of the entities
-+ * queued on it, so the weight of an entity should not be too large,
-+ * and neither should the shares value of a task group.
-+ * (The default weight is 1024 - so there's no practical
-+ *  limitation from this.)
-+ */
-+#define MIN_SHARES		(1UL <<  1)
-+#define MAX_SHARES		(1UL << 18)
-+#endif
-+
-+/* task_struct::on_rq states: */
-+#define TASK_ON_RQ_QUEUED	1
-+#define TASK_ON_RQ_MIGRATING	2
-+
-+static inline int task_on_rq_queued(struct task_struct *p)
-+{
-+	return p->on_rq == TASK_ON_RQ_QUEUED;
-+}
-+
-+static inline int task_on_rq_migrating(struct task_struct *p)
-+{
-+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
-+}
-+
-+/*
-+ * wake flags
-+ */
-+#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
-+#define WF_FORK		0x02		/* child wakeup after fork */
-+#define WF_MIGRATED	0x04		/* internal use, task got migrated */
-+#define WF_ON_CPU	0x08		/* Wakee is on_rq */
-+
-+#define SCHED_QUEUE_BITS	(SCHED_BITS - 1)
-+
-+struct sched_queue {
-+	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+	struct list_head heads[SCHED_BITS];
-+};
-+
-+/*
-+ * This is the main, per-CPU runqueue data structure.
-+ * This data should only be modified by the local cpu.
-+ */
-+struct rq {
-+	/* runqueue lock: */
-+	raw_spinlock_t lock;
-+
-+	struct task_struct __rcu *curr;
-+	struct task_struct *idle, *stop, *skip;
-+	struct mm_struct *prev_mm;
-+
-+	struct sched_queue	queue;
-+#ifdef CONFIG_SCHED_PDS
-+	u64			time_edge;
-+#endif
-+	unsigned long watermark;
-+
-+	/* switch count */
-+	u64 nr_switches;
-+
-+	atomic_t nr_iowait;
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	u64 last_seen_need_resched_ns;
-+	int ticks_without_resched;
-+#endif
-+
-+#ifdef CONFIG_MEMBARRIER
-+	int membarrier_state;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+	int cpu;		/* cpu of this runqueue */
-+	bool online;
-+
-+	unsigned int		ttwu_pending;
-+	unsigned char		nohz_idle_balance;
-+	unsigned char		idle_balance;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	struct sched_avg	avg_irq;
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+	int active_balance;
-+	struct cpu_stop_work	active_balance_work;
-+#endif
-+	struct callback_head	*balance_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+	struct rcuwait		hotplug_wait;
-+#endif
-+	unsigned int		nr_pinned;
-+
-+#endif /* CONFIG_SMP */
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	u64 prev_irq_time;
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+#ifdef CONFIG_PARAVIRT
-+	u64 prev_steal_time;
-+#endif /* CONFIG_PARAVIRT */
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	u64 prev_steal_time_rq;
-+#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
-+
-+	/* For general cpu load util */
-+	s32 load_history;
-+	u64 load_block;
-+	u64 load_stamp;
-+
-+	/* calc_load related fields */
-+	unsigned long calc_load_update;
-+	long calc_load_active;
-+
-+	u64 clock, last_tick;
-+	u64 last_ts_switch;
-+	u64 clock_task;
-+
-+	unsigned int  nr_running;
-+	unsigned long nr_uninterruptible;
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+#ifdef CONFIG_SMP
-+	call_single_data_t hrtick_csd;
-+#endif
-+	struct hrtimer		hrtick_timer;
-+	ktime_t			hrtick_time;
-+#endif
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+	/* latency stats */
-+	struct sched_info rq_sched_info;
-+	unsigned long long rq_cpu_time;
-+	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
-+
-+	/* sys_sched_yield() stats */
-+	unsigned int yld_count;
-+
-+	/* schedule() stats */
-+	unsigned int sched_switch;
-+	unsigned int sched_count;
-+	unsigned int sched_goidle;
-+
-+	/* try_to_wake_up() stats */
-+	unsigned int ttwu_count;
-+	unsigned int ttwu_local;
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+#ifdef CONFIG_CPU_IDLE
-+	/* Must be inspected within an RCU lock section */
-+	struct cpuidle_state *idle_state;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#ifdef CONFIG_SMP
-+	call_single_data_t	nohz_csd;
-+#endif
-+	atomic_t		nohz_flags;
-+#endif /* CONFIG_NO_HZ_COMMON */
-+};
-+
-+extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
-+
-+extern unsigned long calc_load_update;
-+extern atomic_long_t calc_load_tasks;
-+
-+extern void calc_global_load_tick(struct rq *this_rq);
-+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
-+
-+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-+#define this_rq()		this_cpu_ptr(&runqueues)
-+#define task_rq(p)		cpu_rq(task_cpu(p))
-+#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-+#define raw_rq()		raw_cpu_ptr(&runqueues)
-+
-+#ifdef CONFIG_SMP
-+#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
-+void register_sched_domain_sysctl(void);
-+void unregister_sched_domain_sysctl(void);
-+#else
-+static inline void register_sched_domain_sysctl(void)
-+{
-+}
-+static inline void unregister_sched_domain_sysctl(void)
-+{
-+}
-+#endif
-+
-+extern bool sched_smp_initialized;
-+
-+enum {
-+	ITSELF_LEVEL_SPACE_HOLDER,
-+#ifdef CONFIG_SCHED_SMT
-+	SMT_LEVEL_SPACE_HOLDER,
-+#endif
-+	COREGROUP_LEVEL_SPACE_HOLDER,
-+	CORE_LEVEL_SPACE_HOLDER,
-+	OTHER_LEVEL_SPACE_HOLDER,
-+	NR_CPU_AFFINITY_LEVELS
-+};
-+
-+DECLARE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DECLARE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
-+
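-+/*
-+ * Walk the per-CPU topology mask array (ordered from the closest
-+ * affinity level to the farthest) and return the first CPU that is
-+ * both in @cpumask and in a level mask, i.e. the topologically
-+ * nearest allowed CPU.
-+ */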
-+static inline int
-+__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
-+{
-+	int cpu;
-+
-+	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
-+		mask++;
-+
-+	return cpu;
-+}
-+
-+static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
-+{
-+	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
-+}
-+
-+extern void flush_smp_call_function_from_idle(void);
-+
-+#else  /* !CONFIG_SMP */
-+static inline void flush_smp_call_function_from_idle(void) { }
-+#endif
-+
-+#ifndef arch_scale_freq_tick
-+static __always_inline
-+void arch_scale_freq_tick(void)
-+{
-+}
-+#endif
-+
-+#ifndef arch_scale_freq_capacity
-+static __always_inline
-+unsigned long arch_scale_freq_capacity(int cpu)
-+{
-+	return SCHED_CAPACITY_SCALE;
-+}
-+#endif
-+
-+static inline u64 __rq_clock_broken(struct rq *rq)
-+{
-+	return READ_ONCE(rq->clock);
-+}
-+
-+static inline u64 rq_clock(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking as in VRQ; calls to
-+	 * sched_info_xxxx() may not hold rq->lock:
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock;
-+}
-+
-+static inline u64 rq_clock_task(struct rq *rq)
-+{
-+	/*
-+	 * Relax lockdep_assert_held() checking as in VRQ; calls to
-+	 * sched_info_xxxx() may not hold rq->lock:
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock_task;
-+}
-+
-+/*
-+ * {de,en}queue flags:
-+ *
-+ * DEQUEUE_SLEEP  - task is no longer runnable
-+ * ENQUEUE_WAKEUP - task just became runnable
-+ *
-+ */
-+
-+#define DEQUEUE_SLEEP		0x01
-+
-+#define ENQUEUE_WAKEUP		0x01
-+
-+
-+/*
-+ * Below are the scheduler APIs used in other kernel code.
-+ * They use a dummy rq_flags.
-+ * TODO: BMQ needs to support these APIs for compatibility with the
-+ * mainline scheduler code.
-+ */
-+struct rq_flags {
-+	unsigned long flags;
-+};
-+
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock);
-+
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock);
-+
-+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+}
-+
-+static inline void
-+rq_lock(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline struct rq *
-+this_rq_lock_irq(struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	local_irq_disable();
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	return rq;
-+}
-+
-+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-+extern void raw_spin_rq_unlock(struct rq *rq);
-+
-+static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
-+{
-+	return &rq->lock;
-+}
-+
-+static inline raw_spinlock_t *rq_lockp(struct rq *rq)
-+{
-+	return __rq_lockp(rq);
-+}
-+
-+static inline void raw_spin_rq_lock(struct rq *rq)
-+{
-+	raw_spin_rq_lock_nested(rq, 0);
-+}
-+
-+static inline void raw_spin_rq_lock_irq(struct rq *rq)
-+{
-+	local_irq_disable();
-+	raw_spin_rq_lock(rq);
-+}
-+
-+static inline void raw_spin_rq_unlock_irq(struct rq *rq)
-+{
-+	raw_spin_rq_unlock(rq);
-+	local_irq_enable();
-+}
-+
-+static inline int task_current(struct rq *rq, struct task_struct *p)
-+{
-+	return rq->curr == p;
-+}
-+
-+static inline bool task_running(struct task_struct *p)
-+{
-+	return p->on_cpu;
-+}
-+
-+extern int task_running_nice(struct task_struct *p);
-+
-+extern struct static_key_false sched_schedstats;
-+
-+#ifdef CONFIG_CPU_IDLE
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+	rq->idle_state = idle_state;
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	WARN_ON(!rcu_read_lock_held());
-+	return rq->idle_state;
-+}
-+#else
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	return NULL;
-+}
-+#endif
-+
-+static inline int cpu_of(const struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	return rq->cpu;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+#include "stats.h"
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#define NOHZ_BALANCE_KICK_BIT	0
-+#define NOHZ_STATS_KICK_BIT	1
-+
-+#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
-+#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
-+
-+#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
-+
-+#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
-+
-+/* TODO: needed?
-+extern void nohz_balance_exit_idle(struct rq *rq);
-+#else
-+static inline void nohz_balance_exit_idle(struct rq *rq) { }
-+*/
-+#endif
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+struct irqtime {
-+	u64			total;
-+	u64			tick_delta;
-+	u64			irq_start_time;
-+	struct u64_stats_sync	sync;
-+};
-+
-+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
-+
-+/*
-+ * Returns the irqtime minus the softirq time computed by ksoftirqd.
-+ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
-+ * subtracted and would never move forward.
-+ */
-+static inline u64 irq_time_read(int cpu)
-+{
-+	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
-+	unsigned int seq;
-+	u64 total;
-+
-+	do {
-+		seq = __u64_stats_fetch_begin(&irqtime->sync);
-+		total = irqtime->total;
-+	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
-+
-+	return total;
-+}
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+
-+#ifdef CONFIG_CPU_FREQ
-+DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+extern int __init sched_tick_offload_init(void);
-+#else
-+static inline int sched_tick_offload_init(void) { return 0; }
-+#endif
-+
-+#ifdef arch_scale_freq_capacity
-+#ifndef arch_scale_freq_invariant
-+#define arch_scale_freq_invariant()	(true)
-+#endif
-+#else /* arch_scale_freq_capacity */
-+#define arch_scale_freq_invariant()	(false)
-+#endif
-+
-+extern void schedule_idle(void);
-+
-+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-+
-+/*
-+ * !! For sched_setattr_nocheck() (kernel) only !!
-+ *
-+ * This is actually gross. :(
-+ *
-+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
-+ * tasks, but still be able to sleep. We need this on platforms that cannot
-+ * atomically change clock frequency. Remove once fast switching is
-+ * available on such platforms.
-+ *
-+ * SUGOV stands for SchedUtil GOVernor.
-+ */
-+#define SCHED_FLAG_SUGOV	0x10000000
-+
-+#ifdef CONFIG_MEMBARRIER
-+/*
-+ * The scheduler provides memory barriers required by membarrier between:
-+ * - prior user-space memory accesses and store to rq->membarrier_state,
-+ * - store to rq->membarrier_state and following user-space memory accesses.
-+ * In the same way it provides those guarantees around store to rq->curr.
-+ */
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+	int membarrier_state;
-+
-+	if (prev_mm == next_mm)
-+		return;
-+
-+	membarrier_state = atomic_read(&next_mm->membarrier_state);
-+	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
-+		return;
-+
-+	WRITE_ONCE(rq->membarrier_state, membarrier_state);
-+}
-+#else
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+}
-+#endif
-+
-+#ifdef CONFIG_NUMA
-+extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
-+#else
-+static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return nr_cpu_ids;
-+}
-+#endif
-+
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+extern int preempt_dynamic_mode;
-+extern int sched_dynamic_mode(const char *str);
-+extern void sched_dynamic_update(int mode);
-+#endif
-+
-+static inline void nohz_run_idle_balance(int cpu) { }
-+#endif /* ALT_SCHED_H */
-diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
-new file mode 100644
-index 000000000000..be3ee4a553ca
---- /dev/null
-+++ b/kernel/sched/bmq.h
-@@ -0,0 +1,121 @@
-+#define ALT_SCHED_VERSION_MSG "sched/bmq: BMQ CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
-+
-+/*
-+ * BMQ only routines
-+ */
-+#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
-+#define boost_threshold(p)	(sched_timeslice_ns >>\
-+				 (15 - MAX_PRIORITY_ADJ -  (p)->boost_prio))
-+
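-+/*
-+ * Step the dynamic boost of @p one level up (lower boost_prio means
-+ * higher effective priority): SCHED_NORMAL tasks may boost down to
-+ * -MAX_PRIORITY_ADJ, SCHED_BATCH/SCHED_IDLE tasks stop at 0.
-+ */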
-+static inline void boost_task(struct task_struct *p)
-+{
-+	int limit;
-+
-+	switch (p->policy) {
-+	case SCHED_NORMAL:
-+		limit = -MAX_PRIORITY_ADJ;
-+		break;
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		limit = 0;
-+		break;
-+	default:
-+		return;
-+	}
-+
-+	if (p->boost_prio > limit)
-+		p->boost_prio--;
-+}
-+
-+static inline void deboost_task(struct task_struct *p)
-+{
-+	if (p->boost_prio < MAX_PRIORITY_ADJ)
-+		p->boost_prio++;
-+}
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms) {}
-+
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	return p->prio + p->boost_prio - MAX_RT_PRIO;
-+}
-+
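-+/*
-+ * Effective queue priority: RT tasks keep their priority as-is;
-+ * normal tasks get the boost added and the nice width halved,
-+ * matching the halved nice width in the SCHED_BITS layout.
-+ */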
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio : MAX_RT_PRIO / 2 + (p->prio + p->boost_prio) / 2;
-+}
-+
-+static inline int
-+task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
-+{
-+	return task_sched_prio(p);
-+}
-+
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+	return prio;
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+	return idx;
-+}
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sched_timeslice_ns;
-+
-+	if (SCHED_FIFO != p->policy && task_on_rq_queued(p)) {
-+		if (SCHED_RR != p->policy)
-+			deboost_task(p);
-+		requeue_task(p, rq);
-+	}
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
-+
-+inline int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
-+}
-+
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+	p->boost_prio = (p->boost_prio < 0) ?
-+		p->boost_prio + MAX_PRIORITY_ADJ : MAX_PRIORITY_ADJ;
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->boost_prio = MAX_PRIORITY_ADJ;
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline void sched_task_ttwu(struct task_struct *p)
-+{
-+	if (this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
-+		boost_task(p);
-+}
-+#endif
-+
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
-+{
-+	if (rq_switch_time(rq) < boost_threshold(p))
-+		boost_task(p);
-+}
-+
-+static inline void update_rq_time_edge(struct rq *rq) {}
-diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
-index 57124614363d..f0e9c7543542 100644
---- a/kernel/sched/cpufreq_schedutil.c
-+++ b/kernel/sched/cpufreq_schedutil.c
-@@ -167,9 +167,14 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
- 	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
- 
- 	sg_cpu->max = max;
-+#ifndef CONFIG_SCHED_ALT
- 	sg_cpu->bw_dl = cpu_bw_dl(rq);
- 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(rq), max,
- 					  FREQUENCY_UTIL, NULL);
-+#else
-+	sg_cpu->bw_dl = 0;
-+	sg_cpu->util = rq_load_util(rq, max);
-+#endif /* CONFIG_SCHED_ALT */
- }
- 
- /**
-@@ -312,8 +317,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
-  */
- static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
- 		sg_cpu->sg_policy->limits_changed = true;
-+#endif
- }
- 
- static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
-@@ -599,6 +606,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
- 	}
- 
- 	ret = sched_setattr_nocheck(thread, &attr);
-+
- 	if (ret) {
- 		kthread_stop(thread);
- 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
-@@ -833,7 +841,9 @@ cpufreq_governor_init(schedutil_gov);
- #ifdef CONFIG_ENERGY_MODEL
- static void rebuild_sd_workfn(struct work_struct *work)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	rebuild_sched_domains_energy();
-+#endif /* CONFIG_SCHED_ALT */
- }
- static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
- 
-diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
-index 872e481d5098..f920c8b48ec1 100644
---- a/kernel/sched/cputime.c
-+++ b/kernel/sched/cputime.c
-@@ -123,7 +123,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
- 	p->utime += cputime;
- 	account_group_user_time(p, cputime);
- 
--	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
-+	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
- 
- 	/* Add user time to cpustat. */
- 	task_group_account_field(p, index, cputime);
-@@ -147,7 +147,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
- 	p->gtime += cputime;
- 
- 	/* Add guest time to cpustat. */
--	if (task_nice(p) > 0) {
-+	if (task_running_nice(p)) {
- 		cpustat[CPUTIME_NICE] += cputime;
- 		cpustat[CPUTIME_GUEST_NICE] += cputime;
- 	} else {
-@@ -270,7 +270,7 @@ static inline u64 account_other_time(u64 max)
- #ifdef CONFIG_64BIT
- static inline u64 read_sum_exec_runtime(struct task_struct *t)
- {
--	return t->se.sum_exec_runtime;
-+	return tsk_seruntime(t);
- }
- #else
- static u64 read_sum_exec_runtime(struct task_struct *t)
-@@ -280,7 +280,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
- 	struct rq *rq;
- 
- 	rq = task_rq_lock(t, &rf);
--	ns = t->se.sum_exec_runtime;
-+	ns = tsk_seruntime(t);
- 	task_rq_unlock(rq, t, &rf);
- 
- 	return ns;
-@@ -612,7 +612,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
- void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
- {
- 	struct task_cputime cputime = {
--		.sum_exec_runtime = p->se.sum_exec_runtime,
-+		.sum_exec_runtime = tsk_seruntime(p),
- 	};
- 
- 	task_cputime(p, &cputime.utime, &cputime.stime);
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index 0c5ec2776ddf..e3f4fe3f6e2c 100644
---- a/kernel/sched/debug.c
-+++ b/kernel/sched/debug.c
-@@ -8,6 +8,7 @@
-  */
- #include "sched.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * This allows printing both to /proc/sched_debug and
-  * to the console
-@@ -210,6 +211,7 @@ static const struct file_operations sched_scaling_fops = {
- };
- 
- #endif /* SMP */
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 
-@@ -273,6 +275,7 @@ static const struct file_operations sched_dynamic_fops = {
- 
- #endif /* CONFIG_PREEMPT_DYNAMIC */
- 
-+#ifndef CONFIG_SCHED_ALT
- __read_mostly bool sched_debug_verbose;
- 
- static const struct seq_operations sched_debug_sops;
-@@ -288,6 +291,7 @@ static const struct file_operations sched_debug_fops = {
- 	.llseek		= seq_lseek,
- 	.release	= seq_release,
- };
-+#endif /* !CONFIG_SCHED_ALT */
- 
- static struct dentry *debugfs_sched;
- 
-@@ -297,12 +301,15 @@ static __init int sched_init_debug(void)
- 
- 	debugfs_sched = debugfs_create_dir("sched", NULL);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
- 	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
- #endif
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
- 	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
- 	debugfs_create_u32("wakeup_granularity_ns", 0644, debugfs_sched, &sysctl_sched_wakeup_granularity);
-@@ -330,11 +337,13 @@ static __init int sched_init_debug(void)
- #endif
- 
- 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- 
- 	return 0;
- }
- late_initcall(sched_init_debug);
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 
- static cpumask_var_t		sd_sysctl_cpus;
-@@ -1047,6 +1056,7 @@ void proc_sched_set_task(struct task_struct *p)
- 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
- #endif
- }
-+#endif /* !CONFIG_SCHED_ALT */
- 
- void resched_latency_warn(int cpu, u64 latency)
- {
-diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
-index 912b47aa99d8..7f6b13883c2a 100644
---- a/kernel/sched/idle.c
-+++ b/kernel/sched/idle.c
-@@ -403,6 +403,7 @@ void cpu_startup_entry(enum cpuhp_state state)
- 		do_idle();
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * idle-task scheduling class.
-  */
-@@ -525,3 +526,4 @@ DEFINE_SCHED_CLASS(idle) = {
- 	.switched_to		= switched_to_idle,
- 	.update_curr		= update_curr_idle,
- };
-+#endif
-diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
-new file mode 100644
-index 000000000000..0f1f0d708b77
---- /dev/null
-+++ b/kernel/sched/pds.h
-@@ -0,0 +1,139 @@
-+#define ALT_SCHED_VERSION_MSG "sched/pds: PDS CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
-+
-+static int sched_timeslice_shift = 22;
-+
-+#define NORMAL_PRIO_MOD(x)	((x) & (NORMAL_PRIO_NUM - 1))
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms)
-+{
-+	if (2 == timeslice_ms)
-+		sched_timeslice_shift = 21;
-+}
-+
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	s64 delta = p->deadline - rq->time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;
-+
-+	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
-+		      "pds: task_sched_prio_normal() delta %lld\n", delta))
-+		return NORMAL_PRIO_NUM - 1;
-+
-+	return (delta < 0) ? 0 : delta;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio :
-+		MIN_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+static inline int
-+task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio : MIN_NORMAL_PRIO +
-+		NORMAL_PRIO_MOD(task_sched_prio_normal(p, rq) + rq->time_edge);
-+}
-+
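-+/*
-+ * Map between a stable sched priority and its rotated queue index:
-+ * normal priorities are shifted by rq->time_edge modulo
-+ * NORMAL_PRIO_NUM so a task keeps its slot while the edge advances;
-+ * sched_idx2prio() is the exact inverse of sched_prio2idx().
-+ */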
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+	return (IDLE_TASK_SCHED_PRIO == prio || prio < MAX_RT_PRIO) ? prio :
-+		MIN_NORMAL_PRIO + NORMAL_PRIO_MOD((prio - MIN_NORMAL_PRIO) +
-+						  rq->time_edge);
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+	return (idx < MAX_RT_PRIO) ? idx : MIN_NORMAL_PRIO +
-+		NORMAL_PRIO_MOD((idx - MIN_NORMAL_PRIO) + NORMAL_PRIO_NUM -
-+				NORMAL_PRIO_MOD(rq->time_edge));
-+}
-+
-+static inline void sched_renew_deadline(struct task_struct *p, const struct rq *rq)
-+{
-+	if (p->prio >= MAX_RT_PRIO)
-+		p->deadline = (rq->clock >> sched_timeslice_shift) +
-+			p->static_prio - (MAX_PRIO - NICE_WIDTH);
-+}
-+
-+int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio > DEFAULT_PRIO);
-+}
-+
-+static inline void update_rq_time_edge(struct rq *rq)
-+{
-+	struct list_head head;
-+	u64 old = rq->time_edge;
-+	u64 now = rq->clock >> sched_timeslice_shift;
-+	u64 prio, delta;
-+
-+	if (now == old)
-+		return;
-+
-+	delta = min_t(u64, NORMAL_PRIO_NUM, now - old);
-+	INIT_LIST_HEAD(&head);
-+
-+	for_each_set_bit(prio, &rq->queue.bitmap[2], delta)
-+		list_splice_tail_init(rq->queue.heads + MIN_NORMAL_PRIO +
-+				      NORMAL_PRIO_MOD(prio + old), &head);
-+
-+	rq->queue.bitmap[2] = (NORMAL_PRIO_NUM == delta) ? 0UL :
-+		rq->queue.bitmap[2] >> delta;
-+	rq->time_edge = now;
-+	if (!list_empty(&head)) {
-+		u64 idx = MIN_NORMAL_PRIO + NORMAL_PRIO_MOD(now);
-+		struct task_struct *p;
-+
-+		list_for_each_entry(p, &head, sq_node)
-+			p->sq_idx = idx;
-+
-+		list_splice(&head, rq->queue.heads + idx);
-+		rq->queue.bitmap[2] |= 1UL;
-+	}
-+}
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sched_timeslice_ns;
-+	sched_renew_deadline(p, rq);
-+	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
-+		requeue_task(p, rq);
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
-+{
-+	u64 max_dl = rq->time_edge + NICE_WIDTH - 1;
-+	if (unlikely(p->deadline > max_dl))
-+		p->deadline = max_dl;
-+}
-+
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+	sched_renew_deadline(p, rq);
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	time_slice_expired(p, rq);
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline void sched_task_ttwu(struct task_struct *p) {}
-+#endif
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
-diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
-index a554e3bbab2b..3e56f5e6ff5c 100644
---- a/kernel/sched/pelt.c
-+++ b/kernel/sched/pelt.c
-@@ -270,6 +270,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
- 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * sched_entity:
-  *
-@@ -387,8 +388,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
- 
- 	return 0;
- }
-+#endif
- 
--#ifdef CONFIG_SCHED_THERMAL_PRESSURE
-+#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- /*
-  * thermal:
-  *
-diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
-index e06071bf3472..adf567df34d4 100644
---- a/kernel/sched/pelt.h
-+++ b/kernel/sched/pelt.h
-@@ -1,13 +1,15 @@
- #ifdef CONFIG_SMP
- #include "sched-pelt.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
- int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
- int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
- int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
- int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
-+#endif
- 
--#ifdef CONFIG_SCHED_THERMAL_PRESSURE
-+#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
- 
- static inline u64 thermal_load_avg(struct rq *rq)
-@@ -42,6 +44,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
- 	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void cfs_se_util_change(struct sched_avg *avg)
- {
- 	unsigned int enqueued;
-@@ -153,9 +156,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
- 	return rq_clock_pelt(rq_of(cfs_rq));
- }
- #endif
-+#endif /* CONFIG_SCHED_ALT */
- 
- #else
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline int
- update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
- {
-@@ -173,6 +178,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
- {
- 	return 0;
- }
-+#endif
- 
- static inline int
- update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index ddefb0419d7a..658c41b15d3c 100644
---- a/kernel/sched/sched.h
-+++ b/kernel/sched/sched.h
-@@ -2,6 +2,10 @@
- /*
-  * Scheduler internal types and methods:
-  */
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_sched.h"
-+#else
-+
- #include <linux/sched.h>
- 
- #include <linux/sched/autogroup.h>
-@@ -3038,3 +3042,8 @@ extern int sched_dynamic_mode(const char *str);
- extern void sched_dynamic_update(int mode);
- #endif
- 
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+	return (task_nice(p) > 0);
-+}
-+#endif /* !CONFIG_SCHED_ALT */
-diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
-index 3f93fc3b5648..528b71e144e9 100644
---- a/kernel/sched/stats.c
-+++ b/kernel/sched/stats.c
-@@ -22,8 +22,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 	} else {
- 		struct rq *rq;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		struct sched_domain *sd;
- 		int dcount = 0;
-+#endif
- #endif
- 		cpu = (unsigned long)(v - 2);
- 		rq = cpu_rq(cpu);
-@@ -40,6 +42,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 		seq_printf(seq, "\n");
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		/* domain-specific stats */
- 		rcu_read_lock();
- 		for_each_domain(cpu, sd) {
-@@ -68,6 +71,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 			    sd->ttwu_move_balance);
- 		}
- 		rcu_read_unlock();
-+#endif
- #endif
- 	}
- 	return 0;
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
-index b77ad49dc14f..be9edf086412 100644
---- a/kernel/sched/topology.c
-+++ b/kernel/sched/topology.c
-@@ -4,6 +4,7 @@
-  */
- #include "sched.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- DEFINE_MUTEX(sched_domains_mutex);
- 
- /* Protected by sched_domains_mutex: */
-@@ -1382,8 +1383,10 @@ static void asym_cpu_capacity_scan(void)
-  */
- 
- static int default_relax_domain_level = -1;
-+#endif /* CONFIG_SCHED_ALT */
- int sched_domain_level_max;
- 
-+#ifndef CONFIG_SCHED_ALT
- static int __init setup_relax_domain_level(char *str)
- {
- 	if (kstrtoint(str, 0, &default_relax_domain_level))
-@@ -1617,6 +1620,7 @@ sd_init(struct sched_domain_topology_level *tl,
- 
- 	return sd;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- /*
-  * Topology list, bottom-up.
-@@ -1646,6 +1650,7 @@ void set_sched_topology(struct sched_domain_topology_level *tl)
- 	sched_domain_topology = tl;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_NUMA
- 
- static const struct cpumask *sd_numa_mask(int cpu)
-@@ -2451,3 +2456,17 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
- 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
- 	mutex_unlock(&sched_domains_mutex);
- }
-+#else /* CONFIG_SCHED_ALT */
-+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-+			     struct sched_domain_attr *dattr_new)
-+{}
-+
-+#ifdef CONFIG_NUMA
-+int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
-+
-+int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return best_mask_cpu(cpu, cpus);
-+}
-+#endif /* CONFIG_NUMA */
-+#endif
-diff --git a/kernel/sysctl.c b/kernel/sysctl.c
-index 272f4a272f8c..1c9455c8ecf6 100644
---- a/kernel/sysctl.c
-+++ b/kernel/sysctl.c
-@@ -122,6 +122,10 @@ static unsigned long long_max = LONG_MAX;
- static int one_hundred = 100;
- static int two_hundred = 200;
- static int one_thousand = 1000;
-+#ifdef CONFIG_SCHED_ALT
-+static int __maybe_unused zero = 0;
-+extern int sched_yield_type;
-+#endif
- #ifdef CONFIG_PRINTK
- static int ten_thousand = 10000;
- #endif
-@@ -1730,6 +1734,24 @@ int proc_do_static_key(struct ctl_table *table, int write,
- }
- 
- static struct ctl_table kern_table[] = {
-+#ifdef CONFIG_SCHED_ALT
-+/* In ALT, only "sched_schedstats" is supported */
-+#ifdef CONFIG_SCHED_DEBUG
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_SCHEDSTATS
-+	{
-+		.procname	= "sched_schedstats",
-+		.data		= NULL,
-+		.maxlen		= sizeof(unsigned int),
-+		.mode		= 0644,
-+		.proc_handler	= sysctl_schedstats,
-+		.extra1		= SYSCTL_ZERO,
-+		.extra2		= SYSCTL_ONE,
-+	},
-+#endif /* CONFIG_SCHEDSTATS */
-+#endif /* CONFIG_SMP */
-+#endif /* CONFIG_SCHED_DEBUG */
-+#else  /* !CONFIG_SCHED_ALT */
- 	{
- 		.procname	= "sched_child_runs_first",
- 		.data		= &sysctl_sched_child_runs_first,
-@@ -1860,6 +1882,7 @@ static struct ctl_table kern_table[] = {
- 		.extra2		= SYSCTL_ONE,
- 	},
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PROVE_LOCKING
- 	{
- 		.procname	= "prove_locking",
-@@ -2436,6 +2459,17 @@ static struct ctl_table kern_table[] = {
- 		.proc_handler	= proc_dointvec,
- 	},
- #endif
-+#ifdef CONFIG_SCHED_ALT
-+	{
-+		.procname	= "yield_type",
-+		.data		= &sched_yield_type,
-+		.maxlen		= sizeof (int),
-+		.mode		= 0644,
-+		.proc_handler	= &proc_dointvec_minmax,
-+		.extra1		= &zero,
-+		.extra2		= &two,
-+	},
-+#endif
- #if defined(CONFIG_S390) && defined(CONFIG_SMP)
- 	{
- 		.procname	= "spin_retry",
-diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
-index 4a66725b1d4a..cb80ed5c1f5c 100644
---- a/kernel/time/hrtimer.c
-+++ b/kernel/time/hrtimer.c
-@@ -1940,8 +1940,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
- 	int ret = 0;
- 	u64 slack;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	slack = current->timer_slack_ns;
- 	if (dl_task(current) || rt_task(current))
-+#endif
- 		slack = 0;
- 
- 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
-diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
-index 517be7fd175e..de3afe8e0800 100644
---- a/kernel/time/posix-cpu-timers.c
-+++ b/kernel/time/posix-cpu-timers.c
-@@ -216,7 +216,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
- 	u64 stime, utime;
- 
- 	task_cputime(p, &utime, &stime);
--	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
-+	store_samples(samples, stime, utime, tsk_seruntime(p));
- }
- 
- static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
-@@ -801,6 +801,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
- 	}
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void check_dl_overrun(struct task_struct *tsk)
- {
- 	if (tsk->dl.dl_overrun) {
-@@ -808,6 +809,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
- 		__group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
- 	}
- }
-+#endif
- 
- static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
- {
-@@ -835,8 +837,10 @@ static void check_thread_timers(struct task_struct *tsk,
- 	u64 samples[CPUCLOCK_MAX];
- 	unsigned long soft;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk))
- 		check_dl_overrun(tsk);
-+#endif
- 
- 	if (expiry_cache_is_inactive(pct))
- 		return;
-@@ -850,7 +854,7 @@ static void check_thread_timers(struct task_struct *tsk,
- 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
- 	if (soft != RLIM_INFINITY) {
- 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
--		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
-+		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
- 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
- 
- 		/* At the hard limit, send SIGKILL. No further action. */
-@@ -1086,8 +1090,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
- 			return true;
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk) && tsk->dl.dl_overrun)
- 		return true;
-+#endif
- 
- 	return false;
- }
-diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
-index adf7ef194005..11c8f36e281b 100644
---- a/kernel/trace/trace_selftest.c
-+++ b/kernel/trace/trace_selftest.c
-@@ -1052,10 +1052,15 @@ static int trace_wakeup_test_thread(void *data)
- {
- 	/* Make this a -deadline thread */
- 	static const struct sched_attr attr = {
-+#ifdef CONFIG_SCHED_ALT
-+		/* No deadline on BMQ/PDS, use RR */
-+		.sched_policy = SCHED_RR,
-+#else
- 		.sched_policy = SCHED_DEADLINE,
- 		.sched_runtime = 100000ULL,
- 		.sched_deadline = 10000000ULL,
- 		.sched_period = 10000000ULL
-+#endif
- 	};
- 	struct wakeup_test_data *x = data;
- 
---- a/kernel/sched/alt_core.c	2021-11-18 18:58:14.290182408 -0500
-+++ b/kernel/sched/alt_core.c	2021-11-18 18:58:54.870593883 -0500
-@@ -2762,7 +2762,7 @@ int sched_fork(unsigned long clone_flags
- 	return 0;
- }
- 
--void sched_post_fork(struct task_struct *p) {}
-+void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs) {}
- 
- #ifdef CONFIG_SCHEDSTATS
- 
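
A note on the PDS hunks above, kernel/sched/pds.h in particular: deadlines are tracked in units of rq->clock >> sched_timeslice_shift, and sched_prio2idx()/sched_idx2prio() rotate the normal-priority range around a ring of run-queue list heads as rq->time_edge advances. The stand-alone harness below sketches only that rotation. The use of bitmap[2] in update_rq_time_edge() suggests the normal range starts at bit 128 on 64-bit builds, so MIN_NORMAL_PRIO = 128 and NORMAL_PRIO_NUM = 64 are plausible values, but treat both constants, and the harness itself, as assumptions rather than patch code; the RT and IDLE special cases of the real helpers are omitted.

/*
 * Stand-alone illustration (not kernel code) of the prio<->idx
 * rotation in the pds.h hunk above.  Constants are assumptions.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NORMAL_PRIO_NUM		64	/* assumed; must be a power of two */
#define MIN_NORMAL_PRIO		128	/* assumed first non-RT slot */
#define NORMAL_PRIO_MOD(x)	((x) & (NORMAL_PRIO_NUM - 1))

/* mirrors sched_prio2idx(): rotate a stable priority by time_edge */
static int prio2idx(int prio, uint64_t time_edge)
{
	return MIN_NORMAL_PRIO +
		NORMAL_PRIO_MOD((prio - MIN_NORMAL_PRIO) + time_edge);
}

/* mirrors sched_idx2prio(): undo the rotation to recover the priority */
static int idx2prio(int idx, uint64_t time_edge)
{
	return MIN_NORMAL_PRIO +
		NORMAL_PRIO_MOD((idx - MIN_NORMAL_PRIO) + NORMAL_PRIO_NUM -
				NORMAL_PRIO_MOD(time_edge));
}

int main(void)
{
	/* round-trip every normal slot at a few arbitrary edges */
	for (uint64_t edge = 0; edge < 1024; edge += 7)
		for (int p = MIN_NORMAL_PRIO;
		     p < MIN_NORMAL_PRIO + NORMAL_PRIO_NUM; p++)
			assert(idx2prio(prio2idx(p, edge), edge) == p);

	puts("prio<->idx rotation round-trips at every time_edge");
	return 0;
}

The invariant the asserts exercise is the one update_rq_time_edge() preserves when it splices expired buckets forward: a queued task's slot can always be mapped back to the same stable priority, no matter how far time_edge has advanced.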

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
deleted file mode 100644
index d449eec4..00000000
--- a/5021_BMQ-and-PDS-gentoo-defaults.patch
+++ /dev/null
@@ -1,13 +0,0 @@
---- a/init/Kconfig	2021-04-27 07:38:30.556467045 -0400
-+++ b/init/Kconfig	2021-04-27 07:39:32.956412800 -0400
-@@ -780,8 +780,9 @@ config GENERIC_SCHED_CLOCK
- menu "Scheduler features"
- 
- menuconfig SCHED_ALT
-+	depends on X86_64
- 	bool "Alternative CPU Schedulers"
--	default y
-+	default n
- 	help
- 	  This feature enables alternative CPU schedulers.
- 
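
One more usage note, this time on the kernel/sysctl.c hunk earlier in the diff: under CONFIG_SCHED_ALT it registers a "yield_type" entry in kern_table, bounded to the range 0..2 by extra1/extra2. kern_table entries surface under /proc/sys/kernel/, so the knob should appear as /proc/sys/kernel/yield_type (or `sysctl kernel.yield_type`). The helper below is an illustrative sketch of poking it from user space, not part of the patch; what each value actually does is decided by the consumers of sched_yield_type, which this diff does not include.

/*
 * Illustrative user-space toggle for the yield_type sysctl added by
 * the kernel/sysctl.c hunk above.  Assumes the BMQ/PDS patch is
 * applied and CONFIG_SCHED_ALT is enabled; otherwise the proc file
 * will not exist.
 */
#include <stdio.h>
#include <stdlib.h>

static const char path[] = "/proc/sys/kernel/yield_type";

int main(int argc, char **argv)
{
	FILE *f;
	int val;

	if (argc == 2) {			/* usage: yield_type <0|1|2> */
		f = fopen(path, "w");
		if (!f) { perror(path); return 1; }
		fprintf(f, "%d\n", atoi(argv[1]));
		fclose(f);
	}

	f = fopen(path, "r");			/* read back the current value */
	if (!f) { perror(path); return 1; }
	if (fscanf(f, "%d", &val) == 1)
		printf("yield_type = %d\n", val);
	fclose(f);
	return 0;
}

Writes outside 0..2 are rejected by proc_dointvec_minmax, which is exactly what the extra1/extra2 bounds in the hunk are for.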

